Lettres de cachet
Lettres de cachet were letters signed by the king of France, countersigned by one of his ministers, and closed with the royal seal, or "cachet". They contained orders directly from the king, often to enforce arbitrary actions and judgments that could not be appealed.
In the case of organized bodies, "lettres de cachet" were issued for the purpose of preventing assembly or accomplishing some other definite act. The provincial estates were convoked (called to assembly) in this manner, and it was by a "lettre de cachet" (in this case, a "lettre de jussion"), or by appearing in person in a "lit de justice", that the king ordered a "parlement" to register a law despite that "parlement"'s refusal to pass it.
The best-known "lettres de cachet", however, were penal, by which a subject was imprisoned without trial and without an opportunity of defense (after inquiry and due diligence by the "lieutenant de police") in a state prison or an ordinary jail, confined in a convent or the General Hospital of Paris, transported to the colonies, or expelled to another part of the realm, or from the realm altogether. The "lettres" were mainly used against drunkards, troublemakers, prostitutes, squanderers of family fortune, or insane persons. The wealthy sometimes petitioned for such "lettres" to dispose of inconvenient individuals, especially to prevent unequal marriages (nobles with commoners) or to prevent a scandal (a "lettre" could prevent court cases that might otherwise dishonour a family).
In this respect, the "lettres de cachet" were a prominent symbol of the abuses of the "ancien régime" monarchy, and as such were suppressed during the French Revolution. In 1789 and 1790, all cases were revised by a commission which confirmed most of the sentences. Historian Claude Quétel has interpreted these confirmations as indicating that the Lettres were not as arbitrary and unjust as they have been represented after the Revolution, and he hence speaks of a "Légende noire".
The power to issue "lettres de cachet" was a royal privilege recognized by the French monarchic civil law that developed during the 13th century, as the Capetian monarchy overcame its initial distrust of Roman law. The principle can be traced to a maxim which furnished a text of the "Pandects" of Justinian: in their Latin version, "Rex solutus est a legibus", or "The king is released from the laws." "The French legal scholars interpreted the imperial office of the Justinian code in a generic way and arrived at the conclusion that every 'king is an emperor in his own kingdom,' that is, he possesses the prerogatives of legal absolutism that the "Corpus Juris Civilis" attributes to the Roman emperor."
This meant that when the king intervened directly, he could decide without heeding the laws, and even contrary to the laws. This was an early conception, and in early times the order in question was simply verbal; some letters patent of Henry III of France in 1576 state that François de Montmorency was "prisoner in our castle of the Bastille in Paris by verbal command" of the late king Charles IX.
In the 14th century the principle was introduced that the order should be written, and hence arose the "lettre de cachet". The "lettre de cachet" belonged to the class of "lettres closes", as opposed to "lettres patentes", which contained the expression of the legal and permanent will of the king, and had to be furnished with the seal of state affixed by the chancellor.
The "lettres de cachet", on the contrary, were signed simply by a secretary of state for the king; they bore merely the imprint of the king's privy seal, from which circumstance they were often called, in the 14th and 15th centuries, "lettres de petit signet" or "lettres de petit cachet", and were entirely exempt from the control of the chancellor.
While serving the government as a silent weapon against political adversaries or controversial writers and as a means of punishing culprits of high birth without the scandal of a lawsuit, the "lettres de cachet" had many other uses. They were employed by the police in dealing with prostitutes, and on their authority lunatics were shut up in hospitals and sometimes in prisons.
They were also often used by heads of families as a means of correction, for example, for protecting the family honour from the disorderly or criminal conduct of sons. The case of the Marquis de Sade (imprisoned 1777–1790 under a "lettre de cachet" obtained by his wealthy and influential mother-in-law) is a prominent example. Wives, too, took advantage of them to curb the profligacy of husbands and vice versa.
In reality, the secretary of state had a delegation and could issue them at his own discretion, and in most cases the king was unaware of their issue. In the 18th century it is certain that the letters were often issued blank, i.e. without containing the name of the person against whom they were directed; the recipient, or mandatary, filled in the name in order to make the letter effective.
Protests against the "lettres de cachet" were made continually by the "parlement" of Paris and by the provincial "parlements", and also by the Estates-General. In 1648, during the Fronde, the sovereign courts of Paris, by their "Arrêt d'Union", procured their momentary suppression in a kind of charter of liberties which they imposed upon the crown, but which was short-lived.
It was not until the reign of Louis XVI that a reaction against the abuse became clearly perceptible. At the beginning of that reign Malesherbes during his short ministry endeavoured to infuse some measure of justice into the system, and in March 1784 the baron de Breteuil, a minister of the king's household, addressed a circular to the intendants and the lieutenant of police with a view to preventing the most serious abuses connected with the issue of "lettres de cachet".
The Comte de Mirabeau wrote a scathing indictment of "lettres de cachet" while imprisoned in the dungeon of Vincennes (by a "lettre de cachet" obtained by his father). The treatise was published after his liberation in 1782 under the title "Des Lettres de cachet et des prisons d'état" and was widely read throughout Europe.
Besides the Bastille, there were thirty prisons in Paris by 1779 in which a person could be detained without trial. Convents were used for the same purpose.
They were reported to have been openly sold, in the reign of Louis XV, by the mistress of one of his ministers.
In Paris, in 1779, the Cour des Aides demanded their suppression, and in March 1788 the Parlement of Paris made some exceedingly energetic remonstrances, which are important for the light they throw upon old French public law. The crown, however, did not decide to lay aside this weapon, and in a declaration to the States-General in the royal session of June 23, 1789 (art. 15) it did not renounce it absolutely.
"Lettres de cachet" were abolished after the French Revolution by the Constituent Assembly, but Napoleon reestablished their penal equivalent by a political measure in the decree of 8 March 1801 on the state prisons. This is all the more striking, given that Napoleon had pushed for measures ensuring the rule of law in the codes of laws adopted under his rule. This action was one of the acts brought up against him by the senatus-consulte of 3 April 1814, which pronounced his fall "considering that he has violated the constitutional laws by the decrees on the state prisons."
Lisbon
Lisbon (Portuguese: "Lisboa") is the capital and the largest city of Portugal, with an estimated population of 505,526 within its administrative limits in an area of 100.05 km2. Lisbon's urban area extends beyond the city's administrative limits with a population of around 2.8 million people, being the 10th-most populous urban area in the European Union. About 3 million people live in the Lisbon metropolitan area, which represents approximately 27% of the country's population. It is mainland Europe's westernmost capital city and the only one along the Atlantic coast. Lisbon lies in the western Iberian Peninsula on the Atlantic Ocean and the River Tagus. The westernmost portions of its metro area, the Portuguese Riviera, form the westernmost point of Continental Europe, culminating at Cabo da Roca.
Lisbon is recognised as an alpha-level global city because of its importance in finance, commerce, media, entertainment, arts, international trade, education and tourism. Lisbon is one of two Portuguese cities (alongside Porto) to be recognised as a global city. It is one of the major economic centres on the continent, with a growing financial sector and one of the largest container ports on Europe's Atlantic coast. Additionally, Humberto Delgado Airport served 29 million passengers in 2018, being the busiest airport in Portugal, the 3rd busiest in the Iberian Peninsula and the 20th busiest in Europe. The motorway network and the high-speed rail system of Alfa Pendular links the main cities of Portugal to Lisbon. The city is the 9th-most-visited city in Southern Europe, after Rome, Istanbul, Barcelona, Milan, Venice, Madrid, Florence and Athens, with 3,320,300 tourists in 2017. The Lisbon region has a higher GDP PPP per capita than any other region in Portugal. Its GDP amounts to US$96.3 billion and thus $32,434 per capita. The city occupies the 40th place of highest gross earnings in the world. Most of the headquarters of multinational corporations in Portugal are located in the Lisbon area. It is also the political centre of the country, as its seat of government and residence of the head of state.
Lisbon is one of the oldest cities in the world, and the second-oldest European capital city (after Athens), predating other modern European capitals by centuries. Julius Caesar made it a municipium called "Felicitas Julia", adding to the name "Olissipo". Ruled by a series of Germanic tribes from the 5th century, it was captured by the Moors in the 8th century. In 1147, the Crusaders under Afonso Henriques reconquered the city and since then it has been the political, economic and cultural center of Portugal.
Lisbon's name may have been derived from Proto-Celtic or Celtic "Olisippo", "Lissoppo", or a similar name which other visiting peoples like the ancient Phoenicians, Greeks and Romans adapted accordingly, such as the pre-Roman appellation for the Tagus River, "Lisso" or "Lucio". Classical authors writing in Latin and Greek, including Strabo, Solinus, and Martianus Capella, referred to popular legends that the city of Lisbon was founded by the mythical hero Ulysses (Odysseus). Lisbon's name was written "Ulyssippo" in Latin by the geographer Pomponius Mela, a native of Hispania. It was later referred to as "Olisippo" by Pliny the Elder and by the Greeks as "Olissipo" (Ὀλισσιπών) or "Olissipona" (Ὀλισσιπόνα).
Another claim repeated in non-academic literature is that the name of Lisbon could be traced back to Phoenician times, referring to a supposedly Phoenician term "Alis-Ubo", meaning "safe harbour". Although modern archaeological excavations show a Phoenician presence at this location since 1200 BC, this folk etymology has no historical credibility.
Lisbon's name is commonly abbreviated as "LX" or "Lx", originating in an antiquated spelling of Lisbon as "Lixbõa". While the old spelling has since been completely dropped from usage and goes against modern language standards, the abbreviation is still commonly used.
During the Neolithic period, the region was inhabited by Pre-Celtic tribes, who built religious and funerary monuments, megaliths, dolmens and menhirs, which still survive in areas on the periphery of Lisbon. The Indo-European Celts invaded in the 1st millennium BC, mixing with the Pre-Indo-European population, thus giving rise to Celtic-speaking local tribes such as the Cempsi.
Although the first fortifications on Lisbon's Castelo hill are known to be no older than the 2nd century BC, recent archaeological finds have shown that Iron Age people occupied the site from the 8th to 6th centuries BC. This indigenous settlement maintained commercial relations with the Phoenicians, which would account for the recent findings of Phoenician pottery and other material objects. Archaeological excavations made near the Castle of São Jorge ("Castelo de São Jorge") and Lisbon Cathedral indicate a Phoenician presence at this location since 1200 BC, and it can be stated with confidence that a Phoenician trading post stood on a site now the centre of the present city, on the southern slope of Castle hill. The sheltered harbour in the Tagus River estuary was an ideal spot for an Iberian settlement and would have provided a secure harbour for unloading and provisioning Phoenician ships. The Tagus settlement was an important centre of commercial trade with the inland tribes, providing an outlet for the valuable metals, salt and salted-fish they collected, and for the sale of the Lusitanian horses renowned in antiquity.
According to a persistent legend, the location was named for the mythical Ulysses, who founded the city when he sailed westward to the ends of the known world.
Following the defeat of Hannibal in 202 BC during the Punic wars, the Romans determined to deprive Carthage of its most valuable possession: Hispania (the Iberian Peninsula). The defeat of Carthaginian forces by Scipio Africanus in Eastern Hispania allowed the pacification of the west, led by Consul Decimus Junius Brutus Callaicus. Decimus obtained the alliance of Olissipo (which sent men to fight alongside the Roman Legions against the northwestern Celtic tribes) by integrating it into the empire, as the "Municipium Cives Romanorum Felicitas Julia". Local authorities were granted self-rule over a territory that extended ; exempt from taxes, its citizens were given the privileges of Roman citizenship, and it was then integrated with the Roman province of Lusitania (whose capital was Emerita Augusta).
Lusitanian raids and rebellions during Roman occupation required the construction of a wall around the settlement. During Augustus' reign, the Romans also built a great theatre; the Cassian Baths (underneath "Rua da Prata"); temples to Jupiter, Diana, Cybele, Tethys and Idea Phrygiae (an uncommon cult from Asia Minor), in addition to temples to the Emperor; a large necropolis under "Praça da Figueira"; a large forum and other buildings such as insulae (multi-storied apartment buildings) in the area between Castle Hill and the historic city core. Many of these ruins were first unearthed during the mid-18th century (when the recent discovery of Pompeii made Roman archaeology fashionable among Europe's upper classes).
The city prospered as piracy was eliminated and technological advances were introduced, consequently "Felicitas Julia" became a center of trade with the Roman provinces of Britannia (particularly Cornwall) and the Rhine. Economically strong, Olissipo was known for its garum (a fish sauce highly prized by the elites of the empire and exported in amphorae to Rome), wine, salt, and horse-breeding, while Roman culture permeated the hinterland. The city was connected by a broad road to Western Hispania's two other large cities, Bracara Augusta in the province of Tarraconensis (Portuguese Braga), and Emerita Augusta, the capital of Lusitania. The city was ruled by an oligarchical council dominated by two families, the Julii and the Cassiae, although regional authority was administered by the Roman Governor of Emerita or directly by Emperor Tiberius. Among the majority of Latin speakers lived a large minority of Greek traders and slaves.
Olissipo, like most great cities in the Western Empire, was a center for the dissemination of Christianity. Its first attested Bishop was Potamius (c. 356), and there were several martyrs during the period of persecution of the Christians: Verissimus, Maxima, and Julia are the most significant examples. By the time of the Fall of Rome, Olissipo had become a notable Christian center.
Following the disintegration of the Western Roman Empire there were barbarian invasions; between 409 and 429 the city was occupied successively by Sarmatians, Alans and Vandals. The Germanic Suebi, who established a kingdom in Gallaecia (modern Galicia and northern Portugal), with its capital in "Bracara Augusta", also controlled the region of Lisbon until 585. In 585, the Suebi Kingdom was integrated into the Germanic Visigothic Kingdom of Toledo, which comprised all of the Iberian Peninsula: Lisbon was then called "Ulishbona".
On 6 August 711, Lisbon was taken by Muslim forces. These conquerors, who were mostly Berbers and Arabs from North Africa and the Middle East, built many mosques and houses, rebuilt the city wall (known as the "Cerca Moura") and established administrative control, while permitting the diverse population (Muladi, Mozarabs, Berbers, Arabs, Jews, "Zanj" and "Saqaliba") to maintain their socio-cultural lifestyles. Mozarabic was the native language spoken by most of the Christian population, although Arabic was widely known and spoken by all religious communities. Islam was the official religion practised by the Arabs, Berbers, Zanj, Saqaliba and Muladi (muwalladun).
The Muslim influence is still visible in the Alfama district, an old quarter of Lisbon that survived the 1755 Lisbon earthquake: many place-names are derived from Arabic, and the name of the Alfama (the oldest existing district of Lisbon) was derived from the Arabic "al-hamma".
For a brief time Lisbon was an independent Muslim kingdom known as the Taifa of Lisbon (1022–1094), before being conquered by the larger Taifa of Badajoz.
In 1108, Lisbon was raided and occupied by Norwegian crusaders led by Sigurd I on their way to the Holy Land as part of the Norwegian Crusade, and it remained under crusader control for three years. It was taken by the Moorish Almoravids in 1111.
In 1147, as part of the "Reconquista", crusader knights led by Afonso I of Portugal besieged and reconquered Lisbon. The city, with around 154,000 residents at the time, was returned to Christian rule. The reconquest of Portugal and re-establishment of Christianity is one of the most significant events in Lisbon's history, described in the chronicle "Expugnatione Lyxbonensi", which describes, among other incidents, how the local bishop was killed by the crusaders and the city's residents prayed to the Virgin Mary as it happened. Some of the Muslim residents converted to Roman Catholicism and most of those who did not convert fled to other parts of the Islamic world, primarily Muslim Spain and North Africa. All mosques were either completely destroyed or converted into churches. As a result of the end of Muslim rule, spoken Arabic quickly lost its place in the everyday life of the city and disappeared altogether.
With its central location, Lisbon became the capital city of the new Portuguese territory in 1255.
The first Portuguese university was founded in Lisbon in 1290 by King Denis I; for many years the "Studium Generale" ("General Study") was transferred intermittently to Coimbra, where it was installed permanently in the 16th century as the University of Coimbra.
In 1384, the city was besieged by King Juan I of Castile, as a part of the ongoing 1383–1385 Crisis. The result of the siege was a victory for the Portuguese led by Nuno Álvares Pereira.
During the last centuries of the Middle Ages, the city expanded substantially and became an important trading post with both Northern European and Mediterranean cities.
Most of the Portuguese expeditions of the Age of Discovery left Lisbon during the period from the end of the 15th century to the beginning of the 17th century, including Vasco da Gama's expedition to India in 1498. In 1506, 3,000 Jews were massacred in Lisbon. The 16th century was Lisbon's golden era: the city was the European hub of commerce between Africa, India, the Far East and later, Brazil, and acquired great riches by exploiting the trade in spices, slaves, sugar, textiles and other goods. This period saw the rise of the exuberant Manueline style in architecture, which left its mark in many 16th-century monuments (including Lisbon's Belém Tower and Jerónimos Monastery, which were declared UNESCO World Heritage Sites). A description of Lisbon in the 16th century was written by Damião de Góis and published in 1554.
The succession crisis of 1580 initiated a sixty-year period of dual monarchy in Portugal and Spain under the Spanish Habsburgs. This is referred to as the "Philippine Dominion" ("Domínio Filipino"), since all three Spanish kings during that period were called Philip ("Filipe"). In 1589 Lisbon was the target of an incursion by the English Armada led by Francis Drake, while Queen Elizabeth supported a Portuguese pretender in Antonio, Prior of Crato, but support for Crato was lacking and the expedition was a failure. The Portuguese Restoration War, which began with a coup d'état organised by the nobility and bourgeoisie in Lisbon and executed on 1 December 1640, restored Portuguese independence. The period from 1640 to 1668 was marked by periodic skirmishes between Portugal and Spain, as well as short episodes of more serious warfare, until the Treaty of Lisbon was signed in 1668.
In the early 18th century, gold from Brazil allowed King John V to sponsor the building of several Baroque churches and theatres in the city. Prior to the 18th century, Lisbon had experienced several significant earthquakes – eight in the 14th century, five in the 16th century (including the 1531 earthquake that destroyed 1,500 houses and the 1597 earthquake in which three streets vanished), and three in the 17th century.
On 1 November 1755, the city was destroyed by another devastating earthquake, which killed an estimated 30,000 to 40,000 Lisbon residents of a population estimated at between 200,000 and 275,000, and destroyed 85 percent of the city's structures. Among several important buildings of the city, the Ribeira Palace and the Hospital Real de Todos os Santos were lost. In coastal areas, such as Peniche, situated about north of Lisbon, many people were killed by the following tsunami.
By 1755, Lisbon was one of the largest cities in Europe; the catastrophic event shocked the whole of Europe and left a deep impression on its collective psyche. Voltaire wrote a long poem, "Poème sur le désastre de Lisbonne", shortly after the quake, and mentioned it in his 1759 novel "Candide" (indeed, many argue that this critique of optimism was inspired by that earthquake). Oliver Wendell Holmes, Sr. also mentions it in his 1857 poem, "The Deacon's Masterpiece, or The Wonderful One-Hoss Shay."
After the 1755 earthquake, the city was rebuilt largely according to the plans of Prime Minister Sebastião José de Carvalho e Melo, the 1st Marquis of Pombal; the lower town began to be known as the "Baixa Pombalina" (Pombaline central district). Instead of rebuilding the medieval town, Pombal decided to demolish what remained after the earthquake and rebuild the city centre in accordance with principles of modern urban design. It was reconstructed in an open rectangular plan with two great squares: the "Praça do Rossio" and the "Praça do Comércio". The first, the central commercial district, is the traditional gathering place of the city and the location of the older cafés, theatres and restaurants; the second became the city's main access to the River Tagus and point of departure and arrival for seagoing vessels, adorned by a triumphal arch (1873) and monument to King Joseph I.
In the first years of the 19th century, Portugal was invaded by the troops of Napoléon Bonaparte, forcing Queen Maria I and Prince-Regent John (future John VI) to flee temporarily to Brazil. By the time the new King returned to Lisbon, many of the buildings and properties were pillaged, sacked or destroyed by the invaders.
During the 19th century, the Liberal movement introduced new changes into the urban landscape. The principal areas were in the "Baixa" and along the "Chiado" district, where shops, tobacconists' shops, cafés, bookstores, clubs and theatres proliferated. The development of industry and commerce determined the growth of the city, seeing the transformation of the Passeio Público, a Pombaline era park, into the Avenida da Liberdade, as the city grew farther from the Tagus.
Lisbon was the site of the regicide of Carlos I of Portugal in 1908, an event which culminated two years later in the establishment of the First Republic.
The city refounded its university in 1911 after centuries of inactivity in Lisbon, incorporating reformed former colleges and other non-university higher education schools of the city (such as the "Escola Politécnica" – now "Faculdade de Ciências"). Today there are two public universities in the city (University of Lisbon and New University of Lisbon), a public university institute (ISCTE - Lisbon University Institute) and a polytechnic institute (IPL – Instituto Politécnico de Lisboa).
During World War II, Lisbon was one of the very few neutral, open European Atlantic ports, a major gateway for refugees to the U.S. and a haven for spies. More than 100,000 refugees were able to flee Nazi Germany via Lisbon.
During the Estado Novo regime (1926–1974), Lisbon was expanded at the cost of other districts within the country, resulting in nationalist and monumental projects. New residential and public developments were constructed; the zone of Belém was modified for the 1940 Portuguese Exhibition, while along the periphery new districts appeared to house the growing population. The inauguration of the bridge over the Tagus allowed rapid connection between both sides of the river.
Lisbon was the site of three revolutions in the 20th century. The first, the 5 October 1910 revolution, brought an end to the Portuguese monarchy and established the highly unstable and corrupt Portuguese First Republic. The 6 June 1926 revolution would see the end of that first republic and firmly establish the Estado Novo, or the Portuguese Second Republic, as the ruling regime.
The Carnation Revolution, which took place on 25 April 1974, ended the right-wing Estado Novo regime and reformed the country to become as it is today, the Portuguese Third Republic.
In the 1990s, many of the districts were renovated and projects in the historic quarters were established to modernise those areas, for instance, architectural and patrimonial buildings were renovated, the northern margin of the Tagus was re-purposed for leisure and residential use, the Vasco da Gama Bridge was constructed and the eastern part of the municipality was re-purposed for Expo '98 to commemorate the 500th anniversary of Vasco da Gama's sea voyage to India, a voyage that would bring immense riches to Lisbon and cause many of Lisbon's landmarks to be built.
In 1988, a fire in the historical district of Chiado destroyed many 18th-century Pombaline style buildings. A series of restoration works has brought the area back to its former self and made it an upscale shopping district.
The Lisbon Agenda was a European Union agreement on measures to revitalise the EU economy, signed in Lisbon in March 2000. In October 2007 Lisbon hosted the 2007 EU Summit, where an agreement was reached regarding a new EU governance model. The resulting Treaty of Lisbon was signed on 13 December 2007 and came into force on 1 December 2009.
Lisbon has been the site for many international events and programmes. In 1994, Lisbon was the European Capital of Culture. On 3 November 2005, Lisbon hosted the MTV European Music Awards. On 7 July 2007, Lisbon held the ceremony of the "New 7 Wonders Of The World" election, in the Luz Stadium, with live transmission for millions of people all over the world. Every two years, Lisbon hosts the Rock in Rio Lisboa Music Festival, one of the largest in the world.
Lisbon hosted the "NATO summit" (19–20 November 2010), a summit meeting that is regarded as a periodic opportunity for Heads of State and Heads of Government of NATO member states to evaluate and provide strategic direction for Alliance activities. The city hosts the Web Summit and is the head office for the Group of Seven Plus (G7+). In 2018 it hosted the Eurovision Song Contest for the first time as well as the Michelin Gala.
Lisbon is located at , situated at the mouth of the Tagus River and is the westernmost capital of a mainland European country.
The westernmost part of Lisbon is occupied by the Monsanto Forest Park, an urban park, one of the largest in Europe, occupying 10% of the municipality.
The city occupies an area of , and its city boundaries, unlike those of most major cities, coincide with those of the municipality. The rest of the urbanised area of the Lisbon urban area, known generically as Greater Lisbon () includes several administratively defined cities and municipalities, in the north bank of the Tagus River. The larger Lisbon metropolitan area includes the Setúbal Peninsula to the south.
Lisbon has a Mediterranean climate (Köppen: "Csa") with mild, rainy winters and warm to hot, dry summers. The average annual temperature is , during the day and at night.
In the coldest month – January – the highest temperature during the day typically ranges from , the lowest temperature at night ranges from and the average sea temperature is . In the warmest month – August – the highest temperature during the day typically ranges from , the lowest temperature at night ranges from and the average sea temperature is .
Among European cities with a population above 500,000, Lisbon ranks among those with the warmest winters (below Valencia or Málaga) and the mildest night time temperatures, with an average of in the coldest month, and in the warmest month. The minimum temperature recorded in Lisbon was in February 1956 and in January 1985. The maximum temperature recorded in Lisbon was on 4 August 2018.
Sunshine hours are 2,806 per year, from an average of 4.6 hours of sunshine duration per day in December to an average of 11.4 hours of sunshine duration per day in July. The annual average rainfall is , with November being the wettest month.
The municipality of Lisbon included 53 "freguesias" (civil parishes) until November 2012. A new law ("Lei n.º 56/2012") reduced the number of "freguesias" to the following 24:
Locally, Lisbon's inhabitants may commonly refer to the spaces of Lisbon in terms of historic "Bairros de Lisboa" (neighbourhoods). These communities have no clearly defined boundaries and represent distinctive quarters of the city that have in common a historical culture, similar living standards, and identifiable architectural landmarks, as exemplified by the "Bairro Alto", "Alfama", "Chiado", and so forth.
Although Alcântara is quite central today, it was once a mere suburb of Lisbon, comprising mostly farms and country estates of the nobility with their palaces. In the 16th century, there was a brook there which the nobles used to promenade in their boats. During the late 19th century, Alcântara became a popular industrial area, with lots of small factories and warehouses.
In the early 1990s, Alcântara began to attract youth because of the number of pubs and discothèques. This was mainly due to its outer area of mostly commercial buildings, which acted as barriers to the noise-generating nightlife (which acted as a buffer to the residential communities surrounding it). In the meantime, some of these areas began to become gentrified, attracting loft developments and new flats, which have profited from its river views and central location.
The riverfront of Alcântara is known for its nightclubs and bars. The area is commonly known as "docas" (docks), since most of the clubs and bars are housed in converted dock warehouses.
Alfama, the oldest district of Lisbon, spreads down the southern slope from the Castle of São Jorge to the River Tagus. Its name, derived from the Arabic "Al-hamma", means fountains or baths. During the Islamic invasion of Iberia, the Alfama constituted the largest part of the city, extending west to the Baixa neighbourhood. Increasingly, the Alfama became inhabited by fishermen and the poor: its fame as a poor neighbourhood continues to this day. While the 1755 Lisbon earthquake caused considerable damage throughout the capital, the Alfama survived with little damage, thanks to its compact labyrinth of narrow streets and small squares.
It is a historical quarter of mixed-use buildings occupied by Fado bars, restaurants, and homes with small shops downstairs. Modernising trends have invigorated the district: old houses have been re-purposed or remodeled, while new buildings have been constructed. Fado, the typically Portuguese style of melancholy music, is common (but not obligatory) in the restaurants of the district.
The Mouraria, or Moorish quarter, is one of the most traditional neighbourhoods of Lisbon, although most of its old buildings were demolished by the Estado Novo between the 1930s and the 1970s. It takes its name from the fact that, after the reconquest of Lisbon, the Muslims who remained were confined to this part of the city. The Jews, in turn, were confined to three neighbourhoods called "Judiarias".
Bairro Alto (literally "the upper quarter" in Portuguese) is an area of central Lisbon that functions as a residential, shopping and entertainment district; it is the centre of the Portuguese capital's nightlife, attracting hipster youth and members of various music subcultures. Lisbon's punk, gay, metal, goth, hip hop and reggae scenes all find a home in the "Bairro", with its many clubs and bars that cater to them. The crowds in the Bairro Alto are a multicultural mix of people representing a broad cross-section of modern Portuguese society, many of them entertainment seekers and devotees of music genres outside the mainstream. Fado, Portugal's national music, still survives in the midst of the new nightlife.
The heart of the city is the "Baixa" or city centre; the Pombaline Baixa is an elegant district, primarily constructed after the 1755 Lisbon earthquake, taking its name from its benefactor, Sebastião José de Carvalho e Melo, 1st Marquis of Pombal, who was the minister of Joseph I of Portugal (1750–1777) and a key figure during the Portuguese Enlightenment. Following the 1755 disaster, Pombal took the lead in rebuilding Lisbon, imposing strict conditions and guidelines on the construction of the city, and transforming the organic street plan that characterised the district before the earthquake into its current grid pattern. As a result, the Pombaline Baixa is one of the first examples of earthquake-resistant construction. Architectural models were tested by having troops march around them to simulate an earthquake. Notable features of Pombaline structures include the "Pombaline cage", a symmetrical wood-lattice framework aimed at distributing earthquake forces, and inter-terrace walls that were built higher than roof timbers to inhibit the spread of fires.
The parish of Beato stands out for the new cultural dynamism it has experienced in recent years. The manufacturing districts and the industrial facilities by the riverside docks are currently the place of choice for contemporary art galleries, iconic bars, and gourmet restaurants that animate the streets. This reality has not gone unnoticed by the national press: Visão, Time Out, and Jornal de Negócios have all taken notice of this parish, which hides treasures such as the National Museum of the Azulejo and the Palácio do Grilo.
Belém is famous as the place from which many of the great Portuguese explorers set off on their voyages of discovery. In particular, it is the place from which Vasco da Gama departed for India in 1497 and Pedro Álvares Cabral departed for Brazil in 1500. The district features the 17th–18th-century Belém Palace, a former royal residence now occupied by the President of Portugal, and the Ajuda Palace, begun in 1802 but never completed.
Perhaps Belém's most famous feature is its tower, Torre de Belém, whose image is much used by Lisbon's tourist board. The tower was built as a fortified lighthouse late in the reign of Dom Manuel I (1515–1520) to guard the entrance to the port. It stood on a little island on the right side of the Tagus, surrounded by water. Belém's other major historical building is the "Mosteiro dos Jerónimos" (Jerónimos Monastery), which the Torre de Belém was built partly to defend. Belém's most notable modern feature is the Padrão dos Descobrimentos (Monument to the Discoveries), built for the Portuguese World Fair in 1940. In the heart of Belém is the "Praça do Império": gardens centred upon a large fountain, laid out during World War II. To the west of the gardens lies the "Centro Cultural de Belém". Belém is one of the most visited Lisbon districts, and it is home to the Estádio do Restelo, the stadium of Belenenses.
The Chiado is a traditional shopping area that mixes old and modern commercial establishments, concentrated especially on the Rua do Carmo and the Rua Garrett. Locals as well as tourists visit the Chiado to buy books, clothing and pottery, as well as to have a cup of coffee. The most famous café of the Chiado is "A Brasileira", renowned for having had the poet Fernando Pessoa among its customers. The Chiado is also an important cultural area, with several museums and theatres, including the opera house. Several buildings of the Chiado were destroyed in a fire in 1988, an event that deeply shocked the country. Thanks to a renovation project that lasted more than 10 years, coordinated by the celebrated architect Siza Vieira, the affected area has now virtually recovered.
The ornate, late 18th-century Estrela Basilica is the main attraction of this district. The church, with its large dome, is located on a hill in what was at the time the western part of Lisbon and can be seen from great distances. The style is similar to that of the Mafra National Palace: late baroque and neoclassical. The façade has twin bell towers and includes statues of saints and some allegorical figures. São Bento Palace, the seat of the Portuguese parliament, and the official residences of the Prime Minister of Portugal and of the President of the Assembly of the Republic are also in this district. Also here is Estrela Park, a favourite with families, which offers exotic plants and trees, a duck pond, various sculptures, a children's playground, and many cultural events throughout the year, including outdoor cinema, markets, and music festivals.
Parque das Nações (Park of Nations) is the newest district in Lisbon; it emerged from an urban renewal program to host the 1998 World Exhibition of Lisbon, also known as Expo '98. The area underwent massive changes, giving Parque das Nações a futuristic look. As a lasting legacy of the exposition, the area has become another commercial and higher-end residential district of the city.
Central to the area is the Gare do Oriente (Orient railway station), one of the main transport hubs of Lisbon for trains, buses, taxis, and the metro. Its glass-and-steel columns are inspired by Gothic architecture, lending the whole structure a striking appearance, especially in sunlight or when illuminated at night. It was designed by the architect Santiago Calatrava, from Valencia, Spain. The Parque das Nações proper lies across the street.
The area is pedestrian-friendly with new buildings, restaurants, gardens, the Casino Lisbon, the FIL building (International Exhibition and Fair), the Camões Theatre and the "Oceanário de Lisboa" (Lisbon Oceanarium), which is the second largest in the world. The district's Altice Arena has become Lisbon's "jack-of-all-trades" performance arena. Seating 20,000, it has staged events from concerts to basketball tournaments.
Fernando Medina is the current and 77th Mayor of Lisbon.
The city of Lisbon is rich in architecture; Romanesque, Gothic, Manueline, Baroque, Modern and Postmodern constructions can be found all over Lisbon. The city is also crossed by historical boulevards and monuments along the main thoroughfares, particularly in the upper districts; notable among these are the "Avenida da Liberdade" (Avenue of Liberty), "Avenida Fontes Pereira de Melo", "Avenida Almirante Reis" and "Avenida da República" (Avenue of the Republic).
Lisbon is home to numerous prominent museums and art collections, from all around the world. The National Museum of Ancient Art, which has one of the largest art collections in the world, and the National Coach Museum, which has the world's largest collection of royal coaches and carriages, are the two most visited museums in the city. Other notable national museums include the National Museum of Archaeology, the Museum of Lisbon, the National Azulejo Museum, the National Museum of Contemporary Art, and the National Museum of Natural History & Science.
Prominent private museums and galleries include the Gulbenkian Museum (run by the Calouste Gulbenkian Foundation, one of the wealthiest foundations in the world), which houses one of the largest private collections of antiquities and art in the world; the Berardo Collection Museum, which houses the private collection of Portuguese billionaire Joe Berardo; the Museum of Art, Architecture and Technology; and the Museum of the Orient. Other popular museums include the Electricity Museum, the Ephemeral Museum, the Museu da Água, and the Museu Benfica, among many others.
Lisbon's Opera House, the "Teatro Nacional de São Carlos", hosts a relatively active cultural agenda, mainly in autumn and winter. Other important theatres and musical houses are the "Centro Cultural de Belém", the "Teatro Nacional D. Maria II", the Gulbenkian Foundation, and the "Teatro Camões".
The monument to "Christ the King" (Cristo-Rei) stands on the southern bank of the Tagus River, in Almada. With open arms, overlooking the whole city, it resembles the Corcovado monument in Rio de Janeiro, and was built after World War II, as a memorial of thanksgiving for Portugal's being spared the horrors and destruction of the war.
13 June is Lisbon's holiday in honour of the city's saint, Anthony of Lisbon (). Saint Anthony, also known as "Saint Anthony of Padua", was a Portuguese priest born to a wealthy Lisbon family, who was canonised and made Doctor of the Church after a life spent preaching to the poor. Although Lisbon's patron saint is Saint Vincent of Saragossa, whose remains are housed in the Sé Cathedral, there are no festivities associated with this saint.
Eduardo VII Park, the second largest park in the city after the "Parque Florestal de Monsanto" (Monsanto Forest Park), extends down the main avenue (Avenida da Liberdade), with many flowering plants and greenspaces, and includes the permanent collection of subtropical and tropical plants in the winter garden (). Originally named "Parque da Liberdade", it was renamed in honour of Edward VII of the United Kingdom, who visited Lisbon in 1903.
Lisbon is home every year to the Lisbon Gay & Lesbian Film Festival, the Lisboarte, the DocLisboa – Lisbon International Documentary Film Festival, the Festival Internacional de Máscaras e Comediantes, the Lisboa Mágica – Street Magic World Festival, the Monstra – Animated Film Festival, the Lisbon Book Fair, the Peixe em Lisboa – Lisbon Fish and Flavours, and many others.
Lisbon has two sites listed by UNESCO as a World Heritage Site: Belém Tower and Jerónimos Monastery. Furthermore, in 1994, Lisbon was the European Capital of Culture and, in 1998, organised the Expo '98 ("1998 Lisbon World Exposition").
Lisbon is also home to the Lisbon Architecture Triennial, the Moda Lisboa (Fashion Lisbon), ExperimentaDesign – Biennial of Design and LuzBoa – Biennial of Light.
In addition, the mosaic Portuguese pavement ("Calçada Portuguesa") was born in Lisbon, in the mid-1800s. The art has since spread to the rest of the Portuguese-speaking world. The city remains one of the most extensive examples of the technique, with nearly all walkways and even many streets created and maintained in this style.
In May 2018, the city hosted the 63rd edition of the Eurovision Song Contest, after the victory of Salvador Sobral with the song "Amar pelos dois" in Kiev on 13 May 2017.
The historical population of the city was around 35,000 in 1300 AD, rising to about 60,000 by 1400 AD and to 70,000 by 1500 AD. Between 1528 and 1590 the population grew from 70,000 to 120,000; it was about 150,000 in 1600 AD and almost 200,000 by 1700 AD.
The Lisbon metropolitan area incorporates two NUTS III (European statistical subdivisions): "Grande Lisboa" (Greater Lisbon), along the northern bank of the Tagus River, and "Península de Setúbal" (Setúbal Peninsula), along the southern bank. These two subdivisions together form the "Região de Lisboa" (Lisbon Region). The population density of the city itself is .
Lisbon has 552,700 inhabitants within its administrative limits, in an area of only 100.05 km2. Administratively defined cities that exist in the vicinity of the capital are in fact part of the metropolitan perimeter of Lisbon. The urban area has a population of 2,666,000 inhabitants, making it the eleventh-largest urban area in the European Union after Paris, London, the Ruhr area, Madrid, Milan, Barcelona, Berlin, Rome, Naples and Athens. The whole Lisbon metropolitan area has about 3 million inhabitants: 3,121,876 according to official government data. Other sources give similar figures: 2,797,612 according to the Organisation for Economic Co-operation and Development; 2,890,000 according to the United Nations Department of Economic and Social Affairs; 2,839,908 according to the European statistical office Eurostat; and 2,968,600 according to the Brookings Institution.
The Lisbon region is the wealthiest region in Portugal and is well above the European Union's GDP per capita average – it produces 45% of the Portuguese GDP. Lisbon's economy is based primarily on the tertiary sector. Most of the headquarters of multinationals operating in Portugal are concentrated in the Grande Lisboa subregion, especially in the Oeiras municipality. The Lisbon metropolitan area is heavily industrialised, especially the south bank of the Tagus river (Rio Tejo).
The Lisbon region is rapidly growing, with GDP (PPP) per capita calculated for each year as follows: €22,745 (2004) – €23,816 (2005) – €25,200 (2006) – €26,100 (2007). The Lisbon metropolitan area had a GDP amounting to $96.3 billion, and $32,434 per capita.
The country's chief seaport, featuring one of the largest and most sophisticated regional markets on the Iberian Peninsula, Lisbon and its heavily populated surroundings are also developing as an important financial centre and a dynamic technological hub. Automobile manufacturers have erected factories in the suburbs, for example, AutoEuropa.
Lisbon has the largest and most developed mass media sector of Portugal, and is home to several related companies ranging from leading television networks and radio stations to major newspapers.
The Euronext Lisbon stock exchange, part of the pan-European Euronext system together with the stock exchanges of Amsterdam, Brussels and Paris, has been tied to the New York Stock Exchange since 2007, forming the multinational NYSE Euronext group of stock exchanges.
Industry in the Lisbon area includes large oil-refining operations (the refineries are found just across the Tagus), textile mills, shipyards and fishing.
Before Portugal's sovereign debt crisis and the ensuing EU-IMF rescue plan, Lisbon expected to receive many state-funded investments during the 2010s, including a new airport, a new bridge, an expansion of the Lisbon Metro, the construction of a mega-hospital (or central hospital), two TGV lines to join Madrid, Porto, Vigo and the rest of Europe, the restoration of the main part of town (between the Marquês de Pombal roundabout and Terreiro do Paço), the creation of a large number of bike lanes, and the modernisation and renovation of various facilities.
Lisbon was the 18th most "livable city" in the world in 2015 according to lifestyle magazine "Monocle."
Tourism is also a significant industry; a 2018 report stated that the city receives an average of 4.5 million tourists per year. Hotel revenues alone generated €714.8 million in 2017, an increase of 18.7% over 2016.
"Lisboa" was elected the "World's Leading City Destination and World's Leading City Break Destination 2018".
The Lisbon Metro connects the city centre with the upper and eastern districts, and also reaches some suburbs that are part of the Lisbon metropolitan area, such as Amadora and Loures. It is the fastest way to get around the city and provides a good number of interchange stations with other types of transport. A journey from the Lisbon Airport station to the city centre takes roughly 25 minutes. As of 2018, the Lisbon Metro comprises four lines, identified by individual colours (blue, yellow, green and red), and 56 stations, with a total length of 44.2 km. Several expansion projects have been proposed, the most recent being the transformation of the Green Line into a circular line and the creation of two more stations (Santos and Estrela).
A traditional form of public transport in Lisbon is the tram. Introduced in 1901, electric trams were originally imported from the US, and called the "americanos". The earliest trams can still be seen in the Museu da Carris (the Public Transport Museum). Other than on the modern Line 15, the Lisbon tramway system still employs small (four wheel) vehicles of a design dating from the early twentieth century. These distinctive yellow trams are one of the tourist icons of modern Lisbon, and their size is well suited to the steep hills and narrow streets of the central city.
There are four commuter train lines departing from Lisbon: the Cascais, Sintra and Azambuja lines (operated by CP – Comboios de Portugal), as well as a fourth line to Setúbal (operated by Fertagus), which crosses the Tagus river via the 25 de Abril Bridge. The major railway stations are Santa Apolónia, Rossio, Gare do Oriente, Entrecampos, and Cais do Sodré.
The local bus service within Lisbon is operated by Carris.
There are other commuter bus services from the city (connecting cities outside Lisbon, and connecting these cities to Lisbon): Vimeca, Rodoviária de Lisboa, Transportes Sul do Tejo, Boa Viagem, Barraqueiro are the main ones, operating from different terminals in the city.
Lisbon is connected to its suburbs and throughout Portugal by an extensive motorway network. There are three circular motorways around the city; the 2ª Circular, the IC17 (CRIL), and the A9 (CREL).
The city is connected to the far side of the Tagus by two important bridges:
The foundations for a third bridge across the Tagus have already been laid, but the overall project has been postponed due to the economic crisis in Portugal and all of Europe.
Another way of crossing the river is by taking the ferry. The operator is Transtejo & Soflusa, which runs from different locations within the city: Cacilhas, Seixal, Montijo, Porto Brandão and Trafaria under the brand Transtejo and to Barreiro under the brand Soflusa.
Humberto Delgado Airport is located within the city limits. It is the headquarters and hub for TAP Portugal as well as a hub for Easyjet, Azores Airlines, Ryanair, EuroAtlantic Airways, White Airways, and Hi Fly. A second airport has been proposed, but the project was put on hold because of the Portuguese and European economic crisis, and also because of the long discussion over whether a new airport is needed. The most recent proposal, however, would convert the military air base in Montijo into a civil airport, giving Lisbon two airports: the current one in the north and a new one in the south of the city.
Cascais Aerodrome, 20 km west of the city centre, in Cascais, offers commercial domestic flights.
The average amount of time people spend commuting with public transit in Lisbon, for example to and from work, on a weekday is 59 min. 11.5% of public transit riders, ride for more than 2 hours every day. The average amount of time people wait at a stop or station for public transit is 14 min, while 23.1% of riders wait for over 20 minutes on average every day. The average distance people usually ride in a single trip with public transit is 6 km, while 10% travel for over 12 km in a single direction.
In the Greater Lisbon area, particularly in the Portuguese Riviera, an area popular with expats and foreign nationals, there are numerous international schools, including the Carlucci American International School of Lisbon (the only American school in Portugal), Saint Julian's School (British), Saint Dominic's International School (British), Deutsche Schule Lissabon (German), Instituto Español Giner de los Ríos (Spanish), and Lycée Français Charles Lepierre (French).
In the city, there are three public universities and a university institute. The University of Lisbon, the largest university in Portugal, was created in 2013 by the merger of the Technical University of Lisbon and the Classical University of Lisbon (which was itself known as the University of Lisbon). The New University of Lisbon, founded in 1973, is another public university in Lisbon and is known internationally for its Nova School of Business and Economics (Nova SBE), its economics and management faculty. The third public university is Universidade Aberta. Additionally, there is ISCTE – Lisbon University Institute (founded in 1972), a university institute that provides degrees in all academic disciplines.
Major private institutions of higher education include the Portuguese Catholic University, focused on law and management, as well as the Lusíada University, the Universidade Lusófona, and the Universidade Autónoma de Lisboa, among others.
The total number of students enrolled in higher education in Lisbon in the 2007–2008 school year was 125,867, of whom 81,507 attended the city's public institutions.
Lisbon is home to the Biblioteca Nacional de Portugal, the Portuguese national library, which has over 3 million books and manuscripts. The library holds some rare books and manuscripts, such as an original Gutenberg Bible and original books by Erasmus, Christophe Plantin and Aldus Manutius. Another relevant library is the Torre do Tombo National Archive, one of the most important archives in the world; with over 600 years of history, it is one of the oldest active Portuguese institutions. There are, among several others, the Arquivo Histórico Ultramarino and the Arquivo Histórico Militar.
Lisbon has a long tradition in sports. It hosted several matches, including the final, of the UEFA Euro 2004 championship. The city also played host to the final of the 2001 IAAF World Indoor Championships and the European Fencing Championships in 1983 and 1992, as well as the 2003 World Men's Handball Championship and the 2008 European Judo Championships. From 2006 to 2008, Lisbon was the starting point for the Dakar Rally. The city hosted the 2014 UEFA Champions League Final. In 2008 and 2016, the city hosted the European Triathlon Championships. Lisbon has also hosted a leg of the Volvo Ocean Race.
The city hosts three association football clubs in Portugal's highest league, the Primeira Liga. Sport Lisboa e Benfica, commonly known as simply "Benfica", has won 37 league titles in addition to two European Cups. Lisbon's second-most successful club is Sporting Clube de Portugal (commonly known as "Sporting" and often referred to as "Sporting Lisbon" abroad to prevent confusion with other teams of the same name), winner of 18 league titles and the UEFA Cup Winners' Cup. A third club, C.F. Os Belenenses (commonly "Belenenses" or "Belenenses Lisbon"), based in the Belém quarter, has won a single league title. Other major clubs in Lisbon include Atlético, Casa Pia, and Oriental.
Lisbon has two UEFA category four stadiums: Benfica's Estádio da Luz ("Stadium of Light"), with a capacity of over 65,000, and Sporting's Estádio José Alvalade, with a capacity of over 50,000. There is also Belenenses' Estádio do Restelo, with a capacity of over 30,000. The Estádio Nacional, in nearby Oeiras, has a capacity of 37,000 and was used exclusively for Portuguese international football matches and cup finals until the construction of larger stadia in the city. It held the 1967 European Cup Final.
Other sports, such as basketball, futsal, handball, roller hockey, rugby union and volleyball, are also popular; the latter's national stadium is in Lisbon. There are many other sports facilities in Lisbon, for disciplines ranging from athletics and sailing to golf and mountain-biking. The Lisboa and Troia golf courses are two of the many courses located in the Lisbon area. Every March the city hosts the Lisbon Half Marathon, and in September the Portugal Half Marathon.
Lisbon is part of the Union of Luso-Afro-Americo-Asiatic Capital Cities from 28 June 1985, establishing brotherly relations with the following cities:
Lisbon is part of the Union of Ibero-American Capital Cities from 12 October 1982 establishing brotherly relations with the following cities:
Lisbon has additional cooperation agreements with the following cities: | https://en.wikipedia.org/wiki?curid=18091 |
Local Group
The Local Group is the galaxy group that includes the Milky Way.
It has a total diameter of roughly , and a total mass of the order of .
It consists of two clusters of galaxies in a "dumbbell" shape: the Milky Way and its satellites form one lobe, and the Andromeda Galaxy and its satellites constitute the other. The two clusters are separated by about and move towards one another with a velocity of . The group itself is a part of the larger Virgo Supercluster, which may be a part of the Laniakea Supercluster.
The total number of galaxies in the Local Group is unknown, as some are obscured by the Milky Way; however, at least 80 such objects are known (most of which are dwarf galaxies).
The two largest members, the Andromeda Galaxy and the Milky Way, are both spiral galaxies with masses of about solar masses each, and each have their own system of satellite galaxies:
The Triangulum Galaxy is the third-largest member of the Local Group, with a mass of approximately , and is the third spiral galaxy. It is unclear whether the Triangulum Galaxy is a companion of the Andromeda Galaxy, although the two galaxies experienced a close passage 2–4 billion years ago which triggered star formation across Andromeda's disk. The Pisces Dwarf Galaxy is equidistant from the Andromeda Galaxy and the Triangulum Galaxy, so it may be a satellite of either.
The membership of NGC 3109, with its companions Sextans A and the Antlia Dwarf Galaxy, is uncertain due to their extreme distance from the center of the Local Group.
The other members of the group are likely gravitationally secluded from these large subgroups: IC 10, IC 1613, Phoenix Dwarf Galaxy, Leo A, Tucana Dwarf Galaxy, Cetus Dwarf Galaxy, Pegasus Dwarf Irregular Galaxy, Wolf–Lundmark–Melotte, Aquarius Dwarf Galaxy, and Sagittarius Dwarf Irregular Galaxy.
The term "The Local Group" was introduced by Edwin Hubble in Chapter VI of his 1936 book "The Realm of the Nebulae". There, he described it as "a typical small group of nebulae which is isolated in the general field" and delineated, by decreasing luminosity, its members to be M31, Milky Way, M33, Large Magellanic Cloud, Small Magellanic Cloud, M32, NGC 205, NGC 6822, NGC 185, IC 1613 and NGC 147. He also identified IC 10 as a possible part of the Local Group.
By 2003, the number of known Local Group members had increased from his initial 12 to 36.
Image:Local_Group.svg|frame|center|Local Group (clickable map)
circle 180 27 20 Sextans B
circle 130 36 23 Sextans A
circle 318 239 20 Milky Way
circle 289 197 16 Leo I (dwarf galaxy)
circle 334 201 15 Canes Dwarf
rect 303 185 318 215 Leo II (dwarf galaxy)
circle 357 289 28 NGC 6822
circle 288 323 24 Phoenix Dwarf
circle 248 391 35 Tucana Dwarf
circle 363 416 20 Wolf-Lundmark-Melotte
circle 363 383 17 Cetus Dwarf
circle 369 346 11 IC 1613
rect 381 335 393 357 SagDIG
rect 393 335 406 356 Aquarius Dwarf
circle 417 304 17 Triangulum Galaxy
circle 417 254 15 NGC 185
rect 432 237 447 260 NGC 147
circle 461 229 17 IC 10
poly 440 282 455 260 511 259 493 285 Andromeda Galaxy
poly 450 264 434 265 431 280 442 280 M110
circle 295 110 20 Leo A
circle 84 128 20 NGC 3109
circle 109 149 14 Antlia Dwarf
circle 412 332 12 LGS 3
circle 460 361 21 Pegasus Dwarf
circle 394 272 14 Andromeda II
rect 427 279 438 294 Andromeda III
rect 438 282 450 294 Andromeda I
desc bottom-left | https://en.wikipedia.org/wiki?curid=18093 |
Litre
The litre (British and Commonwealth spelling) or liter (American spelling) (SI symbols L and l, other symbol used: ℓ) is a non-SI unit of volume. It is equal to 1 cubic decimetre (dm3), 1000 cubic centimetres (cm3) or 0.001 cubic metre. A cubic decimetre (or litre) occupies a volume of (see figure) and is thus equal to one-thousandth of a cubic metre.
The original French metric system used the litre as a base unit. The word "litre" is derived from an older French unit, the "litron", whose name came from Greek—where it was a unit of weight, not volume—via Latin, and which equalled approximately 0.831 litres. The litre was also used in several subsequent versions of the metric system and is accepted for use with the SI, although not an SI unit—the SI unit of volume is the cubic metre (m3). The spelling used by the International Bureau of Weights and Measures is "litre", a spelling which is shared by almost all English-speaking countries. The spelling "liter" is predominantly used in American English.
One litre of liquid water has a mass of almost exactly one kilogram, because the kilogram was originally defined in 1795 as the mass of one cubic decimetre of water at the temperature of melting ice (). Subsequent redefinitions of the metre and kilogram mean that this relationship is no longer exact.
A litre is a cubic decimetre, or 10 centimetres × 10 centimetres × 10 centimetres, (1 L ≡ 1 dm3 ≡ 1000 cm3). Hence 1 L ≡ 0.001 m3 ≡ 1000 cm3, and 1 m3 (i.e. a cubic metre, which is the SI unit for volume) is exactly 1000 L.
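Since every identity above is a factor-of-1000 relationship, the conversions reduce to plain arithmetic. The following minimal sketch illustrates this; the function names are invented for this example and come from no particular library.

```python
# Litre conversions from the definitions above:
# 1 L = 1 dm3 = 1000 cm3 = 0.001 m3.

def litres_to_cubic_metres(litres: float) -> float:
    """1 L = 0.001 m3, so divide by 1000."""
    return litres / 1000.0

def litres_to_cubic_centimetres(litres: float) -> float:
    """1 L = 1000 cm3, so multiply by 1000."""
    return litres * 1000.0

print(litres_to_cubic_metres(1000))      # 1000 L is exactly 1 m3 -> 1.0
print(litres_to_cubic_centimetres(2.5))  # 2.5 L -> 2500.0 cm3
```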
From 1901 to 1964, the litre was defined as the volume of one kilogram of pure water at maximum density (+4 °C) and standard pressure. The kilogram was in turn specified as the mass of the International Prototype of the Kilogram (a specific platinum/iridium cylinder) and was intended to be of the same mass as the 1 litre of water referred to above. It was subsequently discovered that the cylinder was around 28 parts per million too large and thus, during this time, a litre was about 1.000028 dm3. Additionally, the mass–volume relationship of water (as with any fluid) depends on temperature, pressure, purity and isotopic uniformity. In 1964, the definition relating the litre to mass was superseded by the current one. Although the litre is not an SI unit, it is accepted by the CGPM (the standards body that defines the SI) for use with the SI. CGPM defines the litre and its acceptable symbols.
A litre is equal in volume to the millistere, an obsolete non-SI metric unit customarily used for dry measure.
Litres are most commonly used for items (such as fluids and solids that can be poured), which are measured by the capacity or size of their container, whereas cubic metres (and derived units) are most commonly used for items measured either by their dimensions or their displacements. The litre is often also used in some calculated measurements, such as density (kg/L), allowing an easy comparison with the density of water.
One litre of water has a mass of almost exactly one kilogram when measured at its maximal density, which occurs at about 4 °C. It follows that one-thousandth of a litre of water, known as one millilitre (1 mL), has a mass of about 1 g, and that 1000 litres of water have a mass of about 1000 kg (1 tonne). This relationship holds because the gram was originally defined as the mass of 1 mL of water; that definition was abandoned in 1799, however, because the density of water changes with temperature and, very slightly, with pressure.
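The approximate volume-to-mass relationship for water described above is a single multiplication. A minimal sketch, assuming the ~1 kg/L density (exact only near 4 °C; the constant and function names are invented for this example):

```python
# Approximate mass of water from its volume, using the ~1 kg/L
# relationship. This is an approximation: the true density varies
# with temperature, pressure and isotopic composition.
WATER_DENSITY_KG_PER_L = 1.0

def water_mass_kg(volume_litres: float) -> float:
    return volume_litres * WATER_DENSITY_KG_PER_L

print(water_mass_kg(0.001))  # 1 mL -> 0.001 kg (about 1 g)
print(water_mass_kg(1000))   # 1000 L -> 1000.0 kg (about 1 tonne)
```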
It is now known that the density of water also depends on the isotopic ratios of the oxygen and hydrogen atoms in a particular sample. Modern measurements of Vienna Standard Mean Ocean Water, which is pure distilled water with an isotopic composition representative of the average of the world's oceans, show that it has a density of at its point of maximum density (3.984 °C) under one standard atmosphere (760 Torr = 101.325 kPa) of pressure.
The litre, though not an official SI unit, may be used with SI prefixes. The most commonly used derived unit is the millilitre, defined as one-thousandth of a litre, and also often referred to by the SI derived unit name "cubic centimetre". It is a commonly used measure, especially in medicine, cooking and automotive engineering. Other units may be found in the table below, where the more often used terms are in bold. However, some authorities advise against some of them; for example, in the United States, NIST advocates using the millilitre or litre instead of the centilitre. There are two international standard symbols for the litre: L and l. In the United States the former is preferred because of the risk that (in some fonts) the letter "l" and the digit "1" may be confused.
One litre is slightly larger than a US liquid quart and slightly less than an imperial quart or one US dry quart. A mnemonic for its volume relative to an imperial pint is "a litre of water's a pint and three quarters"; this is very close, as a litre is actually 1.75975399 pints.
A litre is the volume of a cube with sides of 10 cm, which is slightly less than a cube of sides 4 inches (one-third of a foot). One cubic foot would contain exactly 27 such cubes (four inches on each side), making one cubic foot approximately equal to 27 litres. One cubic foot has an exact volume of 28.316846592 litres, which is 4.88% higher than the 27-litre approximation.
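As a quick sanity check, the conversion figures above can be verified numerically; the constants below are the exact legal definitions in litres, and the helper names are ours.

```python
# Checking the conversion figures quoted above (constants are the exact
# legal definitions; variable names are ours).
LITRES_PER_CUBIC_FOOT = 28.316846592   # 1 ft³ in litres (exact)
IMPERIAL_PINT_L = 0.56826125           # 1 imperial pint in litres (exact)
US_LIQUID_QUART_L = 0.946352946        # 1 US liquid quart in litres (exact)

# "A litre of water's a pint and three quarters":
pints_per_litre = 1 / IMPERIAL_PINT_L
print(round(pints_per_litre, 8))       # 1.75975399

# 27 four-inch cubes versus one cubic foot:
error_pct = (LITRES_PER_CUBIC_FOOT / 27 - 1) * 100
print(round(error_pct, 2))             # 4.88
```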
A litre of liquid water has a mass almost exactly equal to one kilogram. An early definition of the kilogram was set as the mass of one litre of water. Because volume changes with temperature and pressure, and because pressure is itself defined partly in terms of mass, the definition of the kilogram was changed. At standard pressure, one litre of water has a mass of 0.999975 kg at 4 °C, and 0.997 kg at 25 °C.
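Using the two densities quoted above, the litre–kilogram relationship can be sketched in a few lines (the figures come from the text; the lookup table and helper are our own).

```python
# Sketch: relating litres of water to kilograms at the two temperatures
# quoted above (densities from the text; the helper is ours).
DENSITY_KG_PER_L = {4: 0.999975, 25: 0.997}

def mass_kg(volume_l, temp_c):
    """Mass of `volume_l` litres of water at the given temperature."""
    return volume_l * DENSITY_KG_PER_L[temp_c]

print(mass_kg(1.0, 4))                         # 0.999975 — almost exactly 1 kg
print(round(1.0 / DENSITY_KG_PER_L[25], 4))    # litres occupied by 1 kg at 25 °C
```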
Originally, the only symbol for the litre was l (lowercase letter L), following the SI convention that only those unit symbols that abbreviate the name of a person start with a capital letter. In many English-speaking countries, however, the most common shape of a handwritten Arabic digit 1 is just a vertical stroke; that is, it lacks the upstroke added in many other cultures. Therefore, the digit "1" may easily be confused with the letter "l". In some computer typefaces, the two characters are barely distinguishable. This caused some concern, especially in the medical community.
As a result, L (uppercase letter L) was adopted as an alternative symbol for litre in 1979. The United States National Institute of Standards and Technology now recommends the use of the uppercase letter L, a practice that is also widely followed in Canada and Australia. In these countries, the symbol L is also used with prefixes, as in mL and μL, instead of the traditional ml and μl used in Europe. In the UK and Ireland, as well as the rest of Europe, lowercase "l" is used with prefixes, though whole litres are often written in full (so, "750 ml" on a wine bottle, but often "1 litre" on a juice carton). In 1990, the International Committee for Weights and Measures stated that it was too early to choose a single symbol for the litre.
Prior to 1979, the symbol came into common use in some countries; for example, it was recommended by South African Bureau of Standards publication M33 and in Canada in the 1970s. This symbol can still be encountered occasionally in some English-speaking and European countries such as Germany, and its use is ubiquitous in Japan and South Korea.
Fonts covering the CJK characters usually include not only the script small but also four precomposed characters: ㎕, ㎖, ㎗ and ㎘ for the microlitre, millilitre, decilitre and kilolitre.
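These precomposed symbols are Unicode compatibility characters, so NFKC normalization expands each of them to the plain letter sequence it abbreviates — a small sketch:

```python
import unicodedata

# The CJK "squared" litre symbols are compatibility characters; NFKC
# normalization expands each one to the letter sequence it stands for.
for ch in "㎕㎖㎗㎘":
    print(ch, "->", unicodedata.normalize("NFKC", ch))
# ㎕ -> μl, ㎖ -> ml, ㎗ -> dl, ㎘ -> kl
```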
The first name of the litre was "cadil"; standards are shown at the Musée des Arts et Métiers in Paris.
The litre was introduced in France in 1795 as one of the new "republican units of measurement" and defined as one cubic decimetre.
One litre of liquid water has a mass of almost exactly one kilogram, due to the gram being defined in 1795 as one cubic centimetre of water at the temperature of melting ice.
The original decimetre length was 44.344 "lignes", which was revised in 1798 to 44.3296 "lignes". This made the original litre of today's cubic decimetre. It was against this litre that the kilogram was constructed.
In 1879, the CIPM adopted the definition of the litre, with the symbol l (lowercase letter L).
In 1901, at the 3rd CGPM conference, the litre was redefined as the space occupied by 1 kg of pure water at the temperature of its maximum density (3.98 °C) under a pressure of 1 atm. This made the litre equal to about (earlier reference works usually put it at ).
In 1964, at the 12th CGPM conference, the original definition was reverted to, and thus the litre was once again defined in exact relation to the metre, as another name for the cubic decimetre, that is, exactly 1 dm3.
In 1979, at the 16th CGPM conference, the alternative symbol L (uppercase letter L) was adopted. It also expressed a preference that in the future only one of these two symbols should be retained, but in 1990 said it was still too early to do so.
In spoken English, the symbol "mL" (for millilitre) can be pronounced as "mil". This can potentially cause confusion with other measurement words pronounced the same way, such as the "mil" (one thousandth of an inch) and the angular "mil". However, the context is usually a sufficient hint: "mL" is a unit of volume, whereas the others are units of linear or angular measurement.
The abbreviation "cc" (for cubic centimetre, equal to a millilitre, or mL) is a unit of the cgs system, which preceded the MKS system, which later evolved into the SI system. The abbreviation "cc" is still commonly used in many fields, including medical dosage and sizing for combustion engine displacement.
The microlitre (μL) has been known in the past as the lambda (λ), but this usage is now discouraged. In the medical field the microlitre is sometimes abbreviated as mcL on test results.
In the SI system, use of prefixes for powers of 1000 is preferred and all other multiples discouraged. In countries where the metric system was established well before the adoption of the SI standard, other multiples were already established and their use remains common. In particular, use of the "centi" (10−2), "deci" (10−1), "deca" (10+1) and "hecto" (10+2) prefixes is still common. For example, in many European countries, the hectolitre is the typical unit for production and export volumes of beverages (milk, beer, soft drinks, wine, etc.) and for measuring the size of the catch and quotas for fishing boats; decilitres are common in Switzerland and Scandinavia and sometimes found in cookbooks; centilitres indicate the capacity of drinking glasses and of small bottles. In colloquial Dutch in Belgium, the common beer glasses are known by their sizes (literally a "twenty-fiver" and a "thirty-threer"); the corresponding bottles mention 25 cL and 33 cL. Bottles may also be 75 cL or half size at 37.5 cL for "artisanal" brews or 70 cL for wines or spirits. Cans come in 25 cL, 33 cL and 50 cL.
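A minimal sketch of these prefix conversions (the factors are the standard SI prefix values; the lookup table and helper function are ours):

```python
# Sketch: converting litre quantities written with the common prefixes
# back to litres (standard SI prefix factors; helper is ours).
PREFIX_FACTOR_L = {"hL": 100.0, "daL": 10.0, "L": 1.0,
                   "dL": 0.1, "cL": 0.01, "mL": 0.001}

def to_litres(value, unit):
    return value * PREFIX_FACTOR_L[unit]

print(to_litres(33, "cL"))   # a 33 cL beer bottle is 0.33 L
print(to_litres(1, "hL"))    # 1 hectolitre is 100 L
```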
In countries where the metric system was adopted as the official measuring system after the SI standard was established, common usage more closely follows contemporary SI conventions. For example, in Canada, Australia, and New Zealand, consumer beverages are labelled almost exclusively using litres and millilitres. Hectolitres sometimes appear in industry, but centilitres and decilitres are rarely, if ever, used. An exception is in pathology, where for instance blood lead level may be measured in micrograms per decilitre. Larger volumes are usually given in cubic metres (equivalent to 1 kL), or thousands or millions of cubic metres.
Although kilolitres, megalitres, and gigalitres are commonly used for measuring water consumption, reservoir capacities and river flows, for larger volumes of fluids, such as annual consumption of tap water, lorry (truck) tanks, or swimming pools, the cubic metre is the general unit. It is also generally used for all volumes of a non-liquid nature.
Lavr Kornilov
Lavr Georgiyevich Kornilov (1870 – 13 April 1918) was a Russian military intelligence officer, explorer, and general of Siberian Cossack origin in the Imperial Russian Army during World War I and the ensuing Russian Civil War. He is today best remembered for the Kornilov Affair, an unsuccessful endeavor in August/September 1917 that was intended to strengthen Alexander Kerensky's Provisional Government, but which led to Kerensky eventually having Kornilov arrested and charged with attempting a coup d'état, and ultimately undermined Kerensky's rule.
Kornilov escaped from jail in November 1917, and subsequently became the military commander of the anti-Bolshevik Volunteer Army, which took charge of the anti-Bolshevik opposition in the south of Russia. He and his troops were badly outnumbered in many of their encounters, and he was killed by a shell on 13 April 1918 while laying siege to Ekaterinodar, the capital of the Kuban Soviet Republic.
One story relates how Kornilov was originally born as a Don Cossack Kalmyk named Lorya Dildinov and adopted in Ust-Kamenogorsk, Russian Turkestan (now Kazakhstan), by the family of his mother's brother, the Russian Cossack Khorunzhiy Georgy Nikolayevich Kornilov, whose wife was of Kazakh origin. His sister, however, wrote that he had not been adopted, had not been a Don Cossack, and that their mother was of Polish and Altai Oirot descent. (Though their language was not a Kalmyk/Mongolian one, Altai Oirots were called Altai Kalmyks by Russians because of their Asian appearance and their history in the Jungar Oirot (Kalmyk) state; they were not Muslims or Kazakhs.) Boris Shaposhnikov, who served with Pyotr Kornilov, the brother of Lavr, in 1903, mentioned the "Kyrgyz" ancestry of their mother; this name was usually used in reference to Kazakhs at the time. Kornilov's Siberian Cossack father was a friend of Potanin (1835–1920), a prominent figure in the Siberian autonomy movement.
Kornilov entered military school in Omsk in 1885 and went on to study at the Mikhailovsky Artillery School in St. Petersburg in 1889. In August 1892 he was assigned as a lieutenant to the Turkestan Military District, where he led several exploration missions in Eastern Turkestan, Afghanistan and Persia, learned several Central Asian languages, and wrote detailed reports about his observations.
Kornilov returned to St. Petersburg to attend the Mykolayiv General Staff Academy and graduated as a captain in 1897. Again refusing a posting in St. Petersburg, he returned to the Turkestan Military District, where he resumed his duties as a military intelligence officer.
During the Russo-Japanese War of 1904-1905 Kornilov became the Chief of staff of the 1st Infantry Brigade, and was heavily involved in the Battle of Sandepu (January 1905) and the Battle of Mukden (February/March 1905). He was awarded the Order of St. George (4th class) for bravery and promoted to the rank of colonel.
Following the end of the war, Kornilov served as military attache in China from 1907 to 1911. He studied the Chinese language, travelled extensively (researching data on the history, traditions and customs of the Chinese, which he intended to use as material for a book about life in contemporary China), and regularly sent detailed reports to the General Staff and Foreign Ministry. Kornilov paid much attention to the prospects of cooperation between Russia and China in the Far East and met with the future president of China, Chiang Kai-shek. In 1910 Kornilov was recalled from Beijing but remained in St. Petersburg for only five months before departing for western Mongolia and Kashgar to examine the military situation along China's border with Russia. On 2 February 1911 he became Commander of the 8th Infantry Regiment of Estonia and was later appointed commander of the 9th Siberian Rifle Division, stationed in Vladivostok.
In 1914, at the start of World War I, Kornilov was appointed commander of the 48th Infantry Division, which saw combat in Galicia and the Carpathians. In 1915, he was promoted to the rank of major general. During heavy fighting, he was captured by the Austrians in April 1915, when his division became isolated from the rest of the Russian forces. After his capture, Field Marshal Conrad, the commander of the Austro-Hungarian Army, made a point of meeting him in person. As a major general, he was a high-value prisoner of war, but in July 1916 Kornilov managed to escape back to Russia and return to duty.
After the abdication of Tsar Nicholas II, he was given command of the Petrograd Military District in March 1917. On 8 March, Kornilov placed the Empress Alexandra and her children under house arrest at the Alexander Palace (Nicholas was still held at Stavka), replacing the Tsar's Escort and Combined Regiments of the Imperial Guard with 300 revolutionary troops. In July, after commanding the only successful front in the disastrous Russian offensive of June 1917, he became Supreme Commander-in-Chief of the Provisional Government's armed forces.
In the mass discontent following the July Days, the Russian populace grew highly skeptical about the Provisional Government's abilities to alleviate the economic distress and social resentment among the lower classes. Pavel Milyukov, the Kadet leader, describes the situation in Russia in late July as, "Chaos in the army, chaos in foreign policy, chaos in industry and chaos in the nationalist questions".
Kornilov, appointed commander-in-chief of the Russian army in July 1917, considered the Petrograd Soviet responsible for the breakdown in the military in recent times and believed that the Provisional Government lacked the power and confidence to dissolve the Petrograd Soviet. Following several ambiguous correspondences between Kornilov and Alexander Kerensky, Kornilov commanded an assault on the Petrograd Soviet.
Because the Petrograd Soviet was able to quickly gather a powerful army of workers and soldiers in defence of the Revolution, Kornilov's coup was an abysmal failure, and he was placed under arrest. The Kornilov Affair resulted in significantly increased distrust among Russians towards the Provisional Government.
After the alleged coup collapsed as his troops disintegrated, Kornilov and his fellow conspirators were placed under arrest in the Bykhov jail. On 19 November, a few weeks after the proclamation of Soviet power in Petrograd, they escaped from their confinement (eased by the fact that the jail was guarded by Kornilov's supporters) and made their way to the Don region, which was controlled by the Don Cossacks. Here they linked up with General Mikhail Alekseev. Kornilov became the military commander of the anti-Bolshevik Volunteer Army with Alekseev as the political chief.
The Kornilov Shock Detachment of the 8th Army was the most famous and longest-lived volunteer unit in the Russian Imperial Army. It was also the last regiment of the Russian Imperial Army and the first of the Volunteer Army. In late 1917, the Kornilov Shock Regiment, one of the crack units of the Volunteer Army, was named after him, as well as many other autonomous White Army formations, such as the Kuban Cossack Kornilov Horse Regiment. Kornilov's forces became recognizable for their Totenkopf insignia, which appeared on the regiment's flags, pennants, and soldiers' sleeve patches.
Even before the Red Army was formed, Lavr Kornilov promised, "the greater the terror, the greater our victories." He vowed that the goals of his forces must be fulfilled even if it was needed "to set fire to half the country and shed the blood of three-quarters of all Russians." In the Don region village of Lezhanka alone, bands of Kornilov's officers killed more than 500 people. On the other hand, Kornilov's adjutant recalled that the general "loved only Russia itself" and served it all his life, having no time to think about political systems. The Bolsheviks, for him, were dangerous traitors who ruined Russia's unity and had to be stopped.
On 24 February 1918, as Rostov and the Don Cossack capital of Novocherkassk fell to the Bolsheviks, Kornilov led the Volunteer Army on the epic 'Ice March' into the empty steppe towards the Kuban. Although badly outnumbered, he escaped destruction from pursuing Bolshevik forces and laid siege to Ekaterinodar, the capital of the Kuban Soviet Republic, on 10 April. However, in the early morning of 13 April, a Soviet shell landed on his farmhouse headquarters and killed him. He was buried in a nearby village.
A few days later, when the Bolsheviks gained control of the village, they unearthed Kornilov's coffin, dragged his corpse to the main square and burnt his remains on the local rubbish dump.
Linear map
In mathematics, a linear map (also called a linear mapping, linear transformation or, in some contexts, linear function) is a mapping between two modules (for example, two vector spaces) that preserves (in the sense defined below) the operations of addition and scalar multiplication. If a linear map is a bijection then it is called a linear isomorphism.
An important special case is when "V" = "W", in which case a linear map is called a (linear) endomorphism of "V". Sometimes the term linear operator refers to this case. In another convention, "linear operator" allows "V" and "W" to differ, while requiring them to be real vector spaces. Sometimes the term "linear function" has the same meaning as "linear map", while in analytic geometry it does not.
A linear map always maps linear subspaces onto linear subspaces (possibly of a lower dimension); for instance it maps a plane through the origin to a plane, straight line or point. Linear maps can often be represented as matrices, and simple examples include rotation and reflection linear transformations.
In the language of abstract algebra, a linear map is a module homomorphism. In the language of category theory, it is a morphism in the category of modules over a given ring.
Let "V" and "W" be vector spaces over the same field "K". A function "f": "V" → "W" is said to be a "linear map" if for any two vectors u, v ∈ "V" and any scalar "c" ∈ "K" the following two conditions are satisfied: additivity, "f"(u + v) = "f"(u) + "f"(v), and homogeneity of degree 1, "f"("c"u) = "c" "f"(u).
Thus, a linear map is said to be "operation preserving". In other words, it does not matter whether the linear map is applied before (the right hand sides of the above examples) or after (the left hand sides of the examples) the operations of addition and scalar multiplication.
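The two defining conditions can be checked numerically for a candidate map; the following sketch uses example maps of our own choosing, one linear and one not.

```python
# Sketch: numerically checking additivity and homogeneity for maps
# f, g : R^2 -> R^2.  The example maps and test vectors are ours.
def f(v):                      # linear: given by the matrix [[2, 1], [0, 3]]
    x, y = v
    return (2 * x + y, 3 * y)

def g(v):                      # not linear: the "+ 1" breaks additivity
    x, y = v
    return (x + 1, y)

def satisfies_linearity(h, u, v, c, tol=1e-9):
    s = (u[0] + v[0], u[1] + v[1])
    additive = all(abs(a - (b + w)) < tol
                   for a, b, w in zip(h(s), h(u), h(v)))
    homogeneous = all(abs(a - c * b) < tol
                      for a, b in zip(h((c * u[0], c * u[1])), h(u)))
    return additive and homogeneous

print(satisfies_linearity(f, (1.0, 2.0), (-3.0, 0.5), 4.0))   # True
print(satisfies_linearity(g, (1.0, 2.0), (-3.0, 0.5), 4.0))   # False
```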
By the associativity of the addition operation denoted as +, for any vectors u1, …, u"n" ∈ "V" and scalars "c"1, …, "c""n" ∈ "K" the following equality holds: "f"("c"1u1 + ⋯ + "c""n"u"n") = "c"1"f"(u1) + ⋯ + "c""n""f"(u"n").
Denoting the zero elements of the vector spaces "V" and "W" by 0"V" and 0"W" respectively, it follows that "f"(0"V") = 0"W". Let "c" = 0 and u = 0"V" in the equation for homogeneity of degree 1: "f"(0"V") = "f"(0 · 0"V") = 0 · "f"(0"V") = 0"W".
Occasionally, "V" and "W" can be vector spaces over different fields. It is then necessary to specify which of these ground fields is being used in the definition of "linear". If "V" and "W" are spaces over the same field "K" as above, then we talk about "K"-linear maps. For example, the conjugation of complex numbers is an R-linear map C → C, but it is not C-linear, where R and C denote the fields of real and complex numbers, respectively.
A linear map "f": "V" → "K", with "K" viewed as a one-dimensional vector space over itself, is called a linear functional.
These statements generalize to any left-module "M" over a ring "R" without modification, and to any right-module upon reversing the scalar multiplication.
If "V" and "W" are finite-dimensional vector spaces and a basis is defined for each vector space, then every linear map from "V" to "W" can be represented by a matrix. This is useful because it allows concrete calculations. Matrices yield examples of linear maps: if "A" is a real matrix, then describes a linear map (see Euclidean space).
Let {v1, …, v"n"} be a basis for "V". Then every vector v in "V" is uniquely determined by the coefficients "c"1, …, "c""n" in the field R:
If "f": "V" → "W" is a linear map, then "f"(v) = "f"("c"1v1 + ⋯ + "c""n"v"n") = "c"1"f"(v1) + ⋯ + "c""n""f"(v"n"), which implies that the function "f" is entirely determined by the vectors "f"(v1), …, "f"(v"n"). Now let {w1, …, w"m"} be a basis for "W". Then we can represent each vector "f"(v"j") as "f"(v"j") = "a"1"j"w1 + ⋯ + "a""m""j"w"m".
Thus, the function "f" is entirely determined by the values of "a""ij". If we put these values into an matrix "M", then we can conveniently use it to compute the vector output of "f" for any vector in "V". To get "M", every column "j" of "M" is a vector
corresponding to "f"(v"j") as defined above. To define it more clearly, for some column "j" that corresponds to the mapping "f"(v"j"),
where M is the matrix of "f". In other words, every column has a corresponding vector "f"(v"j") whose coordinates "a"1"j", …, "a""mj" are the elements of column "j". A single linear map may be represented by many matrices. This is because the values of the elements of a matrix depend on the bases chosen.
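The construction above — column "j" of "M" holds the coordinates of "f"(v"j") — can be sketched directly (pure-Python helpers of our own):

```python
# Sketch: building the matrix M of a linear map from the images of the
# basis vectors, then applying it to a coordinate vector.
def matrix_from_images(images):
    """Column j of M is the image f(v_j); `images` lists those vectors."""
    m, n = len(images[0]), len(images)
    return [[images[j][i] for j in range(n)] for i in range(m)]

def apply(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

# f(e1) = (2, 0), f(e2) = (1, 3)  ->  M = [[2, 1], [0, 3]]
M = matrix_from_images([(2, 0), (1, 3)])
print(M)              # [[2, 1], [0, 3]]
print(apply(M, [1, 1]))   # [3, 3]
```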
The matrices of a linear transformation can be represented visually:
Such that starting in the bottom left corner formula_86 and looking for the bottom right corner formula_87, one would left-multiply—that is, formula_88. The equivalent method would be the "longer" method going clockwise from the same point such that formula_86 is left-multiplied with formula_90, or formula_91.
In two-dimensional space R2, linear maps are described by 2 × 2 real matrices; standard examples include rotations, reflections, scalings, shears and projections.
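A few of these, sketched numerically (the specific matrices are the usual textbook ones, not taken from this article):

```python
import math

# Sketch: common 2-D linear maps as 2 x 2 matrices.
def apply(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

t = math.pi / 2
rotation_90 = [[math.cos(t), -math.sin(t)],
               [math.sin(t),  math.cos(t)]]
reflection_x = [[1, 0], [0, -1]]   # reflect across the x-axis
scaling_2 = [[2, 0], [0, 2]]       # scale by 2 in both directions

x, y = apply(rotation_90, (1.0, 0.0))
print(round(x, 9), round(y, 9))         # 0.0 1.0 — e1 rotated onto e2
print(apply(reflection_x, (3.0, 4.0)))  # (3.0, -4.0)
```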
The composition of linear maps is linear: if "f": "V" → "W" and "g": "W" → "Z" are linear, then so is their composition "g" ∘ "f": "V" → "Z". It follows from this that the class of all vector spaces over a given field "K", together with "K"-linear maps as morphisms, forms a category.
The inverse of a linear map, when defined, is again a linear map.
If "f": "V" → "W" and "g": "V" → "W" are linear, then so is their pointwise sum "f" + "g", which is defined by ("f" + "g")(x) = "f"(x) + "g"(x).
If is linear and "a" is an element of the ground field "K", then the map "af", defined by , is also linear.
Thus the set of linear maps from "V" to "W" itself forms a vector space over "K", sometimes denoted Hom("V", "W"). Furthermore, in the case that "V" = "W", this vector space (denoted End("V")) is an associative algebra under composition of maps, since the composition of two linear maps is again a linear map, and the composition of maps is always associative. This case is discussed in more detail below.
Returning to the finite-dimensional case, if bases have been chosen, then the composition of linear maps corresponds to matrix multiplication, the addition of linear maps corresponds to matrix addition, and the multiplication of linear maps by scalars corresponds to the multiplication of matrices by scalars.
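The correspondence between composition and matrix multiplication can be verified on a small example (2 × 2 matrices and helpers of our own):

```python
# Sketch: g(f(v)) equals (BA)v when A and B are the matrices of f and g.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

A = [[2, 1], [0, 3]]    # matrix of f
B = [[1, -1], [4, 0]]   # matrix of g
v = [5.0, -2.0]

print(apply(B, apply(A, v)) == apply(matmul(B, A), v))   # True
```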
A linear transformation "f": "V" → "V" is an endomorphism of "V"; the set of all such endomorphisms End("V") together with addition, composition and scalar multiplication as defined above forms an associative algebra with identity element over the field "K" (and in particular a ring). The multiplicative identity element of this algebra is the identity map id: "V" → "V".
An endomorphism of "V" that is also an isomorphism is called an automorphism of "V". The composition of two automorphisms is again an automorphism, and the set of all automorphisms of "V" forms a group, the automorphism group of "V" which is denoted by Aut("V") or GL("V"). Since the automorphisms are precisely those endomorphisms which possess inverses under composition, Aut("V") is the group of units in the ring End("V").
If "V" has finite dimension "n", then End("V") is isomorphic to the associative algebra of all "n" × "n" matrices with entries in "K". The automorphism group of "V" is isomorphic to the general linear group GL("n", "K") of all "n" × "n" invertible matrices with entries in "K".
If "f" : "V" → "W" is linear, we define the kernel and the image or range of "f" by
ker("f") is a subspace of "V" and im("f") is a subspace of "W". The following dimension formula is known as the rank–nullity theorem:
The number dim(im("f")) is also called the "rank of f" and written as rank("f"), or sometimes, ρ("f"); the number dim(ker("f")) is called the "nullity of f" and written as null("f") or ν("f"). If "V" and "W" are finite-dimensional, bases have been chosen and "f" is represented by the matrix "A", then the rank and nullity of "f" are equal to the rank and nullity of the matrix "A", respectively.
A subtler invariant of a linear transformation "f": "V" → "W" is the "co"kernel, which is defined as coker("f") := "W"/"f"("V") = "W"/im("f").
This is the "dual" notion to the kernel: just as the kernel is a "sub"space of the "domain," the co-kernel is a "quotient" space of the "target."
Formally, one has the exact sequence 0 → ker("f") → "V" → "W" → coker("f") → 0.
These can be interpreted thus: given a linear equation "f"(v) = w to solve, the kernel is the space of solutions to the homogeneous equation "f"(v) = 0, and its dimension is the number of degrees of freedom in a solution, if one exists; the cokernel is the space of constraints that w must satisfy for the equation to have a solution, and its dimension is the number of independent constraints.
The dimension of the co-kernel and the dimension of the image (the rank) add up to the dimension of the target space. For finite dimensions, this means that the dimension of the quotient space "W"/"f"("V") is the dimension of the target space minus the dimension of the image.
As a simple example, consider the map "f": R2 → R2, given by "f"("x", "y") = (0, "y"). Then for an equation "f"("x", "y") = ("a", "b") to have a solution, we must have "a" = 0 (one constraint), and in that case the solution space is {("x", "b")}, or equivalently stated, (0, "b") + {("x", 0)} (one degree of freedom). The kernel may be expressed as the subspace {("x", 0)} of "V": the value of "x" is the freedom in a solution, while the cokernel may be expressed via the map "W" → R, ("a", "b") ↦ "a": given a vector ("a", "b"), the value of "a" is the obstruction to there being a solution.
An example illustrating the infinite-dimensional case is afforded by the right-shift map "f": R∞ → R∞, ("a"1, "a"2, "a"3, …) ↦ ("b"1, "b"2, "b"3, …) with "b"1 = 0 and "b""n" + 1 = "a""n" for "n" > 0. Its image consists of all sequences with first element 0, and thus its cokernel consists of the classes of sequences with identical first element. Thus, whereas its kernel has dimension 0 (it maps only the zero sequence to the zero sequence), its co-kernel has dimension 1. Since the domain and the target space are the same, the rank and the dimension of the kernel add up to the same sum as the rank and the dimension of the co-kernel, but in the infinite-dimensional case it cannot be inferred that the kernel and the co-kernel of an endomorphism have the same dimension (0 ≠ 1). The reverse situation obtains for the left-shift map "h": R∞ → R∞, ("a"1, "a"2, "a"3, …) ↦ ("c"1, "c"2, "c"3, …) with "c""n" = "a""n" + 1. Its image is the entire target space, and hence its co-kernel has dimension 0, but since it maps all sequences in which only the first element is non-zero to the zero sequence, its kernel has dimension 1.
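The two shift maps can be modeled directly, representing a sequence as a function from a 0-based index to a value (a sketch; the encoding is ours):

```python
# Sketch: the two shift maps on sequences, modeled as index -> value
# functions; illustrates the kernel/cokernel asymmetry informally.
def right_shift(a):
    # f: (a1, a2, ...) -> (0, a1, a2, ...): injective, but its image
    # misses every sequence with a nonzero first element.
    return lambda n: 0.0 if n == 0 else a(n - 1)

def left_shift(a):
    # h: (a1, a2, ...) -> (a2, a3, ...): surjective, but it kills
    # every sequence supported only at the first position.
    return lambda n: a(n + 1)

a = lambda n: float(n + 1)          # the sequence 1, 2, 3, ...
f_a = right_shift(a)
print([f_a(n) for n in range(4)])   # [0.0, 1.0, 2.0, 3.0]
h_a = left_shift(a)
print([h_a(n) for n in range(4)])   # [2.0, 3.0, 4.0, 5.0]
```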
For a linear operator with finite-dimensional kernel and co-kernel, one may define the index as: ind("f") := dim(ker("f")) − dim(coker("f")),
namely the degrees of freedom minus the number of constraints.
For a transformation between finite-dimensional vector spaces, this is just the difference dim("V") − dim("W"), by rank–nullity. This gives an indication of how many solutions or how many constraints one has: if mapping from a larger space to a smaller one, the map may be onto, and thus will have degrees of freedom even without constraints. Conversely, if mapping from a smaller space to a larger one, the map cannot be onto, and thus one will have constraints even without degrees of freedom.
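For finite-dimensional spaces this invariance is immediate from rank–nullity, as a short sketch shows (ranks of the example maps are computed by hand):

```python
# Sketch: for maps between finite-dimensional spaces, the index
# dim ker - dim coker equals dim V - dim W, whatever the map.
def index_from_rank(dim_v, dim_w, rank):
    dim_ker = dim_v - rank      # rank-nullity
    dim_coker = dim_w - rank    # codimension of the image
    return dim_ker - dim_coker

# Two different maps R^3 -> R^2: one of rank 2, one of rank 1.
print(index_from_rank(3, 2, 2))   # 1
print(index_from_rank(3, 2, 1))   # 1 — same index, equal to 3 - 2
```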
The index of an operator is precisely the Euler characteristic of the 2-term complex 0 → "V" → "W" → 0. In operator theory, the index of Fredholm operators is an object of study, with a major result being the Atiyah–Singer index theorem.
No classification of linear maps could hope to be exhaustive. The following incomplete list enumerates some important classifications that do not require any additional structure on the vector space.
Let "V" and "W" denote vector spaces over a field, "F". Let "T": "V" → "W" be a linear map.
Given a linear map which is an endomorphism whose matrix is "A" in the basis "B" of the space, it transforms vector coordinates [u] as [v] = "A"[u]. As vectors change with the inverse of "B" (vectors are contravariant), the coordinates in the new basis satisfy [u] = "B"[u′] and [v] = "B"[v′]. Substituting this in the first expression gives "B"[v′] = "A""B"[u′], hence [v′] = "B"−1"A""B"[u′]. Therefore, the matrix in the new basis is "A′" = "B"−1"AB", where "B" is the matrix of the given basis.
Therefore, linear maps are said to be 1-covariant, 1-contravariant objects, or type (1, 1) tensors.
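The change-of-basis formula "A′" = "B"−1"AB" can be checked on a 2 × 2 example (the matrices are ours; "B" sends the standard basis to the new basis e1, e1 + e2):

```python
# Sketch: A' = B^{-1} A B for 2 x 2 matrices (example matrices are ours).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(B):
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [[ B[1][1] / det, -B[0][1] / det],
            [-B[1][0] / det,  B[0][0] / det]]

A = [[2.0, 1.0], [0.0, 3.0]]
B = [[1.0, 1.0], [0.0, 1.0]]   # new basis: e1 and e1 + e2
A_prime = matmul(inv2(B), matmul(A, B))
# e1 and e1 + e2 happen to be eigenvectors of A, so A' comes out diagonal:
print(A_prime)                 # [[2.0, 0.0], [0.0, 3.0]]
```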
A "linear transformation" between topological vector spaces, for example normed spaces, may be continuous. If its domain and codomain are the same, it will then be a continuous linear operator. A linear operator on a normed linear space is continuous if and only if it is bounded, for example, when the domain is finite-dimensional. An infinite-dimensional domain may have discontinuous linear operators.
An example of an unbounded, hence discontinuous, linear transformation is differentiation on the space of smooth functions equipped with the supremum norm (a function with small values can have a derivative with large values, while the derivative of 0 is 0). For a specific example, sin("nx")/"n" converges to 0, but its derivative cos("nx") does not, so differentiation is not continuous at 0 (and by a variation of this argument, it is not continuous anywhere).
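This can be observed numerically: the supremum of sin("nx")/"n" shrinks like 1/"n" while that of its derivative cos("nx") stays at 1 (a sketch, sampling on a finite grid):

```python
import math

# Sketch: f_n(x) = sin(n x)/n tends uniformly to 0, while its derivative
# cos(n x) does not -- sampled on a grid over [0, 2*pi].
def sup_on_grid(g, points=1000):
    return max(abs(g(2 * math.pi * k / points)) for k in range(points + 1))

for n in (1, 10, 100):
    f_sup = sup_on_grid(lambda x: math.sin(n * x) / n)
    df_sup = sup_on_grid(lambda x: math.cos(n * x))
    print(n, round(f_sup, 4), round(df_sup, 4))
# f_sup shrinks like 1/n; df_sup stays at 1
```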
A specific application of linear maps is for geometric transformations, such as those performed in computer graphics, where the translation, rotation and scaling of 2D or 3D objects is performed by the use of a transformation matrix. Linear mappings also are used as a mechanism for describing change: for example, in calculus, derivatives correspond to linear maps; in relativity, linear maps are used as a device to keep track of the local transformations of reference frames.
Another application of these transformations is in compiler optimizations of nested-loop code, and in parallelizing compiler techniques.
Leyden jar
A Leyden jar (or Leiden jar) is an antique electrical component which stores a high-voltage electric charge (from an external source) between electrical conductors on the inside and outside of a glass jar. It typically consists of a glass jar with metal foil cemented to the inside and the outside surfaces, and a metal terminal projecting vertically through the jar lid to make contact with the inner foil. It was the original form of the capacitor (also called "condenser").
It was invented independently by German cleric Ewald Georg von Kleist on 11 October 1745 and by Dutch scientist Pieter van Musschenbroek of Leiden (Leyden) in 1745–1746, and was named after the city.
The Leyden jar was used to conduct many early experiments in electricity, and its discovery was of fundamental importance in the study of electrostatics. It was the first means of accumulating and preserving electric charge in large quantities that could be discharged at the experimenter's will, thus overcoming a significant limit to early research into electrical conduction. Leyden jars are still used in education to demonstrate the principles of electrostatics.
The Ancient Greeks already knew that pieces of amber could attract lightweight particles after being rubbed. The amber becomes electrified by triboelectric effect, mechanical separation of charge in a dielectric. The Greek word for amber is ἤλεκτρον ("ēlektron") and is the origin of the word "electricity".
Around 1650, Otto von Guericke built a crude electrostatic generator: a sulphur ball that rotated on a shaft. When Guericke held his hand against the ball and turned the shaft quickly, a static electric charge built up. This experiment inspired the development of several forms of "friction machines" that greatly helped in the study of electricity.
The Leyden jar was effectively discovered independently by two parties: German deacon Ewald Georg von Kleist, who made the first discovery, and Dutch scientists Pieter van Musschenbroek and Andreas Cunaeus, who figured out how it worked only when held in the hand.
The Leyden jar is a high voltage device; it is estimated that at a maximum the early Leyden jars could be charged to 20,000 to 60,000 volts. The center rod electrode has a metal ball on the end to prevent leakage of the charge into the air by corona discharge. It was first used in electrostatics experiments, and later in high voltage equipment such as spark gap radio transmitters and electrotherapy machines.
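The quoted charging range implies modest but painful stored energies. As a rough sketch, the standard capacitor energy formula E = ½CV² can be applied; the ~1 nF capacitance used here is an assumption based on the typical pint-jar figure given later in this article:

```python
# Rough stored-energy estimate for an early Leyden jar.
# Assumed: ~1 nF capacitance (typical pint-sized jar) and the
# 20,000-60,000 V charging range quoted in the text.
def stored_energy_joules(capacitance_farads, voltage_volts):
    """Energy stored in a capacitor: E = 1/2 * C * V^2."""
    return 0.5 * capacitance_farads * voltage_volts ** 2

C = 1e-9  # farads, assumed typical jar
for v in (20_000, 60_000):
    print(f"{v} V -> {stored_energy_joules(C, v):.2f} J")
# 20 kV gives about 0.2 J; 60 kV about 1.8 J
```

A fraction of a joule to a couple of joules delivered in a sudden spark is easily enough to explain the severe shocks the early experimenters reported.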
Ewald Georg von Kleist discovered the immense storage capability of the Leyden jar while working under a theory that saw electricity as a fluid, and hoped a glass jar filled with alcohol would "capture" this fluid. He was the deacon at the cathedral of Camin in Pomerania.
In October 1745 von Kleist tried to accumulate electricity in a small medicine bottle filled with alcohol with a nail inserted in the cork. He was following up on an experiment developed by Georg Matthias Bose where electricity had been sent through water to set alcoholic spirits alight. He attempted to charge the bottle from a large prime conductor (invented by Bose) suspended above his friction machine.
Kleist was convinced that a substantial electric charge could be collected and held within the glass, which he knew would provide an obstacle to the escape of the 'fluid'. He received a significant shock from the device when he accidentally touched the nail through the cork while still cradling the bottle in his other hand. He communicated his results to at least five different electrical experimenters, in several letters from November 1745 to March 1746, but did not receive any confirmation that they had repeated his results until April 1746. Daniel Gralath learned about Kleist's experiment from seeing the letter to Paul Swietlicki, written in November 1745. After Gralath's failed first attempt to reproduce the experiment in December 1745, he wrote to Kleist for more information (and was told that the experiment would work better if a tube half-filled with alcohol were used). Gralath (in collaboration with ) succeeded in getting the intended effect on 5 March 1746, holding a small glass medicine bottle with a nail inside in one hand, moving it close to an electrostatic generator, and then moving the other hand close to the nail. Kleist did not understand the significance of his conducting hand holding the bottle, and both he and his correspondents were loath to hold the device when told that the shock could throw them across the room. It took some time before Kleist's student associates at Leyden worked out that the hand provided an essential element.
The Leyden jar's invention was long credited to Pieter van Musschenbroek, the physics professor at University of Leiden, who also ran a family foundry which cast brass cannonettes, and a small business ("De Oosterse Lamp" – "The Eastern Lamp") which made scientific and medical instruments for the new university courses in physics and for scientific gentlemen keen to establish their own 'cabinets' of curiosities and instruments.
Like Kleist, Musschenbroek was interested in repeating Bose's experiment. During this time, Andreas Cunaeus, a lawyer, learned about the experiment while visiting Musschenbroek's laboratory, and attempted to duplicate it at home with household items. Using a glass of beer, Cunaeus was unable to make it work. Cunaeus was the first to discover that the setup could deliver a severe shock: he held his jar in his hand while charging it rather than placing it on an insulated stand, not realising that the insulated stand was the standard practice, and thus made himself part of the circuit. He reported his procedure and experience to Allamand, Musschenbroek's colleague. Allamand and Musschenbroek also received severe shocks. Musschenbroek communicated the experiment in a letter of 20 January 1746 to René Antoine Ferchault de Réaumur, who was Musschenbroek's appointed correspondent at the Paris Academy. Abbé Nollet read this report, confirmed the experiment, and then read Musschenbroek's letter in a public meeting of the Paris Academy in April 1746 (translating it from Latin into French).
Musschenbroek's outlet in France for the sale of his company's 'cabinet' devices was the Abbé Nollet (who started building and selling duplicate instruments in 1735). Nollet then gave the electrical storage device the name "Leyden jar" and promoted it as a special type of flask to his market of wealthy men with scientific curiosity.
The "Kleistian jar" was therefore promoted as the "Leyden jar", and as having been discovered by Pieter van Musschenbroek and his acquaintance Andreas Cunaeus. Musschenbroek, however, never claimed that he had invented it, and some think that Cunaeus was mentioned only to diminish credit to him.
Within months after Musschenbroek's report about how to reliably create a Leyden jar, other electrical researchers were making and experimenting with their own Leyden jars. One interest was to see if the total possible charge could be increased. Johann Heinrich Winckler, whose first experience with a single Leyden jar was reported in a letter to the Royal Society on 29 May 1746, had connected three Leyden jars together in a kind of electrostatic battery on 28 July 1746. Daniel Gralath reported in 1747 that in 1746 he had conducted experiments with connecting two or three jars, probably in series. In 1748, Benjamin Franklin developed a system involving 11 panes of glass with thin lead plates glued on each side, and then connected together. He used the term "electrical battery" to describe his electrostatic battery in a 1749 letter about his electrical research in 1748. It is possible that Franklin's choice of the word "battery" was inspired by the humorous wordplay at the conclusion of his letter, where he wrote, among other things, about a salute to electrical researchers from a battery of guns. This is the first recorded use of the term "electrical battery". The multiple and rapid developments for connecting Leyden jars during the period 1746–1748 resulted in a variety of divergent accounts in secondary literature about who made the first "battery" by connecting Leyden jars, whether they were in series or parallel, and who first used the term "battery". The term was later used for combinations of multiple electrochemical cells, the modern meaning of the term "battery".
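The effect of connecting jars, whether in series or in parallel, follows directly from the standard rules for combining capacitances. A minimal sketch, using assumed (not measured) values of about 1 nF per jar:

```python
# Sketch of combining Leyden jars, as in the 1746-1748 "battery" experiments.
# The per-jar value (~1 nF) is an illustrative assumption, not historical data.
def parallel_capacitance(caps):
    # In parallel, capacitances add: more total charge at the same voltage,
    # which is what Franklin's connected glass panes achieved.
    return sum(caps)

def series_capacitance(caps):
    # In series, reciprocals add: capacitance drops, but the stack
    # withstands a higher total voltage.
    return 1.0 / sum(1.0 / c for c in caps)

jars = [1e-9, 1e-9, 1e-9]  # three assumed 1 nF jars
print(parallel_capacitance(jars))  # 3e-09 F
print(series_capacitance(jars))    # about 3.3e-10 F
```

This is why the secondary-literature disputes about series versus parallel connection matter: the two arrangements give very different storage behaviour from the same jars.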
Starting in late 1756, Franz Aepinus, in a complicated interaction of cooperation and independent work with Johan Wilcke, developed an "air condenser", a variation on the Leyden jar, by using air rather than glass as the dielectric. This functioning apparatus, without glass, created a problem for Benjamin Franklin's explanation of the Leyden jar, which maintained that the charge was located in the glass.
Beginning in the late 18th century, the Leyden jar was used in the medical field of electrotherapy to treat a variety of diseases by electric shock. By the middle of the 19th century, the Leyden jar had become common enough for writers to assume their readers knew of and understood its basic operation. Around the turn of the 20th century it began to be widely used in spark-gap transmitters and medical electrotherapy equipment. By the early 20th century, improved dielectrics and the need to reduce size and undesired inductance and resistance for use in the new technology of radio caused the Leyden jar to evolve into the modern compact form of capacitor.
A typical design consists of a glass jar with conducting tin foil coating the inner and outer surfaces. The foil coatings stop short of the mouth of the jar, to prevent the charge from arcing between the foils. A metal rod electrode projects through the stopper at the mouth of the jar, electrically connected by some means (usually a hanging chain) to the inner foil, to allow it to be charged. The jar is charged by an electrostatic generator, or other source of electric charge, connected to the inner electrode while the outer foil is grounded. The inner and outer surfaces of the jar store equal but opposite charges.
The original form of the device is just a glass bottle partially filled with water, with a metal wire passing through a cork closing it. The role of the outer plate is provided by the hand of the experimenter. Soon John Bevis found (in 1747) that it was possible to coat the exterior of the jar with metal foil, and he also found that he could achieve the same effect by using a plate of glass with metal foil on both sides. These developments inspired William Watson in the same year to have a jar made with a metal foil lining both inside and outside, dropping the use of water.
Early experimenters (such as Benjamin Wilson in 1746) reported that the thinner the dielectric and the greater the surface, the greater the charge that could be accumulated.
Further developments in electrostatics revealed that the dielectric material was not essential, but increased the storage capability (capacitance) and prevented arcing between the plates. Two plates separated by a small distance also act as a capacitor, even in a vacuum.
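The point that the dielectric is not essential but increases capacitance can be made quantitative with the parallel-plate formula C = ε₀εᵣA/d. The plate dimensions below are arbitrary assumptions chosen only to illustrate the ratio:

```python
# Parallel-plate capacitance C = eps0 * eps_r * A / d, showing that two plates
# alone form a capacitor and that a dielectric multiplies the capacitance.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m, eps_r=1.0):
    return EPS0 * eps_r * area_m2 / gap_m

A, d = 0.05, 0.002                          # 0.05 m^2 plates, 2 mm gap (assumed)
c_vacuum = plate_capacitance(A, d)          # plates in vacuum: still a capacitor
c_glass = plate_capacitance(A, d, eps_r=7)  # typical soda glass, eps_r ~ 7
print(c_glass / c_vacuum)  # 7.0: the glass multiplies the capacitance
```

The glass in a Leyden jar thus does double duty: it raises the capacitance roughly sevenfold and physically prevents the foils from arcing to each other.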
It was initially believed that the charge was stored in the water in early Leyden jars. In the 1700s American statesman and scientist Benjamin Franklin performed extensive investigations of both water-filled and foil Leyden jars, which led him to conclude that the charge was stored in the glass, not in the water. A popular experiment, due to Franklin, which seems to demonstrate this involves taking a jar apart after it has been charged and showing that little charge can be found on the metal plates, and therefore it must be in the dielectric. The first documented instance of this demonstration is in a 1749 letter by Franklin. Franklin designed a "dissectible" Leyden jar, which was widely used in demonstrations. The jar is constructed out of a glass cup nested between two fairly snugly fitting metal cups. When the jar is charged with a high voltage and carefully dismantled, it is discovered that all the parts may be freely handled without discharging the jar. If the pieces are re-assembled, a large spark may still be obtained from it.
This demonstration appears to suggest that capacitors store their charge inside their dielectric. This theory was taught throughout the 1800s. However, this phenomenon is a special effect caused by the high voltage on the Leyden jar. In the dissectible Leyden jar, charge is transferred to the surface of the glass cup by corona discharge when the jar is disassembled; this is the source of the residual charge after the jar is reassembled. Handling the cup while disassembled does not provide enough contact to remove all the surface charge. Soda glass is hygroscopic and forms a partially conductive coating on its surface, which holds the charge. Addenbrooke (1922) found that in a dissectible jar made of paraffin wax, or glass baked to remove moisture, the charge remained on the metal plates. Zeleny (1944) confirmed these results and observed the corona charge transfer.
Originally, the amount of capacitance was measured in number of 'jars' of a given size, or through the total coated area, assuming reasonably standard thickness and composition of the glass. A typical Leyden jar of one pint size has a capacitance of about 1 nF.
If a charged Leyden jar is discharged by shorting the inner and outer coatings and left to sit for a few minutes, the jar will recover some of its previous charge, and a second spark can be obtained from it. Often this can be repeated, and a series of 4 or 5 sparks, decreasing in length, can be obtained at intervals. This effect is caused by dielectric absorption.
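The diminishing series of recovery sparks can be sketched with a deliberately simple toy model of dielectric absorption: some charge "soaks" into the glass, and after each shorting a fraction of it migrates back to the plates. The recovery fraction and threshold below are illustrative assumptions, not physical measurements:

```python
# Toy model of dielectric absorption in a Leyden jar: a fixed fraction of the
# charge absorbed by the glass returns to the plates after each discharge,
# producing a series of ever-weaker sparks. Parameters are assumptions.
def recovery_sparks(initial_charge, fraction=0.3, threshold=0.01):
    sparks = []
    absorbed = initial_charge * fraction  # charge soaked into the dielectric
    while absorbed * fraction > threshold * initial_charge:
        recovered = absorbed * fraction   # portion that returns to the plates
        sparks.append(recovered)
        absorbed -= recovered             # the reservoir shrinks each time
    return sparks

print(recovery_sparks(1.0))  # each successive spark is smaller than the last
```

A geometric decay of this kind reproduces the qualitative behaviour described above: a handful of recoverable sparks, each shorter than the previous one.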
In 1747–1748, Benjamin Franklin experimented with charging Leyden jars in series.
Lennon Wall
The Lennon Wall or John Lennon Wall is a wall in Prague, Czechia. Since the 1980s this once typical wall has been filled with John Lennon-inspired graffiti, lyrics from Beatles' songs, and designs relating to local and global causes.
Located in a small and secluded square across from the French Embassy, the wall had been decorated with love poems and short messages against the regime since the 1960s. It received its first Lennon-related decoration following the murder of John Lennon in 1980, when an unknown artist painted a single image of the singer-songwriter, who had become a symbol of freedom, Western culture, and political struggle, along with some of his lyrics.
In 1988, the wall was a source of irritation for Gustáv Husák's communist regime. Following a short-lived era of democratization and political liberalization known as the Prague Spring, the newly installed communist government had dismantled the reforms, inspiring anger and resistance. Young Czechs wrote their grievances on the wall and, according to a report of the time, this led to a clash between hundreds of students and security police on the nearby Charles Bridge. The liberalization movement these students followed was described as "Lennonism" (not to be confused with "Leninism"), and Czech authorities described participants variously as alcoholic, mentally deranged, sociopathic, and agents of Western free market capitalism.
The wall continuously undergoes change, and the original portrait of Lennon is long lost under layers of new paint. Even when the wall was repainted by authorities, by the next day it was again full of poems and flowers. Today, the wall represents a symbol of global ideals such as love and peace.
The wall is owned by the Sovereign Military Order of Malta, which allowed the graffiti, and is located at "Velkopřevorské náměstí" (Grand Priory Square), Malá Strana.
On 17 November 2014, the 25th anniversary of the Velvet Revolution, the wall was painted over in pure white by a group of art students, leaving only the text "wall is over". The Knights of Malta initially filed a criminal complaint for vandalism against the students, but retracted it after meeting with them.
As of 23 July 2017, the mural remained in place, with the text "Wall is Over" altered to "War Is Over", a reference to the John Lennon song.
On 22 April 2019, Earth Day, the action group Extinction Rebellion repainted the entire wall with slogans demanding action from the Czech government on climate change. "KLIMATICKÁ NOUZE" was painted in large block print letters, which reads "climate emergency" in the Czech language. Members of the public were encouraged to add their own messages during the process, resulting in calls for action painted in several languages. A giant image of a skull was also painted. The repaint was carried out in a manner which allowed some of the existing artwork to be included on the new wall.
In July 2019, artists painted a memorial on the wall for Hong Kong democracy activist Marco Leung Ling-kit, who became known as a martyr and a symbol of hope for the 2019 anti-extradition bill protest movement. The image on the wall depicts the yellow raincoat he was wearing during the banner drop that eventually led to a fall from the building, along with some words of solidarity: "Hong Kong, Add oil."
On 4 August 2019, it was reported that the wall would be put under CCTV surveillance to block unlawful graffiti and to manage the crowds of tourists who pass by it every day.
In October 2019, the Sovereign Military Order of Malta, together with the Prague 1 district, began a reconstruction of the Lennon Wall that lasted until November. The work responded to recent vandalism of the wall and its surroundings, linked to overtourism that had become unmanageable over the summer. The aim was to restore the site to a respectable form, to be unveiled as an open-air gallery with new rules on the occasion of the 30th anniversary of the Velvet Revolution in November. On 7 November 2019, the new face of the Lennon Wall as an open-air gallery was introduced to the public, painted by over 30 Czech and foreign professional artists assembled by the Czech designer Pavel Šťastný. Under the new rules, spray paint is no longer allowed: people can leave messages connected to freedom and love only in the white free zones, and only in gentler media such as pencil, marker or chalk. Cameras and police monitor the wall to ensure the artwork is not defaced.
During the 2014 democracy protests in Hong Kong, a similar Lennon Wall appeared along the staircase outside of the Hong Kong Central Government Offices. Inspired by the original in Prague, many thousands of people posted colourful post-it notes expressing democratic wishes for Hong Kong. The wall was one of the major arts of the Umbrella Movement. Throughout the several months of occupations and protest, many efforts were made by different groups to ensure physical and digital preservation of the Hong Kong Lennon Wall.
Five years later, during the 2019 Hong Kong anti-extradition bill protests, the same wall was again created, with new post-it notes. Within days, dozens of post-it-note Lennon Walls had "blossomed everywhere" (遍地開花) throughout Hong Kong, including on Hong Kong Island itself, Kowloon, the New Territories, and on the many outlying islands. There are even some Lennon Walls located inside government offices, including RTHK and the Policy Innovation and Co-ordination Office. According to a crowd-sourced map of Hong Kong, there are over 150 Lennon Walls throughout the region.
On 21 September 2019, police in Hong Kong began tearing down Lennon Walls across the city to remove anti-government statements.
Lennon Walls have also appeared outside of Hong Kong in the cities of Toronto, Vancouver BC, Calgary, Seoul, Tokyo, Berlin, London, Sydney, Manchester, Melbourne, Taipei, and Auckland.
Los Angeles
Los Angeles (; ; ), officially the City of Los Angeles and often known by its initials L.A., is the largest city in California. With an estimated population of nearly four million people, it is the second-most populous city in the United States (after New York City) and the third-most populous city in North America (after Mexico City and New York City). Los Angeles is known for its Mediterranean climate, ethnic diversity, Hollywood entertainment industry, and its sprawling metropolis.
Los Angeles lies in a basin in Southern California, adjacent to the Pacific Ocean, with mountains as high as , and deserts. The city, which covers about , is the seat of Los Angeles County, the most populous county in the United States. The Los Angeles metropolitan area (MSA) is home to 13.1 million people, making it the second-largest metropolitan area in the nation after New York. Greater Los Angeles includes metro Los Angeles as well as the Inland Empire and Ventura County. It is the second-most populous U.S. combined statistical area, also after New York, with a 2015 estimate of 18.7 million people.
Home to the Chumash and Tongva, Los Angeles was claimed by Juan Rodríguez Cabrillo for Spain in 1542. The city was founded on September 4, 1781, by Spanish governor Felipe de Neve. It became a part of Mexico in 1821 following the Mexican War of Independence. In 1848, at the end of the Mexican–American War, Los Angeles and the rest of California were purchased as part of the Treaty of Guadalupe Hidalgo, and thus became part of the United States. Los Angeles was incorporated as a municipality on April 4, 1850, five months before California achieved statehood. The discovery of oil in the 1890s brought rapid growth to the city. The city was further expanded with the completion of the Los Angeles Aqueduct in 1913, which delivers water from Eastern California.
Los Angeles has a diverse economy and hosts businesses in a broad range of professional and cultural fields. It also has the busiest container port in the Americas. A global city, it has been ranked 7th in the Global Cities Index and 9th in the Global Economic Power Index. The Los Angeles metropolitan area also has a gross metropolitan product of $1.0 trillion, making it the third-largest city by GDP in the world, after the Tokyo and New York City metropolitan areas. Los Angeles hosted the 1932 and 1984 Summer Olympics and will host the 2028 Summer Olympics.
The Los Angeles coastal area was settled by the Tongva ("Gabrieleños") and Chumash tribes. A Gabrieleño settlement in the area was called "iyáangẚ" (written "Yang-na" by the Spanish), meaning "poison oak place".
Maritime explorer Juan Rodríguez Cabrillo claimed the area of southern California for the Spanish Empire in 1542 while on an official military exploring expedition moving north along the Pacific coast from earlier colonizing bases of New Spain in Central and South America. Gaspar de Portolà and Franciscan missionary Juan Crespí, reached the present site of Los Angeles on August 2, 1769.
In 1771, Franciscan friar Junípero Serra directed the building of the Mission San Gabriel Arcángel, the first mission in the area. On September 4, 1781, a group of forty-four settlers known as "Los Pobladores" founded the pueblo they called . The present-day city has the largest Roman Catholic Archdiocese in the United States. Two-thirds of the Mexican or (New Spain) settlers were mestizo or mulatto, a mixture of African, indigenous and European ancestry. The settlement remained a small ranch town for decades, but by 1820, the population had increased to about 650 residents. Today, the pueblo is commemorated in the historic district of Los Angeles Pueblo Plaza and Olvera Street, the oldest part of Los Angeles.
New Spain achieved its independence from the Spanish Empire in 1821, and the pueblo continued as a part of Mexico. During Mexican rule, Governor Pío Pico made Los Angeles Alta California's regional capital.
Mexican rule ended during the Mexican–American War: Americans took control from the Californios after a series of battles, culminating with the signing of the Treaty of Cahuenga on January 13, 1847.
Railroads arrived with the completion of the transcontinental Southern Pacific line to Los Angeles in 1876 and the Santa Fe Railroad in 1885. Petroleum was discovered in the city and surrounding area in 1892, and by 1923, the discoveries had helped California become the country's largest oil producer, accounting for about one-quarter of the world's petroleum output.
By 1900, the population had grown to more than 102,000, putting pressure on the city's water supply. The completion of the Los Angeles Aqueduct in 1913, under the supervision of William Mulholland, assured the continued growth of the city. Because of clauses in the city's charter that prevented the City of Los Angeles from selling or providing water from the aqueduct to any area outside its borders, many adjacent cities and communities felt compelled to annex themselves into Los Angeles.
Los Angeles created the first municipal zoning ordinance in the United States. On September 14, 1908, the Los Angeles City Council promulgated residential and industrial land use zones. The new ordinance established three residential zones of a single type, where industrial uses were prohibited. The proscriptions included barns, lumber yards, and any industrial land use employing machine-powered equipment. These laws were enforced against industrial properties after the fact. These prohibitions were in addition to existing activities that were already regulated as nuisances, including explosives warehousing, gas works, oil-drilling, slaughterhouses, and tanneries. The City Council also designated seven industrial zones within the city. However, between 1908 and 1915, the council created various exceptions to the broad proscriptions that applied to the three residential zones, and as a consequence, some industrial uses emerged within them. There are two differences between the 1908 Residence District Ordinance and later zoning laws in the United States. First, the 1908 laws did not establish a comprehensive zoning map as the 1916 New York City Zoning Ordinance did. Second, the residential zones did not distinguish types of housing; they treated apartments, hotels, and detached single-family housing equally.
In 1910, Hollywood merged into Los Angeles, with 10 movie companies already operating in the city at the time. By 1921, more than 80 percent of the world's film industry was concentrated in LA. The money generated by the industry kept the city insulated from much of the economic loss suffered by the rest of the country during the Great Depression.
By 1930, the population surpassed one million. In 1932, the city hosted the Summer Olympics.
During World War II, Los Angeles was a major center of wartime manufacturing, such as shipbuilding and aircraft. Calship built hundreds of Liberty Ships and Victory Ships on Terminal Island, and the Los Angeles area was the headquarters of six of the country's major aircraft manufacturers (Douglas Aircraft Company, Hughes Aircraft, Lockheed, North American Aviation, Northrop Corporation, and Vultee). During the war, more aircraft were produced in one year than in all the pre-war years since the Wright brothers flew the first airplane in 1903, combined. Manufacturing in Los Angeles skyrocketed, and as William S. Knudsen, of the National Defense Advisory Commission put it, "We won because we smothered the enemy in an avalanche of production, the like of which he had never seen, nor dreamed possible."
Following the end of World War II, Los Angeles grew more rapidly than ever, sprawling into the San Fernando Valley. The expansion of the Interstate Highway System during the 1950s and 1960s helped propel suburban growth and signaled the demise of the city's electrified rail system, once the world's largest.
Before the 1950s, Los Angeles' name had multiple pronunciations, but the soft "G" pronunciation is universal today. Some early films and video recordings show it pronounced with a hard "G" (). Sam Yorty was one of the last public figures who still used the hard "G" pronunciation.
The 1960s saw race relations boil over into the Watts riots of 1965, which resulted in 34 deaths and over 1,000 injuries. In 1969, Los Angeles became the birthplace of the Internet, as the first ARPANET transmission was sent from the University of California, Los Angeles (UCLA) to SRI in Menlo Park.
In 1973, Tom Bradley was elected as the city's first African American mayor, serving five terms until retiring in 1993. Other events in the city during the 1970s included the Symbionese Liberation Army's South Central standoff in 1974 and the Hillside Stranglers murder cases of 1977–1978. In 1978, Daryl Gates, as outspoken as his predecessor Edward Davis, became the Los Angeles Police Department's 49th police chief; the same year, the city's Dorothy Chandler Pavilion hosted the 50th Academy Awards ceremony and Jimmy Carter made his first presidential visit to the city. The decade closed in 1979 with the 50th anniversary of the Academy Awards (the 51st ceremony, also at the Dorothy Chandler Pavilion), Carter's second visit, and the City Council's passage, and Bradley's signing, of the city's first homosexual rights bill.
In 1984, the city hosted the Summer Olympic Games for the second time. Despite a boycott by 14 Communist countries, the 1984 Olympics became more financially successful than any previous Games, and only the second Olympics to that point to turn a profit; the other, according to an analysis of contemporary newspaper reports, was the 1932 Summer Olympics, also held in Los Angeles.
Racial tensions erupted on April 29, 1992, with the acquittal by a Simi Valley jury of four Los Angeles Police Department (LAPD) officers captured on videotape beating Rodney King, culminating in large-scale riots.
In 1994, the magnitude 6.7 Northridge earthquake shook the city, causing $12.5 billion in damage and 72 deaths. The century ended with the Rampart scandal, one of the most extensive documented cases of police misconduct in American history.
In 2002, Mayor James Hahn led the campaign against secession, resulting in voters defeating efforts by the San Fernando Valley and Hollywood to secede from the city.
Los Angeles will host the 2028 Summer Olympics and Paralympic Games, making Los Angeles the third city to host the Olympics three times.
The city of Los Angeles covers a total area of , comprising of land and of water. The city extends for north-south and for east-west. The perimeter of the city is .
Los Angeles is both flat and hilly. The highest point in the city proper is Mount Lukens at , located at the northeastern end of the San Fernando Valley. The eastern end of the Santa Monica Mountains stretches from Downtown to the Pacific Ocean and separates the Los Angeles Basin from the San Fernando Valley. Other hilly parts of Los Angeles include the Mt. Washington area north of Downtown, eastern parts such as Boyle Heights, the Crenshaw district around the Baldwin Hills, and the San Pedro district.
Surrounding the city are much higher mountains. Immediately to the north lie the San Gabriel Mountains, a popular recreation area for Angelenos. Their high point is Mount San Antonio, locally known as Mount Baldy, which reaches . Further afield, the highest point in the Greater Los Angeles area is San Gorgonio Mountain, with a height of .
The Los Angeles River, which is largely seasonal, is the primary drainage channel. It was straightened and lined in of concrete by the Army Corps of Engineers to act as a flood control channel. The river begins in the Canoga Park district of the city, flows east from the San Fernando Valley along the north edge of the Santa Monica Mountains, and turns south through the city center, flowing to its mouth in the Port of Long Beach at the Pacific Ocean. The smaller Ballona Creek flows into the Santa Monica Bay at Playa del Rey.
Los Angeles is rich in native plant species partly because of its diversity of habitats, including beaches, wetlands, and mountains. The most prevalent plant communities are coastal sage scrub, chaparral shrubland, and riparian woodland. Native plants include the California poppy, matilija poppy, toyon, ceanothus, chamise, coast live oak, sycamore, willow, and giant wildrye. Many of these native species, such as the Los Angeles sunflower, have become so rare as to be considered endangered. Although it is not native to the area, the official tree of Los Angeles is the coral tree ("Erythrina caffra") and the official flower of Los Angeles is the bird of paradise ("Strelitzia reginae"). Mexican fan palms, Canary Island palms, queen palms, date palms, and California fan palms are common in the Los Angeles area, although only the last is native.
Los Angeles is subject to earthquakes because of its location on the Pacific Ring of Fire. The geologic instability has produced numerous faults, which cause approximately 10,000 earthquakes annually in Southern California, though most of them are too small to be felt. The strike-slip San Andreas Fault system, which sits at the boundary between the Pacific Plate and the North American Plate, is considered the likely source of a future "big one", a potentially large and damaging event. The Los Angeles basin and metropolitan area are also at risk from blind thrust earthquakes. Major earthquakes that have hit the Los Angeles area include the 1933 Long Beach, 1971 San Fernando, 1987 Whittier Narrows, and 1994 Northridge events. The USGS has released the UCERF California earthquake forecast, which models earthquake occurrence in California. Parts of the city are also vulnerable to tsunamis; harbor areas were damaged by waves from the Aleutian Islands earthquake in 1946, the Valdivia earthquake in 1960, the Alaska earthquake in 1964, the Chile earthquake in 2010 and the Japan earthquake in 2011.
The city is divided into many different districts and neighborhoods, some of which were incorporated cities that merged with Los Angeles. These neighborhoods were developed piecemeal, and are well-defined enough that the city has signage marking nearly all of them.
The city's street patterns generally follow a grid plan, with uniform block lengths and occasional roads that cut across blocks. However, this is complicated by rugged terrain, which has necessitated having different grids for each of the valleys that Los Angeles covers. Major streets are designed to move large volumes of traffic through many parts of the city, and many of them are extremely long: Sepulveda Boulevard runs about 43 miles (69 km), while Foothill Boulevard runs over 60 miles (97 km), reaching as far east as San Bernardino. Drivers in Los Angeles suffer from one of the worst rush hour periods in the world, according to an annual traffic index by navigation-system maker TomTom. LA drivers spend an additional 92 hours in traffic each year, and congestion during the peak rush hour reaches 80%, according to the index.
Los Angeles is often characterized by the presence of low-rise buildings. Outside of a few centers such as Downtown, Warner Center, Century City, Koreatown, Miracle Mile, Hollywood, and Westwood, skyscrapers and high-rise buildings are not common. The few skyscrapers built outside of those areas often stand out above the rest of the surrounding landscape. Most construction is done in separate units, rather than wall-to-wall. That being said, Downtown Los Angeles itself has many buildings over 30 stories, with fourteen over 50 stories, and two over 70 stories, the tallest of which is the Wilshire Grand Center. Also, Los Angeles is increasingly becoming a city of apartments rather than single family dwellings, especially in the dense inner city and Westside neighborhoods.
Important landmarks in Los Angeles include the Hollywood Sign, Walt Disney Concert Hall, Capitol Records Building, the Cathedral of Our Lady of the Angels, Angels Flight, Grauman's Chinese Theatre, Dolby Theatre, Griffith Observatory, Getty Center, Getty Villa, Stahl House, the Los Angeles Memorial Coliseum, L.A. Live, the Los Angeles County Museum of Art, the Venice Canal Historic District and boardwalk, Theme Building, Bradbury Building, U.S. Bank Tower, Wilshire Grand Center, Hollywood Boulevard, Los Angeles City Hall, Hollywood Bowl, the Battleship Iowa, Watts Towers, Staples Center, Dodger Stadium, and Olvera Street.
Los Angeles has a Mediterranean climate (Köppen "Csb" on the coast and most of downtown, "Csa" near the metropolitan region to the west), and receives just enough annual precipitation to avoid a semi-arid ("BSh") classification, so the popular notion that the city was built in a desert is not completely incorrect. Daytime temperatures are generally temperate all year round. In winter, they average around 68 °F (20 °C), giving the city a near-tropical feel, although cool night temperatures keep it a few degrees short of a true tropical climate on average. Los Angeles has plenty of sunshine throughout the year, with an average of only 35 days with measurable precipitation annually. The coastal region around Los Angeles has a climate comparable to coastal areas of southeastern Spain, such as Alicante or Elche, in temperature range and variation, sunshine hours, and annual precipitation.
Temperatures in the coastal basin exceed 90 °F (32 °C) on a dozen or so days in the year, from one day a month in April, May, June, and November to three days a month in July, August, and October, and five days in September. Temperatures in the San Fernando and San Gabriel Valleys are considerably warmer. Temperatures are subject to substantial daily swings; in inland areas the difference between the average daily low and the average daily high is over 30 °F (17 °C). The average annual temperature of the sea is 63 °F (17 °C), from 58 °F (14 °C) in January to 68 °F (20 °C) in August. Hours of sunshine total more than 3,000 per year, from an average of 7 hours of sunshine per day in December to an average of 12 in July.
The Los Angeles area is also subject to phenomena typical of a microclimate, causing extreme variations in temperature in close physical proximity to each other. For example, the average July maximum temperature at the Santa Monica Pier is 75 °F (24 °C), whereas it is 95 °F (35 °C) in Canoga Park, 15 miles (24 km) away. The city, like much of the southern California coast, is subject to a late spring/early summer weather phenomenon called "June Gloom". This involves overcast or foggy skies in the morning that yield to sun by early afternoon.
Downtown Los Angeles averages 14.93 inches (379 mm) of precipitation annually, mainly occurring between November and March, generally in the form of moderate rain showers, but sometimes as heavy rainfall during winter storms. Rainfall is usually higher in the hills and coastal slopes of the mountains because of orographic uplift. Summer days are usually rainless. Rarely, an incursion of moist air from the south or east can bring brief thunderstorms in late summer, especially to the mountains. The coast gets slightly less rainfall, while the inland and mountain areas get considerably more. Years of average rainfall are rare. The usual pattern is year-to-year variability, with a short string of dry years of 5–10 inches (130–250 mm) of rainfall, followed by one or two wet years with more than 20 inches (510 mm). Wet years are usually associated with warm-water El Niño conditions in the Pacific, dry years with cooler-water La Niña episodes. A series of rainy days can bring floods to the lowlands and mudslides to the hills, especially after wildfires have denuded the slopes.
Both freezing temperatures and snowfall are extremely rare in the city basin and along the coast, with the last occurrence of a 32 °F (0 °C) reading at the downtown station being January 29, 1979; freezing temperatures occur nearly every year in valley locations, while the mountains within city limits typically receive snowfall every winter. The greatest snowfall recorded in downtown Los Angeles was 2.0 inches (5 cm) on January 15, 1932. The most recent snowfall occurred in February 2019, the first since 1962. At the official downtown station, the highest recorded temperature is 113 °F (45 °C), on September 27, 2010, while the lowest is 28 °F (−2 °C), on January 4, 1949. During autumn and winter, Santa Ana winds sometimes bring much warmer and drier conditions to Los Angeles, and raise wildfire risk.
A Gabrielino settlement in the area was called "iyáanga'" (written "Yang-na" by the Spanish), which has been translated as "poison oak place"; it has also been translated as "the valley of smoke". Owing to its geography, heavy reliance on automobiles, and the Los Angeles/Long Beach port complex, Los Angeles suffers from air pollution in the form of smog. The Los Angeles Basin and the San Fernando Valley are susceptible to atmospheric inversion, which holds in the exhausts from road vehicles, airplanes, locomotives, shipping, manufacturing, and other sources. The percentage of small-particle pollution (the kind that penetrates into the lungs) coming from vehicles in the city can reach as high as 55 percent.
The smog season lasts from approximately May to October. While other large cities rely on rain to clear smog, Los Angeles gets only about 15 inches (380 mm) of rain each year, so pollution can accumulate over many consecutive days. Issues of air quality in Los Angeles and other major cities led to the passage of early national environmental legislation, including the Clean Air Act. When the act was passed, California was unable to create a State Implementation Plan that would enable it to meet the new air quality standards, largely because of the level of pollution in Los Angeles generated by older vehicles. More recently, the state of California has led the nation in working to limit pollution by mandating low-emission vehicles. Smog is expected to continue to drop in the coming years because of aggressive steps to reduce it, which include electric and hybrid cars, improvements in mass transit, and other measures.
The number of Stage 1 smog alerts in Los Angeles has declined from over 100 per year in the 1970s to almost zero in the new millennium. Despite the improvement, the 2006 and 2007 annual reports of the American Lung Association ranked the city as the most polluted in the country for both short-term and year-round particle pollution. In 2008, the city was ranked the second most polluted and again had the highest year-round particulate pollution. The city met its goal of providing 20 percent of its power from renewable sources in 2010. The American Lung Association's 2013 survey ranks the metro area as having the nation's worst smog, and fourth in both short-term and year-round pollution amounts.
Los Angeles is also home to the nation's largest urban oil field. There are more than 700 active oil wells within 1,500 feet of homes, churches, schools and hospitals in the city, a situation about which the EPA has voiced serious concerns.
The 2010 United States Census reported Los Angeles had a population of 3,792,621. The population density was 8,092.3 people per square mile (3,124.4/km²). The age distribution was 874,525 people (23.1%) under 18, 434,478 people (11.5%) from 18 to 24, 1,209,367 people (31.9%) from 25 to 44, 877,555 people (23.1%) from 45 to 64, and 396,696 people (10.5%) who were 65 or older. The median age was 34.1 years. For every 100 females, there were 99.2 males. For every 100 females age 18 and over, there were 97.6 males.
There were 1,413,995 housing units—up from 1,298,350 during 2005–2009—at an average density of 2,812.8 households per square mile (1,086.0/km²), of which 503,863 (38.2%) were owner-occupied, and 814,305 (61.8%) were occupied by renters. The homeowner vacancy rate was 2.1%; the rental vacancy rate was 6.1%. 1,535,444 people (40.5% of the population) lived in owner-occupied housing units and 2,172,576 people (57.3%) lived in rental housing units.
According to the 2010 United States Census, Los Angeles had a median household income of $49,497, with 22.0% of the population living below the federal poverty line.
According to the 2010 Census, the racial makeup of Los Angeles included: 1,888,158 Whites (49.8%), 365,118 African Americans (9.6%), 28,215 Native Americans (0.7%), 426,959 Asians (11.3%), 5,577 Pacific Islanders (0.1%), 902,959 from other races (23.8%), and 175,635 (4.6%) from two or more races. Hispanics or Latinos of any race were 1,838,822 persons (48.5%). Los Angeles is home to people from more than 140 countries speaking 224 different identified languages. Ethnic enclaves like Chinatown, Historic Filipinotown, Koreatown, Little Armenia, Little Ethiopia, Tehrangeles, Little Tokyo, Little Bangladesh, and Thai Town provide examples of the polyglot character of Los Angeles.
Non-Hispanic whites were 28.7% of the population in 2010, compared to 86.3% in 1940. The majority of the non-Hispanic white population lives in areas along the Pacific coast and in neighborhoods near and on the Santa Monica Mountains, from the Pacific Palisades to Los Feliz.
People of Mexican ancestry make up the largest Hispanic group, at 31.9% of the city's population, followed by those of Salvadoran (6.0%) and Guatemalan (3.6%) heritage. The Hispanic population, with long-established Mexican-American and Central American communities, is spread throughout nearly the entire city of Los Angeles and its metropolitan area. It is most heavily concentrated in regions around Downtown, such as East Los Angeles, Northeast Los Angeles, and Westlake. Furthermore, a vast majority of residents in the neighborhoods of eastern South Los Angeles toward Downey are of Hispanic origin.
The largest Asian ethnic groups are Filipinos (3.2%) and Koreans (2.9%), each with an established ethnic enclave of its own: Koreatown in the Wilshire Center and Historic Filipinotown. Chinese people, who make up 1.8% of Los Angeles's population, reside mostly outside the city limits, in the San Gabriel Valley of eastern Los Angeles County, but maintain a sizable presence in the city, notably in Chinatown. Chinatown and Thai Town are also home to many Thais and Cambodians, who make up 0.3% and 0.1% of Los Angeles's population, respectively. The Japanese comprise 0.9% of LA's population and have an established Little Tokyo in the city's downtown; another significant community of Japanese Americans is in the Sawtelle district of West Los Angeles. Vietnamese make up 0.5% of Los Angeles's population, and Indians 0.9%.
The Los Angeles metropolitan area is home to a large population of Armenians, Assyrians, and Iranians, many of whom live in enclaves like Little Armenia and Tehrangeles.
African Americans have been the predominant ethnic group in South Los Angeles, which has emerged as the largest African American community in the western United States since the 1960s. The neighborhoods of South Los Angeles with the highest concentrations of African Americans include Crenshaw, Baldwin Hills, Leimert Park, Hyde Park, Gramercy Park, Manchester Square, and Watts. Apart from South Los Angeles, neighborhoods in the central region of Los Angeles, such as Mid-City and Mid-Wilshire, have a moderate concentration of African Americans as well.
According to a 2014 study by the Pew Research Center, Christianity is the most prevalently practiced religion in Los Angeles (65%). Perhaps owing to the city's founding by Roman Catholic Franciscan friars, the Roman Catholic Archbishop of Los Angeles leads the largest archdiocese in the country. Cardinal Roger Mahony oversaw construction of the Cathedral of Our Lady of the Angels, which opened in September 2002 in Downtown Los Angeles. Construction of the cathedral marked a coming of age for the city's Catholic, heavily Latino community. There are numerous Catholic churches and parishes throughout Los Angeles.
In 2011, the once common, but ultimately lapsed, custom of conducting a procession and Mass in honor of Nuestra Señora de los Ángeles, in commemoration of the founding of the City of Los Angeles in 1781, was revived by the Queen of Angels Foundation and its founder Mark Albert, with the support and approbation of the Archdiocese of Los Angeles as well as several civic leaders. The recently revived custom is a continuation of the original processions and Masses that commenced on the first anniversary of the founding of Los Angeles in 1782 and continued for nearly a century thereafter.
With 621,000 Jews in the metropolitan area (490,000 in city proper), the region has the second-largest population of Jews in the United States. Many of Los Angeles's Jews now live on the Westside and in the San Fernando Valley, though Boyle Heights once had a large Jewish population prior to World War II due to restrictive housing covenants. Major Orthodox Jewish neighborhoods include Hancock Park, Pico-Robertson, and Valley Village, while Jewish Israelis are well represented in the Encino and Tarzana neighborhoods, and Persian Jews in Beverly Hills. Many varieties of Judaism are represented in the greater Los Angeles area, including Reform, Conservative, Orthodox, and Reconstructionist. The Breed Street Shul in East Los Angeles, built in 1923, was the largest synagogue west of Chicago in its early decades; it is no longer in daily use as a synagogue and is being converted to a museum and community center. The Kabbalah Centre also has a presence in the city.
The International Church of the Foursquare Gospel was founded in Los Angeles by Aimee Semple McPherson in 1923 and remains headquartered there to this day. For many years, the church convened at Angelus Temple, which, when built, was one of the largest churches in the country.
Los Angeles has had a rich and influential Protestant tradition. The first Protestant service in Los Angeles was a Methodist meeting held in a private home in 1850, and the oldest Protestant church still operating, First Congregational Church, was founded in 1867. In the early 1900s, the Bible Institute of Los Angeles published the founding documents of the Christian fundamentalist movement, and the Azusa Street Revival launched Pentecostalism. The Metropolitan Community Church also had its origins in the Los Angeles area. Important churches in the city include First Presbyterian Church of Hollywood, Bel Air Presbyterian Church, First African Methodist Episcopal Church of Los Angeles, West Angeles Church of God in Christ, Second Baptist Church, Crenshaw Christian Center, McCarty Memorial Christian Church, and First Congregational Church.
The Los Angeles California Temple, the second-largest temple operated by The Church of Jesus Christ of Latter-day Saints, is on Santa Monica Boulevard in the Westwood neighborhood of Los Angeles. Dedicated in 1956, it was the first temple of The Church of Jesus Christ of Latter-day Saints built in California and it was the largest in the world when completed.
The Hollywood region of Los Angeles is also home to several significant Scientology institutions, including headquarters buildings, churches, and the Celebrity Center.
Because of Los Angeles's large multi-ethnic population, a wide variety of faiths are practiced, including Buddhism, Hinduism, Islam, Zoroastrianism, Sikhism, Bahá'í, various Eastern Orthodox churches, Sufism, Shintoism, Taoism, Confucianism, Chinese folk religion, and countless others. Immigrants from Asia, for example, have formed a number of significant Buddhist congregations, making the city home to the greatest variety of Buddhists in the world. The first Buddhist joss house was founded in the city in 1875. Atheism and other secular beliefs are also common, as the city is the largest in the Western U.S. Unchurched Belt.
The economy of Los Angeles is driven by international trade, entertainment (television, motion pictures, video games, music recording, and production), aerospace, technology, petroleum, fashion, apparel, and tourism. Other significant industries include finance, telecommunications, law, healthcare, and transportation. In the 2017 Global Financial Centres Index, Los Angeles was ranked as having the 19th most competitive financial center in the world, and the sixth most competitive in the United States (after New York City, San Francisco, Chicago, Boston, and Washington, D.C.).
One of the five major film studios, Paramount Pictures, is within the city limits, its location being part of the so-called "Thirty-Mile Zone" of entertainment headquarters in Southern California.
Los Angeles is the largest manufacturing center in the United States. The contiguous ports of Los Angeles and Long Beach together comprise the busiest port in the United States by some measures and the fifth-busiest port in the world, vital to trade within the Pacific Rim.
The Los Angeles metropolitan area has a gross metropolitan product of $1.0 trillion, making it the third-largest economic metropolitan area in the world, after Tokyo and New York. Los Angeles has been classified as an "alpha world city" according to a 2012 study by a group at Loughborough University.
Los Angeles is home to three Fortune 500 companies: AECOM, CBRE Group, and Reliance Steel & Aluminum Co.
The Department of Cannabis Regulation enforces cannabis legislation following the 2016 legalization of the sale and distribution of cannabis. Companies must be licensed by the local agency to grow, test, or sell cannabis. Each jurisdiction (cities and counties) may license none or only some of these activities. Local governments may not prohibit adults from growing, using, or transporting marijuana for personal use. More than 300 existing cannabis businesses (both retailers and their suppliers) have been granted approval to operate in what is considered the nation's largest market. The city has also developed a social equity program to help communities disproportionately affected by the criminalization of marijuana.
Los Angeles is often billed as the "Creative Capital of the World" because one in every six of its residents works in a creative industry, and there are more artists, writers, filmmakers, actors, dancers, and musicians living and working in Los Angeles than in any other city at any time in history.
The city's Hollywood neighborhood has become recognized as the center of the motion picture industry, and the Los Angeles area is likewise regarded as the center of the television industry. The city is home to the major film studios as well as major record labels. Los Angeles plays host to the annual Academy Awards, the Primetime Emmy Awards, the Grammy Awards, and many other entertainment industry award shows. Los Angeles is the site of the USC School of Cinematic Arts, the oldest film school in the United States.

The performing arts play a major role in Los Angeles's cultural identity. According to the USC Stevens Institute for Innovation, "there are more than 1,100 annual theatrical productions and 21 openings every week." The Los Angeles Music Center is "one of the three largest performing arts centers in the nation", with more than 1.3 million visitors per year. The Walt Disney Concert Hall, centerpiece of the Music Center, is home to the prestigious Los Angeles Philharmonic. Notable organizations such as Center Theatre Group, the Los Angeles Master Chorale, and the Los Angeles Opera are also resident companies of the Music Center. Talent is locally cultivated at premier institutions such as the Colburn School and the USC Thornton School of Music.
There are 841 museums and art galleries in Los Angeles County, more museums per capita than any other city in the U.S. Some of the notable museums are the Los Angeles County Museum of Art (the largest art museum in the Western United States), the Getty Center (part of the J. Paul Getty Trust, the world's wealthiest art institution), the Petersen Automotive Museum, the Huntington Library, the Natural History Museum, the Battleship Iowa, and the Museum of Contemporary Art. A significant number of art galleries are on Gallery Row, and tens of thousands attend the monthly Downtown Art Walk there.
The city of Los Angeles and its metropolitan area are the home of eleven top level professional sports teams, several of which play in neighboring communities but use Los Angeles in their name. These teams include the Los Angeles Dodgers and Los Angeles Angels of Major League Baseball (MLB), the Los Angeles Rams and Los Angeles Chargers of the National Football League (NFL), the Los Angeles Lakers and Los Angeles Clippers of the National Basketball Association (NBA), the Los Angeles Kings and Anaheim Ducks of the National Hockey League (NHL), the Los Angeles Galaxy and Los Angeles Football Club of Major League Soccer (MLS), and the Los Angeles Sparks of the Women's National Basketball Association (WNBA).
Other notable sports teams include the UCLA Bruins and the USC Trojans in the National Collegiate Athletic Association (NCAA), both of which are Division I teams in the Pac-12 Conference.
Los Angeles is the second-largest city in the United States but hosted no NFL team between 1995 and 2015. At one time, the Los Angeles area hosted two NFL teams: the Rams and the Raiders. Both left the city in 1995, with the Rams moving to St. Louis, and the Raiders moving back to their original home of Oakland. After 21 seasons in St. Louis, on January 12, 2016, the NFL announced the Rams would be moving back to Los Angeles for the 2016 NFL season. SoFi Stadium in Inglewood, California is under construction and will be completed by the 2020 season. Prior to 1995, the Rams played their home games in the Los Angeles Memorial Coliseum from 1946 to 1979 and the Raiders played their home games at the Los Angeles Memorial Coliseum from 1982 to 1994. The San Diego Chargers announced on January 12, 2017 they would relocate to Los Angeles and become the Los Angeles Chargers beginning in the 2017 NFL season and played at Dignity Health Sports Park in Carson, California for three seasons prior to the completion of SoFi Stadium.
Los Angeles has twice hosted the Summer Olympic Games: in 1932 and in 1984, and will host the games for a third time in 2028. Los Angeles will be the third city after London (1908, 1948 and 2012) and Paris (1900, 1924 and 2024) to host the Olympic Games three times. When the tenth Olympic Games were hosted in 1932, the former 10th Street was renamed Olympic Blvd. Super Bowls I and VII were also held in the city, as well as multiple FIFA World Cup games at the Rose Bowl in 1994, including the final. Los Angeles also hosted the Deaflympics in 1985 and Special Olympics World Summer Games in 2015.
Los Angeles boasts a number of sports venues, including Dodger Stadium, the Los Angeles Memorial Coliseum, Banc of California Stadium, and the Staples Center. The Forum, SoFi Stadium, Dignity Health Sports Park, and the Rose Bowl are in adjacent cities. The Los Angeles Wildcats of the XFL are tenants of Dignity Health Sports Park.
Los Angeles is one of six North American cities to have won championships in all five of its major leagues (MLB, NFL, NHL, NBA and MLS), having completed the feat with the Kings' 2012 Stanley Cup title.
Los Angeles is a charter city as opposed to a general law city. The current charter was adopted on June 8, 1999, and has been amended many times. The elected government consists of the Los Angeles City Council and the mayor of Los Angeles, which operate under a mayor–council government, as well as the city attorney (not to be confused with the district attorney, a county office) and controller. The mayor is Eric Garcetti. There are 15 city council districts.
The city has many departments and appointed officers, including the Los Angeles Police Department (LAPD), the Los Angeles Board of Police Commissioners, the Los Angeles Fire Department (LAFD), the Housing Authority of the City of Los Angeles (HACLA), the Los Angeles Department of Transportation (LADOT), and the Los Angeles Public Library (LAPL).
The charter of the City of Los Angeles ratified by voters in 1999 created a system of advisory neighborhood councils that would represent the diversity of stakeholders, defined as those who live, work or own property in the neighborhood. The neighborhood councils are relatively autonomous and spontaneous in that they identify their own boundaries, establish their own bylaws, and elect their own officers. There are about 90 neighborhood councils.
Residents of Los Angeles elect supervisors for the 1st, 2nd, 3rd, and 4th supervisorial districts.
In the California State Assembly, Los Angeles is split between fourteen districts. In the California State Senate, the city is split between eight districts. In the United States House of Representatives, it is split among ten congressional districts.
In 1992, the city of Los Angeles recorded 1,092 murders. Los Angeles experienced a significant decline in crime in the 1990s and late 2000s, reaching a 50-year low in 2009 with 314 homicides. This is a rate of 7.85 per 100,000 population, a major decrease from 1980, when a homicide rate of 34.2 per 100,000 was reported. This included 15 officer-involved shootings. One shooting led to the death of a SWAT team member, Randal Simmons, the first in LAPD's history. In 2013, Los Angeles recorded 251 murders, a decrease of 16 percent from the previous year. Police speculate the drop resulted from a number of factors, including young people spending more time online.
In 2015, it was revealed that the LAPD had been under-reporting crime for eight years, making the crime rate in the city appear much lower than it really was.
The Dragna and Cohen crime families, part of the American Mafia, dominated organized crime in the city during the Prohibition era and reached their peak during the 1940s and 1950s with the battle of the Sunset Strip. Their influence has gradually declined since the rise of various black and Hispanic gangs in the late 1960s and early 1970s.
According to the Los Angeles Police Department, the city is home to 45,000 gang members, organized into 450 gangs. Among them are the Crips and Bloods, both African American street gangs that originated in the South Los Angeles region. Latino street gangs such as the Sureños, a Mexican American street gang, and Mara Salvatrucha, whose members are mainly of Salvadoran descent, also originated in Los Angeles. This has led to the city being referred to as the "Gang Capital of America".
There are three public universities within the city limits: California State University, Los Angeles (CSULA), California State University, Northridge (CSUN) and University of California, Los Angeles (UCLA).
The city is also home to a number of private colleges and universities.
The community college system consists of nine campuses governed by the trustees of the Los Angeles Community College District.
There are numerous additional colleges and universities outside the city limits in the Greater Los Angeles area, including the Claremont Colleges consortium, which includes the most selective liberal arts colleges in the U.S., and the California Institute of Technology (Caltech), one of the top STEM-focused research institutions in the world.
Los Angeles Unified School District serves almost all of the city of Los Angeles, as well as several surrounding communities, with a student population around 800,000. After Proposition 13 was approved in 1978, urban school districts had considerable trouble with funding. LAUSD has become known for its underfunded, overcrowded and poorly maintained campuses, although its 162 magnet schools help it compete with local private schools.
Several small sections of Los Angeles are in the Las Virgenes Unified School District. The Los Angeles County Office of Education operates the Los Angeles County High School for the Arts. The Los Angeles Public Library system operates 72 public libraries in the city. Enclaves of unincorporated areas are served by branches of the County of Los Angeles Public Library, many of which are within walking distance to residents.
The Los Angeles metro area is the second-largest broadcast designated market area in the U.S. (after New York) with 5,431,140 homes (4.956% of the U.S.), which is served by a wide variety of local AM and FM radio and television stations. Los Angeles and New York City are the only two media markets to have seven VHF allocations assigned to them.

As part of the region's aforementioned creative industry, the Big Four major broadcast television networks, ABC, CBS, FOX, and NBC, all have production facilities and offices throughout various areas of Los Angeles. All four major broadcast television networks, plus major Spanish-language networks Telemundo and Univision, also own and operate stations that both serve the Los Angeles market and serve as each network's West Coast flagship station: ABC's KABC-TV (Channel 7), CBS's KCBS-TV (Channel 2), Fox's KTTV-TV (Channel 11), NBC's KNBC-TV (Channel 4), MyNetworkTV's KCOP-TV (Channel 13), Telemundo's KVEA-TV (Channel 52), and Univision's KMEX-TV (Channel 34). The region also has three PBS stations, as well as KCET (Channel 28), the nation's largest independent public television station. KTBN (Channel 40) is the flagship station of the religious Trinity Broadcasting Network, based out of Santa Ana. A variety of independent television stations, such as KCAL-TV (Channel 9) and KTLA-TV (Channel 5), also operate in the area.
The major daily English-language newspaper in the area is the "Los Angeles Times". "La Opinión" is the city's major daily Spanish-language paper. "The Korea Times" is the city's major daily Korean language paper while "The World Journal" is the city and county's major Chinese newspaper. The "Los Angeles Sentinel" is the city's major African-American weekly paper, boasting the largest African-American readership in the Western United States. "Investor's Business Daily" is distributed from its LA corporate offices, which are headquartered in Playa del Rey.
There are also a number of smaller regional newspapers, alternative weeklies and magazines, including the "Los Angeles Register", Los Angeles Community News, (which focuses on coverage of the greater Los Angeles area), "Los Angeles Daily News" (which focuses coverage on the San Fernando Valley), "LA Weekly", "L.A. Record" (which focuses coverage on the music scene in the Greater Los Angeles Area), "Los Angeles Magazine", the "Los Angeles Business Journal", the "Los Angeles Daily Journal" (legal industry paper), "The Hollywood Reporter", "Variety" (both entertainment industry papers), and "Los Angeles Downtown News". In addition to the major papers, numerous local periodicals serve immigrant communities in their native languages, including Armenian, English, Korean, Persian, Russian, Chinese, Japanese, Hebrew, and Arabic. Many cities adjacent to Los Angeles also have their own daily newspapers whose coverage and availability overlaps into certain Los Angeles neighborhoods. Examples include "The Daily Breeze" (serving the South Bay), and "The Long Beach Press-Telegram".
Los Angeles arts, culture and nightlife news is also covered by a number of local and national online guides like "Time Out Los Angeles", "Thrillist", "Kristin's List", "DailyCandy", "Diversity News Magazine", "LAist", and "Flavorpill".
The city and the rest of the Los Angeles metropolitan area are served by an extensive network of freeways and highways. The Texas Transportation Institute, which publishes an annual Urban Mobility Report, ranked Los Angeles road traffic as the most congested in the United States in 2005 as measured by annual delay per traveler. The average traveler in Los Angeles experienced 72 hours of traffic delay per year according to the study. Los Angeles was followed by San Francisco/Oakland, Washington, D.C., and Atlanta (each with 60 hours of delay). Despite the congestion in the city, the mean travel time for commuters in Los Angeles is shorter than in other major cities, including New York City, Philadelphia, and Chicago. Los Angeles's mean travel time for work commutes in 2006 was 29.2 minutes, similar to those of San Francisco and Washington, D.C.
Major highways that connect LA to the rest of the nation include Interstate 5, which runs south through San Diego to Tijuana in Mexico and north through Sacramento, Portland, and Seattle to the Canada–US border; Interstate 10, the southernmost east–west, coast-to-coast Interstate Highway in the United States, going to Jacksonville, Florida; and U.S. Route 101, which heads to the California Central Coast, San Francisco, the Redwood Empire, and the Oregon and Washington coasts.
The LA County Metropolitan Transportation Authority (LA County Metro) and other agencies operate an extensive system of bus lines, as well as subway and light rail lines across Los Angeles County, with a combined monthly ridership (measured in individual boardings) of 38.8 million. The majority of this (30.5 million) is taken up by the city's bus system, the second busiest in the country. The subway and light rail combined average the remaining roughly 8.2 million boardings per month. LA County Metro recorded over 397 million boardings for the 2017 calendar year, including about 285 million bus riders and about 113 million riding on rail transit. For the first quarter of 2018, there were just under 95 million system-wide boardings, down from about 98 million in 2017, and about 105 million in 2016. In 2005, 10.2% of Los Angeles commuters rode some form of public transportation. According to the 2016 American Community Survey, 9.2% of working Los Angeles (city) residents made the journey to work via public transportation.
The city's subway system is the ninth busiest in the United States and its light rail system is the country's busiest. The rail system includes the B and D subway lines, as well as the A, C, E, and L light rail lines. In 2016, the E Line was extended to the Pacific Ocean at Santa Monica. The Metro G and J lines are bus rapid transit lines with stops and frequency similar to those of light rail. The system has a total of 93 light rail stations. The city is also central to the commuter rail system Metrolink, which links Los Angeles to all neighboring counties as well as many suburbs.
Besides the rail service provided by Metrolink and the Los Angeles County Metropolitan Transportation Authority, Los Angeles is served by inter-city passenger trains from Amtrak. The main rail station in the city is Union Station just north of Downtown.
In addition, the city directly contracts for local and commuter bus service through the Los Angeles Department of Transportation, or LADOT.
The main international and domestic airport serving Los Angeles is Los Angeles International Airport, commonly referred to by its airport code, LAX.
Other major nearby commercial airports include:
One of the world's busiest general-aviation airports, Van Nuys Airport, is also in Los Angeles.
The Port of Los Angeles is in San Pedro Bay in the San Pedro neighborhood, approximately south of Downtown. Also called Los Angeles Harbor and WORLDPORT LA, the port complex occupies of land and water along of waterfront. It adjoins the separate Port of Long Beach.
The sea ports of the Port of Los Angeles and Port of Long Beach together make up the Los Angeles/Long Beach Harbor. Together, the two ports form the fifth-busiest container port in the world, with a trade volume of over 14.2 million TEUs in 2008. On its own, the Port of Los Angeles is the busiest container port in the United States and the largest cruise ship center on the West Coast of the United States; its World Cruise Center served about 590,000 passengers in 2014.
There are also smaller, non-industrial harbors along Los Angeles's coastline. The port includes four bridges: the Vincent Thomas Bridge, Henry Ford Bridge, Gerald Desmond Bridge, and Commodore Schuyler F. Heim Bridge. Passenger ferry service from San Pedro to the city of Avalon on Santa Catalina Island is provided by Catalina Express.
As of January 2019, there are 36,300 homeless people in the City of Los Angeles, comprising roughly 62% of the homeless population of LA County. This is an increase of 16% over the previous year (12% in LA County as a whole). The epicenter of homelessness in Los Angeles is the Skid Row neighborhood, which contains 8,000 homeless people, one of the largest stable populations of homeless people in the United States. The increased homeless population in Los Angeles has been attributed largely to lack of housing affordability.
As home to Hollywood and its entertainment industry, numerous singers, actors, celebrities and other entertainers live in various districts of Los Angeles.
Los Angeles has 25 sister cities, listed chronologically by year joined:
In addition, Los Angeles has the following "friendship cities":
Lepus (constellation)
Lepus (, ) is a constellation lying just south of the celestial equator. Its name is Latin for hare. It is located below—immediately south—of Orion (the hunter), and is sometimes represented as a hare being chased by Orion or by Orion's hunting dogs.
Although the hare does not represent any particular figure in Greek mythology, Lepus was one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations.
Lepus is most often represented as a hare being hunted by Orion, whose hunting dogs (Canis Major and Canis Minor) pursue it. The constellation is also associated with the Moon rabbit.
Four stars of this constellation (α, β, γ, δ Lep) form a quadrilateral and are known as "‘Arsh al-Jawzā'", "the Throne of Jawzā'" or "Kursiyy al-Jawzā' al-Mu'akhkhar", "the Hindmost Chair of Jawzā'" and "al-Nihāl", "the Camels Quenching Their Thirst" in Arabic.
There are a fair number of bright stars, both single and double, in Lepus. Alpha Leporis, the brightest star of Lepus, is a white supergiant of magnitude 2.6, 1300 light-years from Earth. Its traditional name, Arneb ("أرنب" "’arnab"), means "hare" in Arabic. Beta Leporis, traditionally known as Nihal (Arabic for "quenching their thirst"), is a yellow giant of magnitude 2.8, 159 light-years from Earth. Gamma Leporis is a double star divisible in binoculars. The primary is a yellow star of magnitude 3.6, 29 light-years from Earth. The secondary is an orange star of magnitude 6.2. Delta Leporis is a yellow giant of magnitude 3.8, 112 light-years from Earth. Epsilon Leporis is an orange giant of magnitude 3.2, 227 light-years from Earth. Kappa Leporis is a double star divisible in medium aperture amateur telescopes, 560 light-years from Earth. The primary is a blue-white star of magnitude 4.4 and the secondary is a star of magnitude 7.4.
There are several variable stars in Lepus. R Leporis is a Mira variable star, also called "Hind's Crimson Star" for its striking red color and because it was named for John Russell Hind. It typically varies in magnitude from a minimum of 9.8 to a maximum of 7.3, with a period of 420 days, though at extremes it can be as dim as magnitude 12 and as bright as magnitude 5.5; the color intensifies as the star brightens. R Leporis is at a distance of 1500 light-years. T Leporis is also a Mira variable observed in detail by ESO's Very Large Telescope Interferometer. RX Leporis is a semi-regular red giant that has a period of 2 months. It has a minimum magnitude of 7.4 and a maximum magnitude of 5.0.
There is one Messier object in Lepus, M79. It is a globular cluster of magnitude 8.0, 42,000 light-years from Earth. One of the few globular clusters visible in the Northern Celestial Hemisphere's winter, it is a Shapley class V cluster, which means that it has an intermediate concentration towards its center. It is often described as having a "starfish" shape.
M79 was discovered in 1780 by Pierre Méchain.
Lupus (constellation)
Lupus is a constellation located in the deep Southern Sky. Its name is Latin for wolf. Lupus was one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations, although it was previously an asterism associated with the neighboring constellation Centaurus.
In ancient times, the constellation was considered an asterism within Centaurus, and was considered to have been an arbitrary animal, killed, or about to be killed, on behalf of, or for, Centaurus. An alternative visualization, attested by Eratosthenes, saw this constellation as a wineskin held by Centaurus. It was not separated from Centaurus until Hipparchus of Bithynia named it "Therion" (meaning beast) in the 2nd century BC.
The Greek constellation is probably based on the Babylonian figure known as the Mad Dog (UR.IDIM). This was a strange hybrid creature that combined the head and torso of a man with the legs and tail of a lion (the cuneiform sign 'UR' simply refers to a large carnivore; lions, wolves and dogs are all included). It is often found in association with the sun god and another mythical being called the Bison-man, which is supposedly related to the Greek constellation of Centaurus.
In Arab Folk Astronomy, Lupus, together with Centaurus were collectively called الشماريخ "al-Shamareekh", meaning the dense branches of the date palm's fruit.
Later, in Islamic Medieval astronomy, it was named السبع "al-Sab'", which is a term used for any predatory wild beast (same as the Greek "Therion"), as a separate constellation, but drawn together with Centaurus. In some manuscripts of Al-Sufi's Book of Fixed Stars and celestial globes, it was drawn as a lion; in others, it is drawn as a wolf, both conforming to the "Sab"' name.
In Europe, no particular animal was associated with it until the Latin translation of Ptolemy's work identified it with the wolf.
Lupus is bordered by six different constellations, although one of them (Hydra) merely touches at the corner. The other five are Scorpius (the scorpion), Norma (the right angle), Circinus (the compass), Libra (the balance scale), and Centaurus (the centaur). Covering 333.7 square degrees and 0.809% of the night sky, it ranks 46th of the 88 modern constellations. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Lup". The official constellation boundaries are defined by a twelve-sided polygon ("illustrated in infobox"). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and, while the declination coordinates are between −29.83° and −55.58°. The whole constellation is visible to observers south of latitude 34°N.
Overall, there are 127 stars within the constellation's borders brighter than or equal to apparent magnitude 6.5. In his book "Star Names and Their Meanings", R. H. Allen gave the names Yang Mun for Alpha Lupi, the brightest star in Lupus, and KeKwan for the blue giant Beta Lupi, both from Chinese. However, the first name is in error; both stars were part of a large Chinese constellation known in modern transliteration as Qíguān, the Imperial Guards.
Most of the brightest stars in Lupus are massive members of the nearest OB association, Scorpius–Centaurus.
Alpha Lupi is an ageing blue giant star of spectral type B1.5 III that is 460 ± 10 light-years distant from Earth. It is a Beta Cephei variable, pulsating in brightness by 0.03 of a magnitude every 7 hours and 6 minutes.
Towards the north of the constellation are globular clusters NGC 5824 and NGC 5986, and close by the dark nebula B 228. To the south are two open clusters, NGC 5822 and NGC 5749, as well as globular cluster NGC 5927 on the eastern border with Norma. On the western border are two spiral galaxies and the Wolf–Rayet planetary nebula IC 4406, containing some of the hottest stars in existence. IC 4406, also called the Retina Nebula, is a cylindrical nebula at a distance of 5,000 light-years. It has dust lanes throughout its center. Another planetary nebula, NGC 5882, is towards the center of the constellation. The transiting exoplanet Lupus-TR-3b lies in this constellation. The historic supernova SN 1006 is described by various sources as appearing on April 30 to May 1, 1006, in the constellation of Lupus.
ESO 274-1 is a spiral galaxy seen edge-on that requires an amateur telescope with at least 12 inches of aperture to view. It can be found by using Lambda Lupi and Mu Lupi as markers, and can only be seen under very dark skies. It is 9 arcminutes by 0.7 arcminutes with a small, elliptical nucleus.
Lyra
Lyra (; Latin for lyre, from Greek "λύρα") is a small constellation. It is one of 48 listed by the 2nd century astronomer Ptolemy, and is one of the 88 constellations recognized by the International Astronomical Union. Lyra was often represented on star maps as a vulture or an eagle carrying a lyre, and hence is sometimes referred to as Vultur Cadens or Aquila Cadens ("Falling Vulture" or "Falling Eagle"), respectively. Beginning at the north, Lyra is bordered by Draco, Hercules, Vulpecula, and Cygnus. Lyra is nearly overhead in temperate northern latitudes shortly after midnight at the start of summer. From the equator to about the 40th parallel south it is visible low in the northern sky during the same (thus winter) months.
Vega, Lyra's brightest star, is one of the brightest stars in the night sky, and forms a corner of the famed Summer Triangle asterism. Beta Lyrae is the prototype of a class of binary star known as Beta Lyrae variables. These binary stars are so close to each other that they become egg-shaped and material flows from one to the other. Epsilon Lyrae, known informally as the Double Double, is a complex multiple star system. Lyra also hosts the Ring Nebula, the second-discovered and best-known planetary nebula.
In Greek mythology, Lyra represents the lyre of Orpheus. Made by Hermes from a tortoise shell, given to Apollo as a bargain, it was said to be the first lyre ever produced. Orpheus's music was said to be so great that even inanimate objects such as trees, streams, and rocks could be charmed. Joining Jason and the Argonauts, his music was able to quell the voices of the dangerous Sirens, who sang tempting songs to the Argonauts.
At one point, Orpheus married Eurydice, a nymph. While fleeing from an attack by Aristaeus, she stepped on a snake that bit her, killing her. To reclaim her, Orpheus entered the Underworld, where the music from his lyre charmed Hades. Hades relented and let Orpheus bring Eurydice back, on the condition that he never once look back until outside. Unfortunately, near the very end, Orpheus faltered and looked back, causing Eurydice to be left in the Underworld forever. Orpheus spent the rest of his life strumming his lyre while wandering aimlessly through the land, rejecting all marriage offers from women.
There are two competing myths relating to the death of Orpheus. According to Eratosthenes, Orpheus failed to make a necessary sacrifice to Dionysus due to his regard for Apollo as the supreme deity instead. Dionysus then sent his followers to rip Orpheus apart. Ovid tells a rather different story, saying that women, in retribution for Orpheus's rejection of marriage offers, ganged up and threw stones and spears. At first, his music charmed them as well, but eventually their numbers and clamor overwhelmed his music and he was hit by the spears. Both myths then state that his lyre was placed in the sky by Zeus, and Orpheus' bones buried by the muses.
Vega and its surrounding stars are also treated as a constellation in other cultures. The area corresponding to Lyra was seen by the Arabs as a vulture or an eagle carrying a lyre, either enclosed in its wings, or in its beak. In Wales, Lyra is known as King Arthur's Harp ("Talyn Arthur"), and King David's harp. The Persian Hafiz called it the Lyre of Zurah.
It has been called the Manger of the Infant Saviour, Praesepe Salvatoris. In Australian Aboriginal astronomy, Lyra is known by the Boorong people in Victoria as the Malleefowl constellation. Lyra was known as Urcuchillay by the Incas and was worshipped as an animal deity.
Lyra is bordered by Vulpecula to the south, Hercules to the east, Draco to the north, and Cygnus to the west. Covering 286.5 square degrees, it ranks 52nd of the 88 modern constellations in size. It appears prominently in the northern sky during the Northern Hemisphere's summer, and the whole constellation is visible for at least part of the year to observers north of latitude 42°S. Its main asterism consists of six stars, and 73 stars in total are brighter than magnitude 6.5. The constellation's boundaries, as set by Eugène Delporte in 1930, are defined by a 17-sided polygon. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between and . The International Astronomical Union (IAU) adopted the three-letter abbreviation "Lyr" for the constellation in 1922.
German cartographer Johann Bayer used the Greek letters alpha through nu to label the most prominent stars in the constellation. English astronomer John Flamsteed observed and labelled two stars each as delta, epsilon, zeta and nu. He added pi and rho, not using xi and omicron as Bayer used these letters to denote Cygnus and Hercules on his map.
The brightest star in the constellation is Vega (Alpha Lyrae), a main-sequence star of spectral type A0Va. Only 7.7 parsecs distant, Vega is a Delta Scuti variable, varying between magnitudes −0.02 and 0.07 over 0.2 days. On average, it is the second-brightest star in the northern hemisphere (after Arcturus) and the fifth-brightest star overall, surpassed only by Arcturus, Alpha Centauri, Canopus, and Sirius. Vega was the pole star in the year 12,000 BCE, and will again become the pole star around 14,000 CE.
Vega is one of the most studied of all stars, and has been called "arguably the next most important star in the sky after the Sun". Vega was the first star other than the Sun to be photographed, as well as the first to have a clear spectrum recorded, showing absorption lines for the first time. The star was the first single main-sequence star other than the Sun known to emit X-rays, and is surrounded by a circumstellar debris disk, similar to the Kuiper Belt. Vega forms one corner of the famous Summer Triangle asterism; along with Altair and Deneb, these three stars form a prominent triangle during the northern hemisphere summer.
Vega also forms one vertex of a much smaller triangle, along with Epsilon and Zeta Lyrae. Zeta forms a wide binary star visible in binoculars, consisting of an Am star and an F-type subgiant. The Am star has an additional close companion, bringing the total number of stars in the system to three. Epsilon is a more famous wide binary that can even be separated by the naked eye under good conditions. Both components are themselves close binaries which can be seen with telescopes to consist of A- and F-type stars, and a faint star was recently found to orbit component C as well, for a total of five stars.
In contrast to Zeta and Epsilon Lyrae, Delta Lyrae is an optical double, with the two stars simply lying along the same line of sight east of Zeta. The brighter and closer of the two, Delta2 Lyrae, is a 4th-magnitude red bright giant that varies semiregularly by around 0.2 magnitudes with a dominant period of 79 days, while the fainter Delta1 Lyrae is a spectroscopic binary consisting of a B-type primary and an unknown secondary. Both systems, however, have very similar radial velocities, and are the two brightest members of a sparse open cluster known as the Delta Lyrae cluster. South of Delta is Gamma Lyrae, a blue giant and the second-brightest star in the constellation. Around 190 parsecs distant, it has been referred to as a "superficially normal" star.
The final star forming the lyre's figure is Beta Lyrae, also a binary composed of a blue bright giant and an early B-type star. In this case, the stars are so close together that the larger giant is overflowing its Roche lobe and transferring material to the secondary, forming a semidetached system. The secondary, originally the less massive of the two, has accreted so much mass that it is now substantially more massive, albeit smaller, than the primary, and is surrounded by a thick accretion disk. The plane of the orbit is aligned with Earth and the system thus shows eclipses, dropping nearly a full magnitude from its 3rd-magnitude baseline every 13 days, although its period is increasing by around 19 seconds per year. It is the prototype of the Beta Lyrae variables, eclipsing semidetached binaries of early spectral types in which there are no exact onsets of eclipses, but rather continuous changes in brightness.
Another easy-to-spot variable is the bright R Lyrae, north of the main asterism. Also known as 13 Lyrae, it is a 4th-magnitude red giant semiregular variable that varies by several tenths of a magnitude. Its periodicity is complex, with several different periods of varying lengths, most notably one of 46 days and one of 64 days. Even further north is FL Lyrae, a much fainter 9th-magnitude Algol variable that drops by half a magnitude every 2.18 days during the primary eclipse. Both components are main-sequence stars, the primary being late F-type and the secondary late G-type. The system was one of the first main-sequence eclipsing binaries containing a G-type star to have its properties known as well as the better-studied early-type eclipsing binaries.
At the very northernmost edge of the constellation is the even fainter V361 Lyrae, an eclipsing binary that does not easily fall into one of the traditional classes, with features of Beta Lyrae, W Ursae Majoris, and cataclysmic variables. It may be a representative of a very brief phase in which the system is transitioning into a contact binary. It can be found less than a degree away from the naked-eye star 16 Lyrae, a 5th-magnitude A-type subgiant located around 37 parsecs distant.
The brightest star not included in the asterism and the westernmost cataloged by Bayer or Flamsteed is Kappa Lyrae, a typical red giant around 73 parsecs distant. Similar bright orange or red giants include the 4th-magnitude Theta Lyrae, Lambda Lyrae, and HD 173780. Lambda is located just south of Gamma, Theta is positioned in the east, and HD 173780, the brightest star in the constellation with no Bayer or Flamsteed designation, lies more southerly. Just north of Theta and of almost exactly the same magnitude is Eta Lyrae, a blue subgiant with a near-solar metal abundance. Also nearby is the faint HP Lyrae, a post-asymptotic giant branch (AGB) star that shows variability. The reason for its variability is still a mystery: first cataloged as an eclipsing binary, it was theorized to be an RV Tauri variable in 2002, but if so, it would be by far the hottest such variable discovered.
In the extreme east is RR Lyrae, the prototype of the large class of variables known as RR Lyrae variables, which are pulsating variables similar to Cepheids, but are evolved population II stars of spectral types A and F. Such stars are usually not found in a galaxy's thin disk, but rather in the galactic halo. Such stars serve as standard candles, and thus are a reliable way to calculate distances to the globular clusters in which they reside. RR Lyrae itself varies between magnitudes 7 and 8 while exhibiting the Blazhko effect. The easternmost star designated by Flamsteed, 19 Lyrae, is also a small-amplitude variable, an Alpha2 Canum Venaticorum variable with a period of just over one day.
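The standard-candle reasoning behind RR Lyrae stars can be made concrete with the distance modulus, m − M = 5 log₁₀(d) − 5 (with d in parsecs): because RR Lyrae variables share a roughly constant mean absolute magnitude of about +0.75, comparing that with the observed apparent magnitude gives the distance directly. A minimal sketch in Python (the apparent magnitude of 15.75 here is an illustrative assumption, not a measurement of any particular cluster):

```python
def distance_parsecs(apparent_mag, absolute_mag):
    """Solve the distance modulus m - M = 5*log10(d) - 5 for d (in parsecs)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Assumed example: an RR Lyrae star observed at mean apparent magnitude 15.75,
# using the class's roughly constant absolute magnitude of about +0.75.
d = distance_parsecs(15.75, 0.75)
print(round(d))  # prints 10000, i.e. a globular cluster about 10 kiloparsecs away
```

This is why a single identified RR Lyrae star in a globular cluster fixes the cluster's distance to within the scatter of the absolute-magnitude calibration.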
Another evolved star is the naked-eye variable XY Lyrae, a red bright giant just north of Vega that varies between 6th and 7th magnitudes over a period of 120 days. Also just visible to the naked eye is the peculiar classical Cepheid V473 Lyrae. It is unique in that it is the only known Cepheid in the Milky Way to undergo periodic phase and amplitude changes, analogous to the Blazhko effect in RR Lyrae stars. At 1.5 days, its period was the shortest known for a classical Cepheid at the time of its discovery. W and S Lyrae are two of the many Mira variables in Lyra. W varies between 7th and 12th magnitudes over approximately 200 days, while S, slightly fainter, is a silicate carbon star. Another evolved star is EP Lyrae, a faint RV Tauri variable and an "extreme example" of a post-AGB star. It and a likely companion are surrounded by a circumstellar disk of material.
Rather close to Earth is Gliese 758. The sunlike primary star has a brown dwarf companion, the coldest to have been imaged around a sunlike star in thermal light when it was discovered in 2009. Only slightly farther away is V478 Lyrae, an eclipsing RS Canum Venaticorum variable whose primary star shows active starspot activity.
One of the most peculiar systems in Lyra is MV Lyrae, a nova-like star consisting of a red dwarf and a white dwarf. Originally classified as a VY Sculptoris star due to spending most time at maximum brightness, since around 1979 the system has been dominantly at minimum brightness, with periodic outbursts. Its nature is still not fully understood. Another outbursting star is AY Lyrae, an SU Ursae Majoris-type dwarf nova that has undergone several superoutbursts. Of the same type is V344 Lyrae, notable for an extremely short period between superoutbursts coupled with one of the highest amplitudes for such a period. The true nova HR Lyrae flared in 1919 to a maximum magnitude of 6.5, over 9.5 magnitudes higher than in quiescence. Some of its characteristics are similar to those of recurring novae.
M57, also known as the "Ring Nebula" and NGC 6720, is approximately one light-year in diameter and lies at a distance of 2,000 light-years from Earth. It is one of the best-known planetary nebulae and the second to be discovered; its integrated magnitude is 8.8. It was discovered in 1779 by Antoine Darquier, 15 years after Charles Messier discovered the Dumbbell Nebula. Astronomers have determined that it is between 6,000 and 8,000 years old. The outer part of the nebula appears red in photographs because of emission from ionized hydrogen. The middle region is colored green; doubly ionized oxygen emits greenish-blue light. The hottest region, closest to the central star, appears blue because of emission from helium. The central star itself is a white dwarf with a temperature of 120,000 kelvins. In telescopes, the nebula appears as a visible ring with a green tinge; it is slightly elliptical because its three-dimensional shape is a torus or cylinder seen from a slight angle. It can be found halfway between Gamma Lyrae and Beta Lyrae.
Another planetary nebula in Lyra is Abell 46. The central star, V477 Lyrae, is an eclipsing post-common-envelope binary, consisting of a white dwarf primary and an oversized secondary component due to recent accretion. The nebula itself is of relatively low surface brightness compared to the central star, and is undersized for the primary's mass for reasons not yet fully understood.
NGC 6791 is a cluster of stars in Lyra. It contains three age groups of stars: 4 billion year-old white dwarfs, 6 billion year-old white dwarfs and 8 billion year-old normal stars.
NGC 6745 is an irregular spiral galaxy in Lyra that is at a distance of 208 million light-years. Several million years ago, it collided with a smaller galaxy, which created a region filled with young, hot, blue stars. Astronomers do not know if the collision was simply a glancing blow or a prelude to a full-on merger, which would end with the two galaxies incorporated into one larger, probably elliptical galaxy.
A remarkable long-duration gamma-ray burst was GRB 050525A, which flared in 2005. The afterglow re-brightened at 33 minutes after the original burst, only the third found to exhibit such an effect in the timeframe, and unable to be completely explained by known phenomena. The light curve observed over the next 100 days was consistent with that of a supernova or even a hypernova, dubbed SN 2005nc. The host galaxy proved elusive to find at first, although it was subsequently identified.
In orbit around the orange subgiant star HD 177830 is one of the earliest exoplanets to be detected. A jovian-mass planet, it orbits in an eccentric orbit with a period of 390 days. A second planet closer to the star was discovered in 2011. Visible to the naked eye are HD 173416, a yellow giant hosting a planet over twice the mass of Jupiter discovered in 2009; and HD 176051, a low-mass binary star containing another high-mass planet. Just short of naked-eye visibility is HD 178911, a triple system consisting of a close binary and a visually separable sunlike star. The sunlike star has a planet with over 6 Jupiter masses discovered in 2001, the second found in a triple system after that of 16 Cygni.
One of the most-studied exoplanets in the night sky is TrES-1b, in orbit around the star GSC 02652-01324. Detected from a transit of its parent star, the planet has around 3/4 the mass of Jupiter, yet orbits its parent star in only three days. The transits have been reported to have anomalies multiple times. Originally thought to be possibly due to the presence of an Earth-like planet, it is now accepted that the irregularities are due to a large starspot. Also discovered by the transit method is WASP-3b, with 1.75 times the mass of Jupiter. At the time of its discovery, it was one of the hottest known exoplanets, in orbit around the F-type main-sequence star WASP-3. Similar to TrES-1b, irregularities in the transits had left open the possibility of a second planet, although this now appears unlikely as well.
Lyra is one of three constellations (along with neighboring Cygnus and Draco) to be in the Kepler Mission's field of view, and as such it contains many more known exoplanets than most constellations. One of the first discovered by the mission is Kepler-7b, an extremely low-density exoplanet with less than half the mass of Jupiter, yet nearly 1.5 times the radius. Almost as sparse is Kepler-8b, only slightly more massive and of a similar radius. The Kepler-20 system contains five known planets; three of them are only slightly smaller than Neptune, while the other two are some of the first Earth-sized exoplanets to be discovered. Kepler-37 is another star with an exoplanet discovered by Kepler; it was the smallest extrasolar planet known as of February 2013.
In April 2013, it was announced that of the five planets orbiting Kepler-62, at least two—Kepler-62e and Kepler-62f—are within the boundaries of the habitable zone of that star, where scientists think liquid water could exist, and are both candidates for being a solid, rocky, earth-like planet. The exoplanets are 1.6 and 1.4 times the diameter of Earth respectively, with their star Kepler-62 at a distance of 1,200 light-years.
Legnica
Legnica is a city in southwestern Poland, in the central part of Lower Silesia, on the Kaczawa River (a left tributary of the Oder) and the Czarna Woda. Between 1 June 1975 and 31 December 1998 Legnica was the capital of the Legnica Voivodeship. It is currently the seat of the county, and since 1992 the city has been the seat of a diocese. As of 2019, Legnica had a population of 99,350 inhabitants.
The city was first referenced in chronicles dating from the year 1004, although previous settlements could be traced back to the 7th century. The name "Legnica" was mentioned in 1149 under High Duke of Poland Bolesław IV the Curly. Legnica was most likely the seat of Bolesław and it became the residence of the High Dukes that ruled the Duchy of Legnica from 1248 until 1675. Legnica is a city over which the Piast dynasty reigned the longest, for about 700 years, from the time of ruler Mieszko I of Poland after the creation of the Polish state in the 10th century, until 1675 and the death of the last Piast duke George William. Legnica is one of the historical burial sites of Polish monarchs and consorts.
Legnica became renowned for the fierce battle that took place at Legnickie Pole near the city on 9 April 1241 during the first Mongol invasion of Poland. The Christian coalition under the command of the Polish Duke Henry II the Pious, supported by nobles, knights, and mercenaries, was decisively defeated by the Mongols. This, however, was a turning point in the war as the Mongols, having killed Henry II, halted their advance into Europe and retreated to Hungary through Moravia.
During the High Middle Ages, Legnica was one of the most important cities of Central Europe, with a population of nearly 16,000 inhabitants. The city began to develop rapidly after the discovery of gold in the Kaczawa River between Legnica and the town of Złotoryja. In 1675 it was incorporated into the Habsburg-ruled Kingdom of Bohemia. In 1742 the city was annexed by the Kingdom of Prussia after King Frederick the Great's victory over Austria in the War of the Austrian Succession. Subsequently, it was part of the German Empire from 1871, and later of the Weimar Republic and Nazi Germany until the end of World War II, when the majority of Lower Silesia east of the Neisse (Nysa) was transferred to Poland under the border changes promulgated at the Potsdam Conference in 1945, when Poland was granted the Recovered Territories.
Legnica is an economic, cultural and academic centre in Lower Silesia, together with Wrocław. The city is renowned for its varied architecture, spanning from the early medieval period to modern times, and for its preserved Old Town with the Piast Castle, one of the largest in Poland. According to the 2016 Foreign Direct Investment (FDI) ranking, Legnica is one of the most progressive high-income cities in the Silesian region.
Legnica has 102,708 inhabitants and is the third largest city in the voivodeship (after Wrocław and Wałbrzych) and 38th in Poland. It also constitutes the southernmost and largest urban center of a copper mining district ("Legnicko-Głogowski Okręg Miedziowy") with an agglomeration of 448,617 inhabitants. Legnica is the largest city of the conurbation and is a member of the Association of Polish Cities.
Archaeological research conducted in eastern Legnica in the late 1970s showed the existence of a bronze foundry and the graves of three metallurgists. The finds date to around 1000 BC.
A settlement of the Lusatian culture people existed here in the 8th century BC. After Celtic invasions spread beyond the upper Danube basin, the area of Legnica and the northern foothills of the Sudetes was infiltrated by Celtic settlers and traders.
Tacitus and Ptolemy recorded the ancient nation of Lugii (Lygii) in the area, and mentioned their town of Lugidunum, which has been attributed to both Legnica and Głogów.
Slavic Lechitic tribes moved into the area in the 8th century.
The city was first officially mentioned in chronicles from 1004, although settlement dates to the 7th century. Dendrochronological research proves that during the reign of Mieszko I of Poland, a new fortified settlement was built here in a style typical of the early Piast dynasty. It is mentioned in 1149 when the High Duke of Poland Bolesław IV the Curly funded a chapel at the St. Benedict monastery. Legnica was the most likely place of residence for Bolesław and it became the residence of the High Dukes of Poland in 1163 and was the seat of a principality ruled from 1248 until 1675.
Legnica became famous for the battle that took place at Legnickie Pole near the city on 9 April 1241 during the First Mongol invasion of Poland. The Christian army of the Polish duke Henry II the Pious of Silesia, supported by feudal nobility, which included in addition to Poles, Bavarian miners and military orders and Czech troops, was decisively defeated by the Mongols. The Mongols killed Henry and destroyed his forces, then turned south to rejoin the rest of the Mongol armies, which were massing at the Plain of Mohi in Hungary via Moravia against a coalition of King Bela IV and his armies, and Bela's Kipchak allies.
After the war, nonetheless, the city developed rapidly. In 1258 at the church of St. Peter, a parish school was established, probably the first of its kind in Poland. Around 1278 a Dominican monastery was founded by Bolesław II the Horned, who was buried there as the only monarch of Poland to be buried in Legnica. Already by 1300 there was a city council in Legnica. Duke Bolesław III the Generous granted new trade privileges in 1314 and 1318 and allowed the construction of a town hall, and in 1337 the first waterworks were built. In the years 1327–1380 a new Gothic church of Saint Peter (today's Cathedral) was erected in place of the old one, and has been one of Legnica's landmarks ever since. Also by the 14th century the city walls were erected. In 1345 the first coins were produced in the local mint. In 1374, the potters' guild was founded, one of the oldest in Silesia. Queen consort of Poland Hedwig of Sagan died in Legnica in 1390 and was buried in the local collegiate church, which has not survived to this day.
As the capital of the Duchy of Legnica at the beginning of the 14th century, Legnica was one of the most important cities of Central Europe, having a population of nearly 16,000 residents. The city began to expand quickly after the discovery of gold in the Kaczawa River between Legnica and Złotoryja (Goldberg).
Such a growth rate could not be maintained for long, however. Shortly after the city reached its peak population, the wooden buildings erected during this period of rapid growth were devastated by a huge fire. The fire reduced the number of inhabitants and halted significant further development for many decades.
Legnica, along with other Silesian duchies, became a vassal of the Kingdom of Bohemia during the 14th century and was included within the multi-ethnic Holy Roman Empire, though it remained ruled by local dukes of the Polish Piast dynasty. In 1454, a local rebellion prevented Legnica from falling under the direct rule of the Bohemian kings. In 1505, Duke Frederick II of Legnica met in Legnica with the Duke of nearby Głogów, Sigismund I the Old, the future king of Poland.
The Protestant Reformation was introduced in the duchy as early as 1522 and the population became Lutheran. In 1526, a Protestant university was established in Legnica, which, however, was closed in 1529. In 1528 the first printing house in Legnica was established. After the death of King Louis II of Hungary and Bohemia at Mohács in 1526, Legnica became a fief of the Habsburg Monarchy of Austria. The first map of Silesia was made by native son Martin Helwig. The city suffered during the Thirty Years' War. In 1633 a plague epidemic broke out, and in 1634 the Austrian army destroyed the suburbs.
In 1668 the Duke of Legnica, Christian, presented his candidacy to the Polish throne; however, in the 1669 Polish–Lithuanian royal election he was not chosen as king. In 1676, Legnica passed to direct Habsburg rule after the death of the last Silesian Piast duke, and the last Piast duke overall, George William (son of Duke Christian), despite an earlier inheritance pact between Brandenburg and Silesia, by which it was to go to Brandenburg. The last Piast duke was buried in St. John's church in Legnica in 1676.
Silesian aristocracy was trained at the Liegnitz Ritter-Akademie, established in the early 18th century. One of two main routes connecting Warsaw and Dresden ran through the city in the 18th century and Kings Augustus II the Strong and Augustus III of Poland traveled that route many times. The postal milestone of King Augustus II comes from that period.
In 1742 most of Silesia, including Liegnitz, became part of the Kingdom of Prussia after King Frederick the Great's defeat of Austria in the War of the Austrian Succession. In 1760 during the Seven Years' War, Liegnitz was the site of the Battle of Liegnitz when Frederick's army defeated an Austrian army led by Laudon.
During the Napoleonic Wars and Polish national liberation fights, in 1807 Polish uhlans were stationed in the city, and in 1813, the Prussians, under Field Marshal Blücher, defeated the French forces of MacDonald in the Battle of Katzbach (Kaczawa) nearby. After the administrative reorganization of the Prussian state following the Congress of Vienna, Liegnitz and the surrounding territory ("Landkreis Liegnitz") were incorporated into the Regierungsbezirk (administrative district) of Liegnitz, within the Province of Silesia, on 1 May 1816. Along with the rest of Prussia, the town became part of the German Empire in 1871 during the unification of Germany. On 1 January 1874 Liegnitz became the third city in Lower Silesia (after Breslau and Görlitz) to be raised to an urban district, although the district administrator of the surrounding "Landkreis" of Liegnitz continued to have his seat in the city. Its military garrison was home to Königsgrenadier-Regiment Nr. 7, a military unit formed almost exclusively of Polish soldiers.
The census of 1910 gave Liegnitz's population as 95.86% German, 0.15% German and Polish, 1.27% Polish, 2.26% Wendish, and 0.19% Czech. On 1 April 1937 parts of the "Landkreis" of Liegnitz, namely the communities of Alt Beckern, Groß Beckern, Hummel, Liegnitzer Vorwerke, Pfaffendorf and Prinkendorf, were incorporated into the city of Liegnitz. After the Treaty of Versailles following World War I, Liegnitz was part of the newly created Province of Lower Silesia from 1919 to 1938, then of the Province of Silesia from 1938 to 1941, and again of the Province of Lower Silesia from 1941 to 1945. After the Nazis came to power in Germany, as early as 1933, a boycott of local Jewish premises was ordered, and in 1938 the synagogue was burned down. During World War II, the Germans established two forced labour camps in the city, as well as two prisoner of war labor subcamps of the prisoner of war camp located in Żagań (then "Sagan").
After the defeat of Nazi Germany in World War II, Liegnitz and all of Silesia east of the Neisse was transferred to Poland following the Potsdam Conference in 1945. The German population was expelled from the city between 1945 and 1947 and it was repopulated with Poles, many of whom had themselves been expelled from pre-war eastern Poland after its annexation by the Soviet Union. Greek refugees of the Greek Civil War also settled in Legnica in 1950. As the medieval Polish name "Lignica" was considered archaic, the town was renamed Legnica. The transfer to Poland decided at Potsdam in 1945 was officially recognized by East Germany in 1950, by West Germany under Chancellor Willy Brandt in the Treaty of Warsaw signed in 1970, and finally by the reunited Germany in the Two Plus Four Agreement in 1990. By 1990 only a handful of Polonized Germans, prewar citizens of Liegnitz, remained of the pre-1945 German population. In 2010 the city celebrated the 65th anniversary of the "return of Legnica to Poland" and its liberation from the Nazis.
The city was only partly damaged in World War II. In June 1945 Legnica was briefly the capital of the Lower Silesian (Wrocław) Voivodship, after the administration was moved there from Trzebnica and before it was finally moved to Wrocław. In 1947, the Municipal Library was opened, in 1948 a piano factory was founded, and in the years 1951–1959 Poland's first copper smelter was built in Legnica. After 1965 most parts of the preserved old town with its town houses were demolished, the historical layout was abolished, and the city was rebuilt in modern form.
From 1945 to 1990, during the Cold War, the headquarters of the Soviet forces in Poland, the so-called Northern Group of Forces, was located in the city. This fact had a strong influence on the life of the city. For much of the period, the city was divided into Polish and Soviet areas, with the latter closed to the public. These were first established in July 1945, when the Soviets forcibly ejected newly arrived Polish inhabitants from the parts of the city they wanted for their own use. The ejection was perceived by some as a particularly brutal action, and rumours circulated exaggerating its severity, though no evidence of anyone being killed in the course of it has come to light. In April 1946 city officials estimated that there were 16,700 Poles, 12,800 Germans, and 60,000 Soviets in Legnica. In October 1956, the largest anti-Soviet demonstrations in Lower Silesia took place in Legnica. The last Soviet units left the city in 1993.
In 1992 the Roman Catholic Diocese of Legnica was established, and Tadeusz Rybak became its first bishop. New local newspapers and a radio station were founded in the 1990s. In 1997, Legnica was visited by Pope John Paul II. The city suffered in the 1997 Central European flood.
Legnica is a city with rich historical architecture, ranging from Romanesque and Gothic through the Renaissance and Baroque to Historicist styles. Among the landmarks of Legnica are:
There is also a monument of Pope John Paul II and a postal milestone of King Augustus II the Strong from 1725 in Legnica.
In the 1950s and 1960s the local copper and nickel industries became a major factor in the economic development of the area. Legnica houses industrial plants belonging to KGHM Polska Miedź, one of the largest producers of copper and silver in the world. The company owns a large copper mill on the western outskirts of town. There is a Special Economic Zone in Legnica, where Lenovo was going to open a factory in summer 2008.
Legnica is a regional academic center with seven universities enrolling approximately 16,000 students.
Legnica is noted for its parks and gardens, and has seven hundred hectares of green space, mostly along the banks of the Kaczawa; the Tarninow district is particularly attractive.
To the south of Legnica runs the A4 motorway. Legnica also has a bypass road, which forms part of national road No. 3. Construction of the S3 expressway is planned nearby.
In the city there are 20 regular bus lines, 1 belt-line, 2 night lines and 3 suburban lines.
The town has an airport (airport code EPLE) with a 1,600-metre runway, the remains of a former Soviet air base, but it is in a poor state and not used for commercial flights.
Until the winter of 2003, the longest train service in Poland ran from Katowice to Legnica (via Kędzierzyn-Koźle, Nysa, and Jaworzyna Śląska).
In recent years Legnica has frequently been used as a film set, owing to its well-preserved German-built old town, proximity to Germany and low costs. Films shot in the city include:
"Przebacz" (dir. M. Stacharski) – 2005
"Anonyma – Eine Frau in Berlin" (dir. M. Färberböck) – 2007
"Wilki" (dir. F. Fromm) – 2007
"Little Moscow" (dir. W. Krzystek) – 2008
"Moje życie" (dir. D. Zahavi) – 2008[2]
"Die Wölfe" (dir. F.Fromm) – 2009
Legnica tends to be a left-of-center town with a considerable influence of workers' unions. The Municipal Council of Legnica ("Rada miejska miasta Legnica") is the legislative branch of the local government and is composed of 25 members elected in local elections every five years. The mayor or town president ("Prezydent miasta") is the executive branch of the local government and is directly elected in the same municipal elections.
Members of Parliament (Sejm) elected from Legnica-Jelenia Gora constituency:
Legnica is twinned with:
Legnica and its then ruler Count Conrad figure prominently in the alternate history series "The Crosstime Engineer", set in the period of 1230 to 1270, by Leo Frankowski.
Liverpool F.C.
Liverpool Football Club is a professional football club in Liverpool, England, that competes in the Premier League, the top tier of English football. The club has won six European Cups, three UEFA Cups, four UEFA Super Cups and one FIFA Club World Cup, and holds the English record for wins in each of these competitions. Domestically, the club has won nineteen League titles, seven FA Cups, a record eight League Cups and fifteen FA Community Shields.
Founded in 1892, the club joined the Football League the following year and has played at Anfield since its formation. Liverpool established itself as a major force in English and European football in the 1970s and 1980s, when Bill Shankly, Bob Paisley, Joe Fagan and Kenny Dalglish led the club to a combined eleven League titles and four European Cups. Under the management of Rafael Benítez and captained by homegrown player Steven Gerrard, Liverpool became European champions for the fifth time in 2005. Under Jürgen Klopp, Liverpool won a sixth European Cup in 2019 and in 2020 a nineteenth League title, the club's first in thirty years.
Liverpool was the seventh highest-earning football club in the world in 2019, with an annual revenue of €604 million, and the world's eighth most valuable football club in 2019, valued at $2.183 billion. The club is one of the most widely supported teams in the world. Liverpool has long-standing rivalries with Manchester United and Everton.
The club's supporters have been involved in two major tragedies: the Heysel Stadium disaster, where escaping fans were pressed against a collapsing wall at the 1985 European Cup Final in Brussels, with 39 people – mostly Italians and Juventus fans – dying, after which English clubs were given a five-year ban from European competition, and the Hillsborough disaster in 1989, where 96 Liverpool supporters died in a crush against perimeter fencing.
The team changed from red shirts and white shorts to an all-red home strip in 1964 which has been used ever since. The club's anthem is "You'll Never Walk Alone".
Liverpool F.C. was founded following a dispute between the Everton committee and John Houlding, club president and owner of the land at Anfield. After eight years at the stadium, Everton relocated to Goodison Park in 1892 and Houlding founded Liverpool F.C. to play at Anfield. Originally named "Everton F.C. and Athletic Grounds Ltd" (Everton Athletic for short), the club became Liverpool F.C. in March 1892 and gained official recognition three months later, after The Football Association refused to recognise the club as Everton. The team won the Lancashire League in its debut season, and joined the Football League Second Division at the start of the 1893–94 season. After finishing in first place the club was promoted to the First Division, which it won in 1901 and again in 1906.
Liverpool reached its first FA Cup Final in 1914, losing 1–0 to Burnley. It won consecutive League championships in 1922 and 1923, but did not win another trophy until the 1946–47 season, when the club won the First Division for a fifth time under the control of ex-West Ham Utd centre half George Kay. Liverpool suffered its second Cup Final defeat in 1950, playing against Arsenal. The club was relegated to the Second Division in the 1953–54 season. Soon after Liverpool lost 2–1 to non-league Worcester City in the 1958–59 FA Cup, Bill Shankly was appointed manager. Upon his arrival he released 24 players and converted a boot storage room at Anfield into a room where the coaches could discuss strategy; here, Shankly and other "Boot Room" members Joe Fagan, Reuben Bennett, and Bob Paisley began reshaping the team.
The club was promoted back into the First Division in 1962 and won it in 1964, for the first time in 17 years. In 1965, the club won its first FA Cup. In 1966, the club won the First Division but lost to Borussia Dortmund in the European Cup Winners' Cup final. Liverpool won both the League and the UEFA Cup during the 1972–73 season, and the FA Cup again a year later. Shankly retired soon afterwards and was replaced by his assistant, Bob Paisley. In 1976, Paisley's second season as manager, the club won another League and UEFA Cup double. The following season, the club retained the League title and won the European Cup for the first time, but it lost in the 1977 FA Cup Final. Liverpool retained the European Cup in 1978 and regained the First Division title in 1979. During Paisley's nine seasons as manager Liverpool won 20 trophies, including three European Cups, a UEFA Cup, six League titles and three consecutive League Cups; the only domestic trophy he did not win was the FA Cup.
Paisley retired in 1983 and was replaced by his assistant, Joe Fagan. Liverpool won the League, League Cup and European Cup in Fagan's first season, becoming the first English side to win three trophies in a season. Liverpool reached the European Cup final again in 1985, against Juventus at the Heysel Stadium. Before kick-off, Liverpool fans breached a fence which separated the two groups of supporters, and charged the Juventus fans. The resulting weight of people caused a retaining wall to collapse, killing 39 fans, mostly Italians. The incident became known as the Heysel Stadium disaster. The match was played in spite of protests by both managers, and Liverpool lost 1–0 to Juventus. As a result of the tragedy, English clubs were banned from participating in European competition for five years; Liverpool received a ten-year ban, which was later reduced to six years. Fourteen Liverpool fans received convictions for involuntary manslaughter.
Fagan had announced his retirement just before the disaster and Kenny Dalglish was appointed as player-manager. During his tenure, the club won another three league titles and two FA Cups, including a League and Cup "Double" in the 1985–86 season. Liverpool's success was overshadowed by the Hillsborough disaster: in an FA Cup semi-final against Nottingham Forest on 15 April 1989, hundreds of Liverpool fans were crushed against perimeter fencing. Ninety-four fans died that day; the 95th victim died in hospital from his injuries four days later and the 96th died nearly four years later, without regaining consciousness. After the Hillsborough disaster there was a government review of stadium safety. The resulting Taylor Report paved the way for legislation that required top-division teams to have all-seater stadiums. The report ruled that the main reason for the disaster was overcrowding due to a failure of police control.
Liverpool was involved in the closest finish to a league season during the 1988–89 season. Liverpool finished equal with Arsenal on both points and goal difference, but lost the title on total goals scored when Arsenal scored the final goal in the last minute of the season.
Dalglish cited the Hillsborough disaster and its repercussions as the reason for his resignation in 1991; he was replaced by former player Graeme Souness. Under his leadership Liverpool won the 1992 FA Cup Final, but their league performances slumped, with two consecutive sixth-place finishes, eventually resulting in his dismissal in January 1994. Souness was replaced by Roy Evans, and Liverpool went on to win the 1995 Football League Cup Final. While they made some title challenges under Evans, third-place finishes in 1996 and 1998 were the best they could manage, and so Gérard Houllier was appointed co-manager in the 1998–99 season, and became the sole manager in November 1998 after Evans resigned. In 2001, Houllier's second full season in charge, Liverpool won a "Treble": the FA Cup, League Cup and UEFA Cup. Houllier underwent major heart surgery during the 2001–02 season and Liverpool finished second in the League, behind Arsenal. They won a further League Cup in 2003, but failed to mount a title challenge in the two seasons that followed.
Houllier was replaced by Rafael Benítez at the end of the 2003–04 season. Despite finishing fifth in Benítez's first season, Liverpool won the 2004–05 UEFA Champions League, beating A.C. Milan 3–2 in a penalty shootout after the match ended with a score of 3–3. The following season, Liverpool finished third in the Premier League and won the 2006 FA Cup Final, beating West Ham United in a penalty shootout after the match finished 3–3. American businessmen George Gillett and Tom Hicks became the owners of the club during the 2006–07 season, in a deal which valued the club and its outstanding debts at £218.9 million. The club reached the 2007 UEFA Champions League Final against Milan, as it had in 2005, but lost 2–1. During the 2008–09 season Liverpool achieved 86 points, its highest Premier League points total, and finished as runners up to Manchester United.
In the 2009–10 season, Liverpool finished seventh in the Premier League and failed to qualify for the Champions League. Benítez subsequently left by mutual consent and was replaced by Fulham manager Roy Hodgson. At the start of the 2010–11 season Liverpool was on the verge of bankruptcy and the club's creditors asked the High Court to allow the sale of the club, overruling the wishes of Hicks and Gillett. John W. Henry, owner of the Boston Red Sox and of Fenway Sports Group, bid successfully for the club and took ownership in October 2010. Poor results during the start of that season led to Hodgson leaving the club by mutual consent and former player and manager Kenny Dalglish taking over. In the 2011–12 season, Liverpool secured a record 8th League Cup success and reached the FA Cup final, but finished in eighth position, the worst league finish in 18 years; this led to the sacking of Dalglish. He was replaced by Brendan Rodgers, whose Liverpool team in the 2013–14 season mounted an unexpected title charge to finish second behind champions Manchester City and subsequently return to the Champions League, scoring 101 goals in the process, the most since the 106 scored in the 1895–96 season. Following a disappointing 2014–15 season, where Liverpool finished sixth in the league, and a poor start to the following campaign, Rodgers was sacked in October 2015.
Rodgers was replaced by Jürgen Klopp. Liverpool reached the finals of the Football League Cup and UEFA Europa League in Klopp's first season, finishing as runner-up in both competitions. Liverpool finished second in the 2018–19 season with 97 points, losing only one game: a points record for a non-title winning side. Klopp took Liverpool to successive Champions League finals in 2018 and 2019, with the club defeating Tottenham Hotspur 2–0 to win the 2019 UEFA Champions League Final. Liverpool also won the FIFA Club World Cup for the first time, beating Flamengo of Brazil 1–0 in the final. Liverpool then went on to win the 2019–20 Premier League, securing their first title in thirty years.
For much of Liverpool's history its home colours have been all red, but when the club was founded its kit was more like the contemporary Everton kit. The blue and white quartered shirts were used until 1894, when the club adopted the city's colour of red. The city's symbol of the liver bird was adopted as the club's badge in 1901, although it was not incorporated into the kit until 1955. Liverpool continued to wear red shirts and white shorts until 1964, when manager Bill Shankly decided to change to an all red strip. Liverpool played in all red for the first time against Anderlecht, as Ian St. John recalled in his autobiography:
The Liverpool away strip has more often than not been all yellow or white shirts and black shorts, but there have been several exceptions. An all grey kit was introduced in 1987, which was used until the 1991–92 centenary season, when it was replaced by a combination of green shirts and white shorts. After various colour combinations in the 1990s, including gold and navy, bright yellow, black and grey, and ecru, the club alternated between yellow and white away kits until the 2008–09 season, when it re-introduced the grey kit. A third kit is designed for European away matches, though it is also worn in domestic away matches on occasions when the current away kit clashes with a team's home kit. Between 2012 and 2015, the kits were designed by Warrior Sports, who became the club's kit providers at the start of the 2012–13 season. In February 2015, Warrior's parent company New Balance announced it would be entering the global football market, with teams sponsored by Warrior now being outfitted by New Balance. The only other branded shirts worn by the club were made by Umbro until 1985, when they were replaced by Adidas, who produced the kits until 1996 when Reebok took over. They produced the kits for 10 years before Adidas made the kits from 2006 to 2012. Nike will become the club's official kit supplier from the 2020–21 season.
Liverpool was the first English professional club to have a sponsor's logo on its shirts, after agreeing a deal with Hitachi in 1979. Since then the club has been sponsored by Crown Paints, Candy, Carlsberg and Standard Chartered. The contract with Carlsberg, which was signed in 1992, was the longest-lasting agreement in English top-flight football. The association with Carlsberg ended at the start of the 2010–11 season, when Standard Chartered Bank became the club's sponsor.
The Liverpool badge is based on the city's liver bird, which in the past had been placed inside a shield. In 1992, to commemorate the centennial of the club, a new badge was commissioned, including a representation of the Shankly Gates. The next year twin flames were added at either side, symbolic of the Hillsborough memorial outside Anfield, where an eternal flame burns in memory of those who died in the Hillsborough disaster. In 2012, Warrior Sports' first Liverpool kit removed the shield and gates, returning the badge to what had adorned Liverpool shirts in the 1970s; the flames were moved to the back collar of the shirt, surrounding the number 96 for the number who died at Hillsborough.
Anfield was built in 1884 on land adjacent to Stanley Park.
Situated 2 miles (3 km) from Liverpool city centre, it was originally used by Everton before the club moved to Goodison Park after a dispute over rent with Anfield owner John Houlding. Left with an empty ground, Houlding founded Liverpool in 1892 and the club has played at Anfield ever since. The capacity of the stadium at the time was 20,000, although only 100 spectators attended Liverpool's first match at Anfield.
The Kop was built in 1906 due to the high turnout for matches and was called the Oakfield Road Embankment initially. Its first game was on 1 September 1906 when the home side beat Stoke City 1–0. In 1906 the banked stand at one end of the ground was formally renamed the Spion Kop after a hill in KwaZulu-Natal. The hill was the site of the Battle of Spion Kop in the Second Boer War, where over 300 men of the Lancashire Regiment died, many of them from Liverpool. At its peak, the stand could hold 28,000 spectators and was one of the largest single-tier stands in the world. Many stadiums in England had stands named after Spion Kop, but Anfield's was the largest of them at the time; it could hold more supporters than some entire football grounds.
Anfield could accommodate more than 60,000 supporters at its peak and had a capacity of 55,000 until the 1990s, when, following recommendations from the "Taylor Report", all clubs in the Premier League were obliged to convert to all-seater stadiums in time for the 1993–94 season, reducing its capacity to 45,276. The findings of the report precipitated the redevelopment of the Kemlyn Road Stand, which was rebuilt in 1992, coinciding with the centenary of the club, and was known as the Centenary Stand until 2017 when it was renamed the Kenny Dalglish Stand. An extra tier was added to the Anfield Road end in 1998, which further increased the capacity of the ground but gave rise to problems when it was opened. A series of support poles and stanchions were inserted to give extra stability to the top tier of the stand after movement of the tier was reported at the start of the 1999–2000 season.
Because of restrictions on expanding the capacity at Anfield, Liverpool announced plans to move to the proposed Stanley Park Stadium in May 2002. Planning permission was granted in July 2004, and in September 2006, Liverpool City Council agreed to grant Liverpool a 999-year lease on the proposed site. Following the takeover of the club by George Gillett and Tom Hicks in February 2007, the proposed stadium was redesigned. The new design was approved by the Council in November 2007. The stadium was scheduled to open in August 2011 and would hold 60,000 spectators, with HKS, Inc. contracted to build the stadium. Construction was halted in August 2008, as Gillett and Hicks had difficulty in financing the £300 million needed for the development. In October 2012, BBC Sport reported that Fenway Sports Group, the new owners of Liverpool FC, had decided to redevelop their current home at Anfield, rather than building a new stadium in Stanley Park. As part of the redevelopment, the capacity of Anfield was to increase from 45,276 to approximately 60,000, at a cost of approximately £150m. When construction of the new Main Stand was completed, the capacity of Anfield increased to 54,074. This £100 million expansion added a third tier to the stand and was part of a £260 million project to improve the Anfield area. Jürgen Klopp, the manager at the time, described the stand as "impressive".
Liverpool is one of the best-supported clubs in the world. The club states that its worldwide fan base includes more than 200 officially recognised Supporters Clubs in at least 50 countries. Notable groups include Spirit of Shankly. The club takes advantage of this support through its worldwide summer tours, which have included playing in front of 101,000 spectators in Michigan, U.S., and 95,000 in Melbourne, Australia. Liverpool fans often refer to themselves as Kopites, a reference to the fans who once stood, and now sit, on the Kop at Anfield. In 2008 a group of fans formed a splinter club, A.F.C. Liverpool, to play matches for fans who had been priced out of watching Premier League football.
The song "You'll Never Walk Alone", originally from the Rodgers and Hammerstein musical "Carousel" and later recorded by Liverpool musicians Gerry and the Pacemakers, is the club's anthem and has been sung by the Anfield crowd since the early 1960s. It has since gained popularity among fans of other clubs around the world. The song's title adorns the top of the Shankly Gates, which were unveiled on 2 August 1982 in memory of former manager Bill Shankly. The "You'll Never Walk Alone" portion of the Shankly Gates is also reproduced on the club's crest.
The club's supporters have been involved in two stadium disasters. The first was the 1985 Heysel Stadium disaster, in which 39 Juventus supporters were killed. They were confined to a corner by Liverpool fans who had charged in their direction; the weight of the cornered fans caused a wall to collapse. UEFA laid the blame for the incident solely on the Liverpool supporters, and banned all English clubs from European competition for five years. Liverpool was banned for an additional year, preventing it from participating in the 1990–91 European Cup, even though it won the League in 1990. Twenty-seven fans were arrested on suspicion of manslaughter and were extradited to Belgium in 1987 to face trial. In 1989, after a five-month trial in Belgium, 14 Liverpool fans were given three-year sentences for involuntary manslaughter; half of the terms were suspended.
The second disaster took place during an FA Cup semi-final between Liverpool and Nottingham Forest at Hillsborough Stadium, Sheffield, on 15 April 1989. Ninety-six Liverpool fans died as a consequence of overcrowding at the Leppings Lane end, in what became known as the Hillsborough disaster. In the following days "The Sun" newspaper published an article entitled "The Truth", in which it claimed that Liverpool fans had robbed the dead and had urinated on and attacked the police. Subsequent investigations proved the allegations false, leading to a boycott of the newspaper by Liverpool fans across the city and elsewhere; many still refuse to buy "The Sun" 30 years later. Many support organisations were set up in the wake of the disaster, such as the Hillsborough Justice Campaign, which represents bereaved families, survivors and supporters in their efforts to secure justice.
Liverpool's longest-established rivalry is with fellow Liverpool team Everton, against whom they contest the Merseyside derby. The rivalry stems from Liverpool's formation and the dispute with Everton officials and the then owners of Anfield. The Merseyside derby is one of the few local derbies that do not enforce fan segregation, and hence has been known as the "friendly derby". Since the mid-1980s, the rivalry has intensified both on and off the field and, since the inception of the Premier League in 1992, the Merseyside derby has had more players sent off than any other Premier League fixture. It has been referred to as "the most ill-disciplined and explosive fixture in the Premier League". In terms of support within the city, Liverpool fans outnumber Everton supporters by a ratio of 2:1.
Liverpool's rivalry with Manchester United stems from the cities' competition in the Industrial Revolution of the 19th century. Ranked the two biggest clubs in England by "France Football" magazine, Liverpool and Manchester United are the most successful English teams in both domestic and international competitions, and both clubs have a global fanbase. Viewed as one of the biggest rivalries in world football, it is considered the most famous fixture in English football. The two clubs alternated as champions between 1964 and 1967, and Manchester United became the first English team to win the European Cup in 1968, followed by Liverpool's four European Cup victories. Despite the 39 league titles and nine European Cups between them, the two rivals have rarely been successful at the same time – Liverpool's run of titles in the 1970s and 1980s coincided with Manchester United's 26-year title drought, United's success in the Premier League era likewise coincided with Liverpool's 30-year title drought, and the two clubs have finished first and second in the league only five times. Such is the rivalry between the clubs that they rarely do transfer business with each other. The last player to be transferred between the two clubs was Phil Chisnall, who moved to Liverpool from Manchester United in 1964.
As the owner of Anfield and founder of Liverpool, John Houlding was the club's first chairman, a position he held from its founding in 1892 until 1904. John McKenna took over as chairman after Houlding's departure. McKenna subsequently became President of the Football League. The chairmanship changed hands many times before John Smith, whose father was a shareholder of the club, took up the role in 1973. He oversaw the most successful period in Liverpool's history before stepping down in 1990. His successor was Noel White, who became chairman in 1990. In August 1991 David Moores, whose family had owned the club for more than 50 years, became chairman. His uncle John Moores was also a shareholder at Liverpool and was chairman of Everton from 1961 to 1973. Moores owned 51 percent of the club, and in 2004 expressed his willingness to consider a bid for his shares in Liverpool.
Moores eventually sold the club to American businessmen George Gillett and Tom Hicks on 6 February 2007. The deal valued the club and its outstanding debts at £218.9 million. The pair paid £5,000 per share, or £174.1m for the total shareholding and £44.8m to cover the club's debts. Disagreements between Gillett and Hicks, and the fans' lack of support for them, resulted in the pair looking to sell the club. Martin Broughton was appointed chairman of the club on 16 April 2010 to oversee its sale. In May 2010, accounts were released showing the holding company of the club to be £350m in debt (due to the leveraged takeover) with losses of £55m, causing auditor KPMG to qualify its audit opinion. The group's creditors, including the Royal Bank of Scotland, took Gillett and Hicks to court to force them to allow the board to proceed with the sale of the club, the major asset of the holding company. A High Court judge, Mr Justice Floyd, ruled in favour of the creditors and paved the way for the sale of the club to Fenway Sports Group (formerly New England Sports Ventures), although Gillett and Hicks still had the option to appeal. Liverpool was sold to Fenway Sports Group on 15 October 2010 for £300m.
Liverpool has been described as a global brand; a 2010 report valued the club's trademarks and associated intellectual property at £141m, an increase of £5m on the previous year. Liverpool was given a brand rating of AA (Very Strong). In April 2010 business magazine "Forbes" ranked Liverpool as the sixth most valuable football team in the world, behind Manchester United, Real Madrid, Arsenal, Barcelona and Bayern Munich; they valued the club at $822m (£532m), excluding debt. Accountants Deloitte ranked Liverpool eighth in the Deloitte Football Money League, which ranks the world's football clubs in terms of revenue. Liverpool's income in the 2009–10 season was €225.3m. According to a 2018 report by Deloitte, the club had an annual revenue of €424.2 million for the previous year, and "Forbes" valued the club at $1.944 billion. In 2018, annual revenue increased to €513.7 million, and "Forbes" valued the club at $2.183 billion. In 2019 revenue increased to €604 million (£533 million) according to Deloitte, with the club passing the half-billion-pound mark.
In April 2020, the owners of the club came under fire from fans and the media for deciding to furlough all non-playing staff during the COVID-19 pandemic. In response, the club reversed the decision and apologised.
Liverpool featured in the first edition of BBC's "Match of the Day", which screened highlights of their match against Arsenal at Anfield on 22 August 1964. The first football match to be televised in colour was between Liverpool and West Ham United, broadcast live in March 1967. Liverpool fans featured in the Pink Floyd song "Fearless", in which they sang excerpts from "You'll Never Walk Alone". To mark the club's appearance in the 1988 FA Cup Final, Liverpool released the "Anfield Rap", a song featuring John Barnes and other members of the squad.
A documentary drama on the Hillsborough disaster, written by Jimmy McGovern, was screened in 1996. It featured Christopher Eccleston as Trevor Hicks, who lost two teenage daughters in the disaster and went on to campaign for safer stadiums, helping to form the Hillsborough Families Support Group. Liverpool featured in the film "The 51st State", in which ex-hitman Felix DeSouza (Robert Carlyle) is a keen supporter of the team and the last scene takes place at a match between Liverpool and Manchester United. The club also featured in the children's television show "Scully", about a young boy who tries to gain a trial with Liverpool.
Since the establishment of the club in 1892, 45 players have been club captain of Liverpool F.C. Andrew Hannah became the first captain of the club after Liverpool separated from Everton and formed its own club. Alex Raisbeck, who was club captain from 1899 to 1909, was the longest-serving captain before being overtaken by Steven Gerrard, who served 12 seasons as Liverpool captain starting from the 2003–04 season. The present captain is Jordan Henderson, who replaced Gerrard in the 2015–16 season following Gerrard's move to LA Galaxy.
Liverpool's first trophy was the Lancashire League, which it won in the club's first season. In 1901, the club won its first League title, while the nineteenth and most recent was in 2020. Its first success in the FA Cup was in 1965. In terms of the number of trophies won, Liverpool's most successful decade was the 1980s, when the club won six League titles, two FA Cups, four League Cups, one Football League Super Cup, five Charity Shields (one shared) and two European Cups.
The club has accumulated more top-flight wins and points than any other English team. Liverpool also has the highest average league finishing position (3.3) for the 50-year period to 2015 and second-highest average league finishing position for the period 1900–1999 after Arsenal, with an average league placing of 8.7.
Liverpool are the most successful British club in international football with fourteen trophies, having won the European Cup/UEFA Champions League, UEFA's premier club competition, six times, an English record surpassed only by Real Madrid and A.C. Milan. Liverpool's fifth European Cup win, in 2005, meant that the club was awarded the trophy permanently and was also awarded a multiple-winner badge. Liverpool also hold the English record of three wins in the UEFA Cup, UEFA's secondary club competition. In 2019, the club won the FIFA Club World Cup for the first time, and also became the first English club to win the international treble of Club World Cup, Champions League and UEFA Super Cup.
Lysosome
A lysosome () is a membrane-bound organelle found in many animal cells. They are spherical vesicles that contain hydrolytic enzymes that can break down many kinds of biomolecules. A lysosome has a specific composition of both its membrane proteins and its lumenal proteins. The lumen's pH (~4.5–5.0) is optimal for the enzymes involved in hydrolysis, analogous to the activity of the stomach. Besides degradation of polymers, the lysosome is involved in various cell processes, including secretion, plasma membrane repair, apoptosis, cell signaling, and energy metabolism.
Lysosomes act as the waste disposal system of the cell by digesting obsolete or unused materials in the cytoplasm, from both inside and outside the cell. Material from outside the cell is taken up through endocytosis, while material from the inside of the cell is digested through autophagy. The sizes of the organelles vary greatly—the larger ones can be more than 10 times the size of the smaller ones. They were discovered and named by Belgian biologist Christian de Duve, who eventually received the Nobel Prize in Physiology or Medicine in 1974.
Lysosomes are known to contain more than 60 different enzymes, and have more than 50 membrane proteins. Enzymes of the lysosomes are synthesised in the rough endoplasmic reticulum. The enzymes are imported from the Golgi apparatus in small vesicles, which fuse with larger acidic vesicles. Enzymes destined for a lysosome are specifically tagged with the molecule mannose 6-phosphate, so that they are properly sorted into acidified vesicles.
Synthesis of lysosomal enzymes is controlled by nuclear genes. Mutations in the genes for these enzymes are responsible for more than 30 different human genetic disorders, which are collectively known as lysosomal storage diseases. These diseases result from an accumulation of specific substrates, due to the inability to break them down. These genetic defects are related to several neurodegenerative disorders, cancers, cardiovascular diseases, and aging-related diseases.
Lysosomes should not be confused with liposomes, or with micelles.
Christian de Duve, the chairman of the Laboratory of Physiological Chemistry at the Catholic University of Louvain in Belgium, had been studying the mechanism of action of the pancreatic hormone insulin in liver cells. By 1949, he and his team had focused on the enzyme called glucose 6-phosphatase, which is the first crucial enzyme in sugar metabolism and the target of insulin. They already suspected that this enzyme played a key role in regulating blood sugar levels. However, even after a series of experiments, they failed to purify and isolate the enzyme from the cellular extracts. Therefore, they tried a more arduous procedure of cell fractionation, by which cellular components are separated based on their sizes using centrifugation.
They succeeded in detecting the enzyme activity in the microsomal fraction. This was the crucial step in the serendipitous discovery of lysosomes. To estimate this enzyme activity, they used that of the standardized enzyme acid phosphatase, and found that the activity was only 10% of the expected value. One day, the enzyme activity of purified cell fractions that had been refrigerated for five days was measured. Surprisingly, the enzyme activity had increased to that of a fresh sample. The result was the same no matter how many times they repeated the measurement, and led to the conclusion that a membrane-like barrier limited the accessibility of the enzyme to its substrate, and that the enzymes were able to diffuse out after a few days (and react with their substrate). They described this membrane-like barrier as a "saclike structure surrounded by a membrane and containing acid phosphatase."
It became clear that this enzyme from the cell fraction came from membranous fractions, which were definitely cell organelles, and in 1955 De Duve named them "lysosomes" to reflect their digestive properties. The same year, Alex B. Novikoff from the University of Vermont visited de Duve's laboratory, and successfully obtained the first electron micrographs of the new organelle. Using a staining method for acid phosphatase, de Duve and Novikoff confirmed the location of the hydrolytic enzymes of lysosomes using light and electron microscopic studies. De Duve won the Nobel Prize in Physiology or Medicine in 1974 for this discovery.
Originally, De Duve had termed the organelles the "suicide bags" or "suicide sacs" of the cells, for their hypothesized role in apoptosis. However, it has since been concluded that they only play a minor role in cell death.
Lysosomes contain a variety of enzymes, enabling the cell to break down various biomolecules it engulfs, including peptides, nucleic acids, carbohydrates, and lipids (lysosomal lipase). The enzymes responsible for this hydrolysis require an acidic environment for optimal activity.
In addition to being able to break down polymers, lysosomes are capable of fusing with other organelles and digesting large structures or cellular debris; through cooperation with phagosomes, they are able to conduct autophagy, clearing out damaged structures. Similarly, they are able to break down virus particles or bacteria in the phagocytosis conducted by macrophages.
The size of lysosomes varies from 0.1 μm to 1.2 μm. With a pH ranging from ~4.5–5.0, the interior of the lysosomes is acidic compared to the slightly basic cytosol (pH 7.2). The lysosomal membrane protects the cytosol, and therefore the rest of the cell, from the degradative enzymes within the lysosome. The cell is additionally protected from any lysosomal acid hydrolases that drain into the cytosol, as these enzymes are pH-sensitive and do not function well or at all in the alkaline environment of the cytosol. This ensures that cytosolic molecules and organelles are not destroyed in case there is leakage of the hydrolytic enzymes from the lysosome.
The lysosome maintains its pH differential by pumping in protons (H+ ions) from the cytosol across the membrane via proton pumps and chloride ion channels. Vacuolar-ATPases are responsible for transport of protons, while the counter transport of chloride ions is performed by ClC-7 Cl−/H+ antiporter. In this way a steady acidic environment is maintained.
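The magnitude of the gradient those pumps maintain follows directly from the pH values quoted above: because pH is the negative base-10 logarithm of the proton concentration, the lumen holds a few hundred times more free protons than the cytosol. A minimal sketch of that arithmetic (the pH values are the ones given in this article):

```python
# Fold-difference in free H+ concentration implied by a pH difference:
# pH = -log10([H+]), so [H+]_lysosome / [H+]_cytosol = 10 ** (pH_cyt - pH_lys).

def proton_ratio(ph_cytosol, ph_lysosome):
    """How many times more concentrated protons are in the lysosome."""
    return 10 ** (ph_cytosol - ph_lysosome)

# Cytosol ~7.2; lysosomal lumen ~4.5-5.0 (values from the text above).
print(proton_ratio(7.2, 5.0))  # ~158-fold
print(proton_ratio(7.2, 4.5))  # ~501-fold
```

So even the mild end of the lysosomal pH range requires the V-ATPases to sustain a proton concentration more than two orders of magnitude above cytosolic levels.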
The lysosome's versatile capacity for degradation comes from the import of enzymes with specificity for different substrates; cathepsins are the major class of hydrolytic enzymes, while lysosomal alpha-glucosidase is responsible for carbohydrates, and lysosomal acid phosphatase is necessary to release phosphate groups from phospholipids.
Many components of animal cells are recycled by transferring them inside or embedded in sections of membrane. For instance, in endocytosis (more specifically, macropinocytosis), a portion of the cell's plasma membrane pinches off to form vesicles that will eventually fuse with an organelle within the cell. Without active replenishment, the plasma membrane would continuously decrease in size. It is thought that lysosomes participate in this dynamic membrane exchange system and are formed by a gradual maturation process from endosomes.
The production of lysosomal proteins suggests one method of lysosome sustainment. Lysosomal protein genes are transcribed in the nucleus. mRNA transcripts exit the nucleus into the cytosol, where they are translated by ribosomes. The nascent peptide chains are translocated into the rough endoplasmic reticulum, where they are modified. Lysosomal soluble proteins exit the endoplasmic reticulum via CLN8-mediated recruitment in COPII-coated vesicles and enter the Golgi apparatus, where a specific lysosomal tag, mannose 6-phosphate, is added to the peptides. The presence of these tags allow for binding to mannose 6-phosphate receptors in the Golgi apparatus, a phenomenon that is crucial for proper packaging into vesicles destined for the lysosomal system.
Upon leaving the Golgi apparatus, the lysosomal enzyme-filled vesicle fuses with a late endosome, a relatively acidic organelle with an approximate pH of 5.5. This acidic environment causes dissociation of the lysosomal enzymes from the mannose 6-phosphate receptors. The enzymes are packed into vesicles for further transport to established lysosomes. The late endosome itself can eventually grow into a mature lysosome, as evidenced by the transport of endosomal membrane components from the lysosomes back to the endosomes.
As the endpoint of endocytosis, the lysosome also acts as a safeguard in preventing pathogens from being able to reach the cytoplasm before being degraded. Pathogens often hijack endocytotic pathways such as pinocytosis in order to gain entry into the cell. The lysosome prevents easy entry into the cell by hydrolyzing the biomolecules of pathogens necessary for their replication strategies; reduced lysosomal activity results in an increase in viral infectivity, including HIV. In addition, AB5 toxins such as cholera hijack the endosomal pathway while evading lysosomal degradation.
Lysosomes are involved in a group of genetically inherited deficiencies called lysosomal storage diseases (LSDs), inborn errors of metabolism caused by a dysfunction of one of the enzymes. The rate of incidence is estimated to be 1 in 5,000 births, and the true figure is expected to be higher, as many cases are likely to be undiagnosed or misdiagnosed. The primary cause is deficiency of an acid hydrolase. Other conditions are due to defects in the lysosomal membrane proteins that transport the enzymes, or in non-enzymatic soluble lysosomal proteins. The initial effect of such disorders is accumulation of specific macromolecules or monomeric compounds inside the endosomal–autophagic–lysosomal system. This results in abnormalities of signaling pathways, calcium homeostasis, lipid biosynthesis and degradation, and intracellular trafficking, ultimately leading to pathogenetic disorders. The organs most affected are the brain, viscera, bone and cartilage.
There is no direct medical treatment to cure LSDs. The most common LSD is Gaucher's disease, which is due to deficiency of the enzyme glucocerebrosidase. Consequently, the enzyme's substrate, the glycolipid glucosylceramide, accumulates, particularly in white blood cells, which in turn affects the spleen, liver, kidneys, lungs, brain and bone marrow. The disease is characterized by bruises, fatigue, anaemia, low blood platelets, osteoporosis, and enlargement of the liver and spleen. As of 2017, enzyme replacement therapy is available for treating 8 of the 50–60 known LSDs.
The most severe, and most rarely found, lysosomal storage disease is inclusion-cell disease.
Metachromatic leukodystrophy is another lysosomal storage disease that also affects sphingolipid metabolism.
Dysfunctional lysosome activity is also heavily implicated in the biology of aging, and age-related diseases such as Alzheimer's, Parkinson's, and cardiovascular disease.
Weak bases with lipophilic properties accumulate in acidic intracellular compartments like lysosomes. While the plasma and lysosomal membranes are permeable for neutral and uncharged species of weak bases, the charged protonated species of weak bases do not permeate biomembranes and accumulate within lysosomes. The concentration within lysosomes may reach levels 100 to 1000 fold higher than extracellular concentrations. This phenomenon is called lysosomotropism, "acid trapping" or "proton pump" effect. The amount of accumulation of lysosomotropic compounds may be estimated using a cell-based mathematical model.
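The acid-trapping arithmetic behind lysosomotropism can be sketched with the standard pH-partition (Henderson–Hasselbalch) model: only the neutral form of a weak base equilibrates across membranes, so the total concentration in a compartment scales with 1 + 10^(pKa − pH). This is a deliberately simplified single-base sketch, not the cell-based model the text refers to, and the pKa and pH values below are illustrative:

```python
# pH-partition sketch of "acid trapping": the neutral base B permeates
# membranes freely while the protonated BH+ is trapped, so at equilibrium
#   C_total(pH) is proportional to 1 + 10 ** (pKa - pH)
# and the lysosome-to-extracellular accumulation ratio is the quotient.

def accumulation_ratio(pka, ph_lysosome=4.8, ph_outside=7.4):
    """Predicted fold-accumulation of a monoprotic weak base in the lysosome."""
    inside = 1 + 10 ** (pka - ph_lysosome)
    outside = 1 + 10 ** (pka - ph_outside)
    return inside / outside

# A lipophilic weak base with pKa ~9, typical of many cationic drugs,
# lands in the 100- to 1000-fold range quoted in the text.
print(round(accumulation_ratio(9.0)))
```

Real cell-based models add membrane permeabilities, compartment volumes and multiple ionizable groups on top of this equilibrium term, but the exponential dependence on the pH difference is the core of the effect.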
A significant part of the clinically approved drugs are lipophilic weak bases with lysosomotropic properties. This explains a number of pharmacological properties of these drugs, such as high tissue-to-blood concentration gradients or long tissue elimination half-lives; these properties have been found for drugs such as haloperidol, levomepromazine, and amantadine. However, high tissue concentrations and long elimination half-lives are also explained by lipophilicity and absorption of drugs to fatty tissue structures. Important lysosomal enzymes, such as acid sphingomyelinase, may be inhibited by lysosomally accumulated drugs. Such compounds are termed FIASMAs (functional inhibitors of acid sphingomyelinase) and include, for example, fluoxetine, sertraline, and amitriptyline.
Ambroxol is a lysosomotropic drug of clinical use to treat conditions of productive cough for its mucolytic action. Ambroxol triggers the exocytosis of lysosomes via neutralization of lysosomal pH and calcium release from acidic calcium stores. Presumably for this reason, ambroxol has also been found to improve cellular function in some diseases of lysosomal origin, such as Parkinson's disease or lysosomal storage diseases.
Impaired lysosome function is prominent in systemic lupus erythematosus preventing macrophages and monocytes from degrading neutrophil extracellular traps and immune complexes. The failure to degrade internalized immune complexes stems from chronic mTORC2 activity, which impairs lysosome acidification. As a result, immune complexes in the lysosome recycle to the surface of macrophages causing an accumulation of nuclear antigens upstream of multiple lupus-associated pathologies.
By scientific convention, the term lysosome is applied to these vesicular organelles only in animals, and the term vacuole is applied to those in plants, fungi and algae (some animal cells also have vacuoles). Discoveries in plant cells since the 1970s started to challenge this definition. Plant vacuoles are found to be much more diverse in structure and function than previously thought. Some vacuoles contain their own hydrolytic enzymes and perform the classic lysosomal activity, which is autophagy. These vacuoles are therefore seen as fulfilling the role of the animal lysosome. Based on de Duve's description that "only when considered as part of a system involved directly or indirectly in intracellular digestion does the term lysosome describe a physiological unit", some botanists strongly argued that these vacuoles are lysosomes. However, this is not universally accepted as the vacuoles are strictly not similar to lysosomes, such as in their specific enzymes and lack of phagocytic functions. Vacuoles do not have catabolic activity and do not undergo exocytosis as lysosomes do.
The word "lysosome" (, ) is New Latin that uses the combining forms "lyso-" (referring to lysis and derived from the Latin "lysis", meaning "to loosen", via Ancient Greek λύσις [lúsis]), and "-some", from "soma", "body", yielding "body that lyses" or "lytic body". The adjectival form is "lysosomal". The forms "*lyosome" and "*lyosomal" are much rarer; they use the "lyo-" form of the prefix but are often treated by readers and editors as mere unthinking replications of typos, which has no doubt been true as often as not. | https://en.wikipedia.org/wiki?curid=18120 |
Lisp machine
Lisp machines are general-purpose computers designed to efficiently run Lisp as their main software and programming language, usually via hardware support. They are an example of a high-level language computer architecture, and in a sense, they were the first commercial single-user workstations. Despite being modest in number (perhaps 7,000 units total as of 1988), Lisp machines commercially pioneered many now-commonplace technologies, including effective garbage collection, laser printing, windowing systems, computer mice, high-resolution bit-mapped raster graphics, computer graphic rendering, and networking innovations such as Chaosnet. Several firms built and sold Lisp machines in the 1980s: Symbolics (3600, 3640, XL1200, MacIvory, and other models), Lisp Machines Incorporated (LMI Lambda), Texas Instruments (Explorer and MicroExplorer), and Xerox (Interlisp-D workstations). The operating systems were written in Lisp Machine Lisp, Interlisp (Xerox), and later partly in Common Lisp.
Artificial intelligence (AI) computer programs of the 1960s and 1970s intrinsically required what was then considered a huge amount of computer power, as measured in processor time and memory space. The power requirements of AI research were exacerbated by the Lisp symbolic programming language, when commercial hardware was designed and optimized for assembly- and Fortran-like programming languages. At first, the cost of such computer hardware meant that it had to be shared among many users. As integrated circuit technology shrank the size and cost of computers in the 1960s and early 1970s, and the memory needs of AI programs began to exceed the address space of the most common research computer, the DEC PDP-10, researchers considered a new approach: a computer designed specifically to develop and run large artificial intelligence programs, and tailored to the semantics of the Lisp language. To keep the operating system (relatively) simple, these machines would not be shared, but would be dedicated to single users.
In 1973, Richard Greenblatt and Thomas Knight, programmers at Massachusetts Institute of Technology (MIT) Artificial Intelligence Laboratory (AI Lab), began what would become the MIT Lisp Machine Project when they first began building a computer hardwired to run certain basic Lisp operations, rather than run them in software, in a 24-bit tagged architecture. The machine also did incremental (or "Arena") garbage collection. More specifically, since Lisp variables are typed at runtime rather than compile time, a simple addition of two variables could take five times as long on conventional hardware, due to test and branch instructions. Lisp Machines ran the tests in parallel with the more conventional single instruction additions. If the simultaneous tests failed, then the result was discarded and recomputed; this meant in many cases a speed increase by several factors. This simultaneous checking approach was used as well in testing the bounds of arrays when referenced, and other memory management necessities (not merely garbage collection or arrays).
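The overhead being eliminated can be illustrated in software. On conventional hardware, a dynamically typed add must inspect both operands' type tags with test-and-branch instructions before the arithmetic can proceed; the CONS-style machines ran the equivalent checks in hardware, in parallel with the add itself. A toy Python sketch of the serial software version that conventional machines were stuck with (tag names and encoding are illustrative, not the actual MIT design):

```python
# Toy model of a dynamically typed add on conventional hardware: the tag
# tests and branches run serially, *before* the arithmetic. A Lisp machine
# performed these checks in parallel with the add, discarding and recomputing
# the result only in the rare case that a check failed.

FIXNUM, OTHER = 0, 1  # illustrative tag values

def tagged_add(a, b):
    tag_a, val_a = a
    tag_b, val_b = b
    if tag_a == FIXNUM and tag_b == FIXNUM:   # the test-and-branch overhead
        return (FIXNUM, val_a + val_b)        # fast path: plain integer add
    raise TypeError("dispatch to generic (slow-path) arithmetic")

print(tagged_add((FIXNUM, 2), (FIXNUM, 3)))  # (0, 5)
```

Since nearly every primitive operation in Lisp pays this tax, hiding the checks behind the arithmetic is what made the roughly fivefold slowdown mentioned above recoverable in hardware.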
Type checking was further improved and automated when the conventional 32-bit word was lengthened to 36 bits for Symbolics 3600-model Lisp machines, and eventually to 40 bits or more (usually, the excess bits not accounted for by the following were used for error-correcting codes). The first group of extra bits was used to hold type data, making the machine a tagged architecture, and the remaining bits were used to implement CDR coding (wherein the usual linked list elements are compressed to occupy roughly half the space), aiding garbage collection by reportedly an order of magnitude. A further improvement was two microcode instructions which specifically supported Lisp functions, reducing the cost of calling a function to as little as 20 clock cycles in some Symbolics implementations.
The first machine was called the CONS machine (named after the list-construction operator cons in Lisp). It was often affectionately referred to as the "Knight machine", perhaps because Knight wrote his master's thesis on the subject; it was extremely well received. It was subsequently improved into a version called CADR (a pun; in Lisp, the cadr function, which returns the second item of a list, is pronounced by some like the word "cadre"), which was based on essentially the same architecture. About 25 of these essentially prototype CADRs were sold, both inside and outside MIT, for about $50,000 each. The CADR quickly became the favorite machine for hacking, and many of the most favored software tools were quickly ported to it (e.g., Emacs was ported from ITS in 1975). It was so well received at an AI conference held at MIT in 1978 that the Defense Advanced Research Projects Agency (DARPA) began funding its development.
In 1979, Russell Noftsker, convinced that Lisp machines had a bright commercial future due to the strength of the Lisp language and the enabling factor of hardware acceleration, proposed to Greenblatt that they commercialize the technology. In a counter-intuitive move for an AI Lab hacker, Greenblatt acquiesced, perhaps hoping that he could recreate the informal and productive atmosphere of the Lab in a real business. These ideas and goals were considerably different from those of Noftsker. The two negotiated at length, but neither would compromise. As the proposed firm could succeed only with the full and undivided assistance of the AI Lab hackers as a group, Noftsker and Greenblatt decided that the fate of the enterprise was up to them, and so the choice should be left to the hackers.
The ensuing discussions of the choice divided the lab into two factions. In February 1979, matters came to a head. The hackers sided with Noftsker, believing that a venture-funded commercial firm had a better chance of surviving and commercializing Lisp machines than Greenblatt's proposed self-sustaining start-up. Greenblatt lost the battle.
It was at this juncture that Symbolics, Noftsker's enterprise, slowly came together. While Noftsker was paying his staff a salary, he had no building or any equipment for the hackers to work on. He bargained with Patrick Winston that, in exchange for allowing Symbolics' staff to keep working out of MIT, Symbolics would let MIT use, internally and freely, all the software Symbolics developed. About eight months after the disastrous conference with Noftsker, a consultant from CDC, who was trying to put together a natural-language computer application with a group of West Coast programmers, came to Greenblatt seeking a Lisp machine for his group to work with. Greenblatt had decided to start his own rival Lisp machine firm, but had done nothing about it. The consultant, Alexander Jacobson, decided that the only way Greenblatt would start the firm and build the Lisp machines that Jacobson desperately needed was if Jacobson pushed and otherwise helped Greenblatt launch it. Jacobson pulled together business plans, a board, and a partner for Greenblatt (one F. Stephen Wyle). The newly founded firm was named "LISP Machine, Inc." (LMI), and was funded by CDC orders, via Jacobson.
Around this time Symbolics (Noftsker's firm) began operating. It had been hindered by Noftsker's promise to give Greenblatt a year's head start, and by severe delays in procuring venture capital. Symbolics still had the major advantage that while 3 or 4 of the AI Lab hackers had gone to work for Greenblatt, a solid 14 other hackers had signed onto Symbolics. Two AI Lab people were not hired by either: Richard Stallman and Marvin Minsky. Stallman, however, blamed Symbolics for the decline of the hacker community that had centered around the AI lab. For two years, from 1982 to the end of 1983, Stallman worked by himself to clone the output of the Symbolics programmers, with the aim of preventing them from gaining a monopoly on the lab's computers.
Regardless, after a series of internal battles, Symbolics did get off the ground in 1980/1981, selling the CADR as the LM-2, while Lisp Machines, Inc. sold it as the LMI-CADR. Symbolics did not intend to produce many LM-2s, since the 3600 family of Lisp machines was supposed to ship quickly, but the 3600s were repeatedly delayed, and Symbolics ended up producing ~100 LM-2s, each of which sold for $70,000. Both firms developed second-generation products based on the CADR: the Symbolics 3600 and the LMI-LAMBDA (of which LMI managed to sell ~200). The 3600, which shipped a year late, expanded on the CADR by widening the machine word to 36 bits, expanding the address space to 28 bits, and adding hardware to accelerate certain common functions that were implemented in microcode on the CADR. The LMI-LAMBDA, which came out a year after the 3600, in 1983, was compatible with the CADR (it could run CADR microcode), but hardware differences existed. Texas Instruments (TI) joined the fray when it licensed the LMI-LAMBDA design and produced its own variant, the TI Explorer. Some of the LMI-LAMBDAs and the TI Explorer were dual systems with both a Lisp and a Unix processor. TI also developed a 32-bit microprocessor version of its Lisp CPU for the TI Explorer. This Lisp chip was also used for the MicroExplorer, a NuBus board for the Apple Macintosh II (NuBus was initially developed at MIT for use in Lisp machines).
Symbolics continued to develop the 3600 family and its operating system, Genera, and produced the Ivory, a VLSI implementation of the Symbolics architecture. Starting in 1987, several machines based on the Ivory processor were developed: boards for Suns and Macs, stand-alone workstations and even embedded systems (I-Machine Custom LSI, 32 bit address, Symbolics XL-400, UX-400, MacIvory II; in 1989 available platforms were Symbolics XL-1200, MacIvory III, UX-1200, Zora, NXP1000 "pizza box"). Texas Instruments shrank the Explorer into silicon as the MicroExplorer which was offered as a card for the Apple Mac II. LMI abandoned the CADR architecture and developed its own K-Machine, but LMI went bankrupt before the machine could be brought to market. Before its demise, LMI was working on a distributed system for the LAMBDA using Moby space.
These machines had hardware support for various primitive Lisp operations (data type testing, CDR coding) and also for incremental garbage collection. They ran large Lisp programs very efficiently. The Symbolics machine was competitive against many commercial superminicomputers, but was never adapted for conventional purposes. The Symbolics Lisp machines were also sold into some non-AI markets, such as computer graphics, modeling, and animation.
The MIT-derived Lisp machines ran a Lisp dialect named Lisp Machine Lisp, descended from MIT's Maclisp. The operating systems were written from the ground up in Lisp, often using object-oriented extensions. Later, these Lisp machines also supported various versions of Common Lisp (with Flavors, New Flavors, and Common Lisp Object System (CLOS)).
Bolt, Beranek and Newman (BBN) developed its own Lisp machine, named Jericho, which ran a version of Interlisp. It was never marketed; frustrated, the whole AI group resigned, and most were hired by Xerox. Xerox Palo Alto Research Center had, simultaneously with Greenblatt's own development at MIT, been developing its own Lisp machines, designed to run Interlisp (and later Common Lisp). The same hardware was used with different software also as Smalltalk machines and as the Xerox Star office system. These included the Xerox 1100, "Dolphin" (1979); the Xerox 1132, "Dorado"; the Xerox 1108, "Dandelion" (1981); the Xerox 1109, "Dandetiger"; and the Xerox 1186/6085, "Daybreak". The operating system of the Xerox Lisp machines has also been ported to a virtual machine and is available for several platforms as a product named "Medley". The Xerox machine was well known for its advanced development environment (InterLisp-D), the ROOMS window manager, its early graphical user interface, and novel applications like NoteCards (one of the first hypertext applications).
Xerox also worked on a Lisp machine based on reduced instruction set computing (RISC), using the 'Xerox Common Lisp Processor' and planned to bring it to market by 1987, which did not occur.
In the mid-1980s, Integrated Inference Machines (IIM) built prototypes of Lisp machines named Inferstar.
In 1984–85 a UK firm, Racal-Norsk, a joint subsidiary of Racal and Norsk Data, attempted to repurpose Norsk Data's ND-500 supermini as a microcoded Lisp machine, running CADR software: the Knowledge Processing System (KPS).
There were several attempts by Japanese manufacturers to enter the Lisp machine market: the Fujitsu Facom-alpha mainframe co-processor, NTT's Elis, Toshiba's AI processor (AIP) and NEC's LIME. Several university research efforts produced working prototypes, among them are Kobe University's TAKITAC-7, RIKEN's FLATS, and Osaka University's EVLIS.
In France, two Lisp Machine projects arose: M3L at Toulouse Paul Sabatier University and later MAIA.
In Germany Siemens designed the RISC-based Lisp co-processor COLIBRI.
With the onset of the "AI winter" and the early beginnings of the microcomputer revolution, which would sweep away the minicomputer and workstation makers, cheaper desktop PCs soon could run Lisp programs even faster than Lisp machines, without any special-purpose hardware. With their high-profit-margin hardware business eliminated, most Lisp machine makers had gone out of business by the early 1990s, leaving only software-based firms like Lucid Inc., or hardware makers that had switched to software and services to avoid the crash. Besides Xerox, Symbolics is the only Lisp machine firm still operating, selling the Open Genera Lisp machine software environment and the Macsyma computer algebra system.
Several attempts to write open-source emulators for various Lisp machines have been made: CADR Emulation, Symbolics L Lisp Machine Emulation, the E3 Project (TI Explorer II Emulation), Meroko (TI Explorer I), and Nevermore (TI Explorer I). On 3 October 2005, MIT released the CADR Lisp Machine source code as open source.
In September 2014, Alexander Burger, developer of PicoLisp, announced PilMCU, an implementation of PicoLisp in hardware.
The Bitsavers' PDF Document Archive has PDF versions of the extensive documentation for the Symbolics Lisp Machines, the TI Explorer and MicroExplorer Lisp Machines and the Xerox Interlisp-D Lisp Machines.
Lisp machines were used mostly in the broad field of artificial intelligence applications, but also in computer graphics, medical image processing, and many other domains.
The main commercial expert systems of the 80s were available: Intellicorp's Knowledge Engineering Environment (KEE), Knowledge Craft, from The Carnegie Group Inc., and ART (Automated Reasoning Tool) from Inference Corporation.
Initially the Lisp machines were designed as personal workstations for software development in Lisp. They were used by one person and offered no multi-user mode. The machines provided a large black-and-white bitmap display, keyboard and mouse, network adapter, local hard disks, more than 1 MB of RAM, serial interfaces, and a local bus for extension cards. Color graphics cards, tape drives, and laser printers were optional.
The processor did not run Lisp directly, but was a stack machine with instructions optimized for compiled Lisp. The early Lisp machines used microcode to provide the instruction set. For several operations, type checking and dispatching were done in hardware at runtime. For example, a single generic addition operation could be used with various numeric types (integer, float, rational, and complex numbers). The result was a very compact compiled representation of Lisp code.
The following example uses a function that counts the number of elements of a list for which a predicate returns t (true).
The disassembled machine code for the above function (for the Ivory microprocessor from Symbolics):
Command: (disassemble (compile #'example-count))
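The Lisp source of that example and its Ivory disassembly are not reproduced in this extract. As an illustration only, a Python equivalent of the counting function described above (the name mirrors the Lisp example-count) might look like:

```python
def example_count(predicate, items):
    """Count the elements of a list for which `predicate` is true
    (a Python rendering of the counting function described above)."""
    count = 0
    for item in items:
        if predicate(item):
            count += 1
    return count

assert example_count(lambda x: x % 2 == 0, [1, 2, 3, 4, 5]) == 2
```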
The operating system used virtual memory to provide a large address space. Memory management was done with garbage collection. All code shared a single address space. All data objects were stored with a tag in memory, so that the type could be determined at runtime. Multiple execution threads were supported and termed "processes". All processes ran in the one address space.
All operating system software was written in Lisp. Xerox used Interlisp. Symbolics, LMI, and TI used Lisp Machine Lisp (descendant of MacLisp). With the appearance of Common Lisp, Common Lisp was supported on the Lisp Machines and some system software was ported to Common Lisp or later written in Common Lisp.
Some later Lisp machines (like the TI MicroExplorer, the Symbolics MacIvory or the Symbolics UX400/1200) were no longer complete workstations, but boards designed to be embedded in host computers: Apple Macintosh II and SUN 3 or 4.
Some Lisp machines, such as the Symbolics XL1200, had extensive graphics abilities using special graphics boards. These machines were used in domains like medical image processing, 3D animation, and CAD.
Links (web browser)
Links is an open-source text and graphics web browser with a pull-down menu system. It renders complex pages, has partial HTML 4.0 support (including tables, frames, and multiple character sets such as UTF-8), supports color and monochrome terminals, and allows horizontal scrolling.
It is intended for users who want to retain many typical elements of graphical user interfaces (pop-up windows, menus etc.) in a text-only environment.
The original version of Links was developed by Mikuláš Patočka in the Czech Republic. His group, "Twibright Labs", later developed version 2 of the Links browser, which displays graphics and renders fonts in different sizes (with spatial anti-aliasing), but no longer supports JavaScript (it did up to version 2.1pre28). The resulting browser is very fast, but does not display many pages as they were intended. The graphical mode works even on Unix systems without the X Window System or any other windowing environment, using either SVGALib or the framebuffer of the system's graphics card.
The graphics stack has several peculiarities that are unusual for a web browser. The fonts displayed by Links are not taken from the system, but are compiled into the binary as grayscale bitmaps in Portable Network Graphics (PNG) format. This allows the browser to be a single executable file independent of the system libraries; however, it increases the size of the executable to about 5 MB.
The fonts are anti-aliased without hinting and for small line pitch an artificial sharpening is employed to increase legibility. Subpixel sampling further increases legibility on LCD displays. This allowed Links to have anti-aliased fonts at a time when anti-aliased font libraries were uncommon.
All graphic elements (images and text) are first converted from a given gamma space (according to known or assumed gamma information in PNG, JPEG, etc.), through a known user gamma setting, into a 48-bits-per-pixel photometrically linear space, where they are resampled with bilinear resampling to the target size, possibly taking aspect-ratio correction into account. The data are then passed through a high-performance restartable dithering engine, which is used regardless of monitor bit depth, i.e., also for 24-bits-per-pixel colour. This Floyd-Steinberg dithering engine takes into account the gamma characteristics of the monitor and uses 768 KiB of dithering tables to avoid time-expensive calculations. A technique similar to self-modifying code, function templates, is used to maximize the speed of the dithering engine without resorting to assembly-language optimization, which is not portable.
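The core error-diffusion idea can be sketched in a few lines of Python. This is illustrative only, not Links source: the real engine additionally works in photometrically linear space, uses precomputed gamma and dithering tables, and is restartable.

```python
# A minimal one-channel Floyd-Steinberg error-diffusion sketch.
def floyd_steinberg(pixels, width, height, levels=2):
    """Dither row-major gray values in [0.0, 1.0] down to `levels` levels."""
    px = list(pixels)
    step = levels - 1
    for y in range(height):
        for x in range(width):
            i = y * width + x
            old = px[i]
            new = round(old * step) / step   # quantize to nearest level
            px[i] = new
            err = old - new
            # Push the quantization error onto not-yet-processed neighbours
            # with the classic 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < width:
                px[i + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    px[i + width - 1] += err * 3 / 16
                px[i + width] += err * 5 / 16
                if x + 1 < width:
                    px[i + width + 1] += err * 1 / 16
    return px

# A flat 50% gray field dithers to a mix of pure black and pure white:
out = floyd_steinberg([0.5] * 16, 4, 4)
assert all(v in (0.0, 1.0) for v in out)
```

Links performs this step after conversion to linear light, which is why its dithered output avoids the banding and fringing a naive gamma-space dither would produce.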
Images which are scaled down also use subpixel sampling on LCD to increase level of detail.
The reason for this high-quality processing is to provide properly realistic upsampling and downsampling of images, and photorealistic display regardless of the monitor's gamma, without the colour fringing caused by the 8-bit gamma correction built into the X server. It also increases the perceived colour depth beyond 24 bits per pixel.
Links has graphics drivers for the X server, the Linux framebuffer, svgalib, OS/2 PMShell, and the AtheOS GUI. This allows it to run in graphics mode even on platforms without an X server.
"Experimental/Enhanced Links" (ELinks) is a fork of Links led by Petr Baudis. It is based on Links 0.9. It has a more open development and incorporates patches from other Links versions (such as additional extension scripting in Lua) and from Internet users.
"Hacked Links" is another version of the Links browser which has merged some of Elinks' features into Links 2.
Andrey Mirtchovski has ported it to Plan 9 from Bell Labs. It is considered a good browser on that operating system, though some users have complained about its inability to cut and paste with the Plan 9 snarf buffer.
The last release of Hacked Links is that of July 9, 2003, with some further changes unreleased.
Links was also ported to run on the Sony PSP platform as PSPRadio by Rafael Cabezas with the last version (2.1pre23_PSP_r1261) released on February 6, 2007.
The BeOS port was updated by François Revol, who also added GUI support. It also runs on Haiku.
Learning object
A learning object is "a collection of content items, practice items, and assessment items that are combined based on a single learning objective". The term is credited to Wayne Hodgins, and dates from a working group in 1994 bearing the name. The concept encompassed by 'Learning Objects' is known by numerous other terms, including: content objects, chunks, educational objects, information objects, intelligent objects, knowledge bits, knowledge objects, learning components, media objects, reusable curriculum components, nuggets, reusable information objects, reusable learning objects, testable reusable units of cognition, training components, and units of learning.
The core idea of the use of learning objects is characterized by the following: discoverability, reusability, and interoperability. To support discoverability, learning objects are described by Learning Object Metadata, formalized as IEEE 1484.12 Learning object metadata. To support reusability, the IMS Consortium proposed a series of specifications such as the IMS Content package. And to support interoperability, the U.S. military's Advanced Distributed Learning organization created the Sharable Content Object Reference Model. Learning objects were designed in order to reduce the cost of learning, standardize learning content, and to enable the use and reuse of learning content by learning management systems.
The Institute of Electrical and Electronics Engineers (IEEE) defines a learning object as "any entity, digital or non-digital, that may be used for learning, education or training".
Chiappe defined Learning Objects as: "A digital self-contained and reusable entity, with a clear educational purpose, with at least three internal and editable components: content, learning activities and elements of context. The learning objects must have an external structure of information to facilitate their identification, storage and retrieval: the metadata."
The following definitions focus on the relation between learning object and digital media. RLO-CETL, a British inter-university Learning Objects Center, defines "reusable learning objects" as "web-based interactive chunks of e-learning designed to explain a stand-alone learning objective". Daniel Rehak and Robin Mason define it as "a digitized entity which can be used, reused or referenced during technology supported learning".
Adapting a definition from the Wisconsin Online Resource Center, Robert J. Beck suggests that learning objects have the following key characteristics:
The following is a list of some of the types of information that may be included in a learning object and its metadata:
One of the key issues in using learning objects is their identification by search engines or content management systems. This is usually facilitated by assigning descriptive learning object metadata. Just as a book in a library has a record in the card catalog, learning objects must also be tagged with metadata. The most important pieces of metadata typically associated with a learning object include:
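Although the original list of fields is not reproduced in this extract, a hypothetical record, loosely following the category structure of IEEE 1484.12.1 (LOM), can be sketched in Python; the field names here are illustrative, not the normative LOM element names:

```python
# A hypothetical metadata record for one learning object, loosely modelled
# on the category structure of IEEE 1484.12.1 (LOM). Field names are
# illustrative, not the normative LOM element names.
learning_object_metadata = {
    "general": {
        "title": "Introduction to Photosynthesis",
        "language": "en",
        "description": "A self-contained lesson with practice questions.",
        "keywords": ["biology", "photosynthesis", "plants"],
    },
    "lifecycle": {"version": "1.2", "status": "final"},
    "technical": {"format": "text/html", "size_bytes": 48213},
    "educational": {
        "learning_resource_type": "lesson",
        "typical_age_range": "14-16",
        "typical_learning_time": "PT30M",  # ISO 8601 duration: 30 minutes
    },
    "rights": {"cost": False},
}

def matches_keyword(metadata, keyword):
    """The kind of lookup a repository search engine performs on the tags."""
    return keyword.lower() in (k.lower() for k in metadata["general"]["keywords"])

assert matches_keyword(learning_object_metadata, "Biology")
assert not matches_keyword(learning_object_metadata, "chemistry")
```

Tagging like this is what turns a repository of content files into something searchable, just as catalog records make library books findable.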
A mutated learning object is, according to Michael Shaw, a learning object that has been "re-purposed and/or re-engineered, changed or simply re-used in some way different from its original intended design". Shaw also introduces the term "contextual learning object", to describe a learning object that has been "designed to have specific meaning and purpose to an intended learner". This may be useful if the intent involves just-in-time learning and the individual needs of individual learners.
Before any institution invests a great deal of time and energy into building high-quality e-learning content (which can cost over $10,000 per classroom hour), it needs to consider how this content can be easily loaded into a Learning Management System. It is possible for example, to package learning objects with SCORM specification and load it in Moodle Learning Management System or Desire2Learn Learning Environment.
If all of the properties of a course can be precisely defined in a common format, the content can be serialized into a standard format such as XML and loaded into other systems. When it is considered that some e-learning courses need to include video, mathematical equations using MathML, chemistry equations using CML and other complex structures, the issues become very complex, especially if the systems needs to understand and validate each structure and then place it correctly in a database.
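As a sketch of the serialization idea, assuming nothing about SCORM's actual manifest schema, course properties can be emitted as XML with Python's standard library; the element names below are hypothetical:

```python
# A sketch of serializing course properties into XML so another system can
# load them. Element names are hypothetical and do not follow SCORM's
# actual imsmanifest.xml schema.
import xml.etree.ElementTree as ET

def serialize_course(title, resources):
    """Build an XML document from a course title and (id, href) resources."""
    course = ET.Element("course")
    ET.SubElement(course, "title").text = title
    container = ET.SubElement(course, "resources")
    for identifier, href in resources:
        ET.SubElement(container, "resource", identifier=identifier, href=href)
    return ET.tostring(course, encoding="unicode")

xml_text = serialize_course("Intro Lesson",
                            [("item1", "lesson1.html"), ("item2", "quiz1.html")])

# The receiving system re-parses and validates the same document:
parsed = ET.fromstring(xml_text)
assert parsed.find("title").text == "Intro Lesson"
assert len(parsed.find("resources")) == 2
```

Embedded structures such as MathML or CML would appear as further nested elements, which is precisely where the validation and storage issues described above arise.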
In 2001, David Wiley criticized learning object theory in his paper, The Reusability Paradox which is summarized by D'Arcy Norman as, "If a learning object is useful in a particular context, by definition it is not reusable in a different context. If a learning object is reusable in many contexts, it isn’t particularly useful in any."
In Three Objections to Learning Objects and E-learning Standards, Norm Friesen, Canada Research Chair in E-Learning Practices at Thompson Rivers University, points out that the word "neutrality" in itself implies "a state or position that is antithetical ... to pedagogy and teaching."
Louisiana
Louisiana (, ) is a state in the Deep South region of the South Central United States. It is the 19th-smallest by area and the 25th most populous of the 50 U.S. states. Louisiana is bordered by the state of Texas to the west, Arkansas to the north, Mississippi to the east, and the Gulf of Mexico to the south. A large part of its eastern boundary is demarcated by the Mississippi River. Louisiana is the only U.S. state with political subdivisions termed parishes, which are equivalent to counties. The state's capital is Baton Rouge, and its largest city is New Orleans.
Much of the state's lands were formed from sediment washed down the Mississippi River, leaving enormous deltas and vast areas of coastal marsh and swamp. These contain a rich southern biota; typical examples include birds such as ibis and egrets. There are also many species of tree frogs, and fish such as sturgeon and paddlefish. In more elevated areas, fire is a natural process in the landscape and has produced extensive areas of longleaf pine forest and wet savannas. These support an exceptionally large number of plant species, including many species of terrestrial orchids and carnivorous plants. Louisiana has more Native American tribes than any other southern state, including four that are federally recognized, ten that are state recognized, and four that have not received recognition.
Some Louisiana urban environments have a multicultural, multilingual heritage, being so strongly influenced by a mixture of 18th-century French, Italian, Haitian, Spanish, Native American, and African cultures that they are considered to be exceptional in the U.S. Before the American purchase of the territory in 1803, the present-day State of Louisiana had been both a French colony and for a brief period a Spanish one. In addition, colonists imported numerous African people as slaves in the 18th century. Many came from peoples of the same region of West Africa, thus concentrating their culture. In the post-Civil War environment, Anglo-Americans increased the pressure for Anglicization, and in 1921, English was for a time made the sole language of instruction in Louisiana schools before a policy of multilingualism was revived in 1974. There has never been an official language in Louisiana, and the state constitution enumerates "the right of the people to preserve, foster, and promote their respective historic, linguistic, and cultural origins".
Like other states in the Deep South region, Louisiana frequently ranks low in terms of health, education, and development, and high in measures of poverty. In 2018, Louisiana was ranked as the least healthy state in the country, with high levels of drug-related deaths and excessive alcohol consumption, while it has had the highest homicide rate in the United States since at least the 1990s.
Louisiana was named after Louis XIV, King of France from 1643 to 1715. When René-Robert Cavelier, Sieur de La Salle claimed the territory drained by the Mississippi River for France, he named it "La Louisiane". The suffix ana (or ane) is a Latin suffix that can refer to "information relating to a particular individual, subject, or place". Thus, roughly, Louis + ana carries the idea of "related to Louis". Once part of the French Colonial Empire, the Louisiana Territory stretched from present-day Mobile Bay to just north of the present-day Canada–United States border, including a small part of what is now the Canadian provinces of Alberta and Saskatchewan.
The Gulf of Mexico did not exist 250 million years ago when there was but one supercontinent, Pangea. As Pangea split apart, the Atlantic Ocean and Gulf of Mexico opened. Louisiana slowly developed, over millions of years, from water into land, and from north to south. The oldest rocks are exposed in the north, in areas such as the Kisatchie National Forest. The oldest rocks date back to the early Cenozoic Era, some 60 million years ago. The history of the formation of these rocks can be found in D. Spearing's "Roadside Geology of Louisiana".
The youngest parts of the state were formed during the last 12,000 years as successive deltas of the Mississippi River: the Maringouin, Teche, St. Bernard, Lafourche, the modern Mississippi, and now the Atchafalaya. The sediments were carried from north to south by the Mississippi River.
In between the Tertiary rocks of the north, and the relatively new sediments along the coast, is a vast belt known as the Pleistocene Terraces. Their age and distribution can be largely related to the rise and fall of sea levels during past ice ages. In general, the northern terraces have had sufficient time for rivers to cut deep channels, while the newer terraces tend to be much flatter.
Salt domes are also found in Louisiana. Their origin can be traced back to the early Gulf of Mexico when the shallow ocean had high rates of evaporation. There are several hundred salt domes in the state; one of the most familiar is Avery Island, Louisiana. Salt domes are important not only as a source of salt; they also serve as underground traps for oil and gas.
Louisiana is bordered to the west by Texas; to the north by Arkansas; to the east by Mississippi; and to the south by the Gulf of Mexico.
The state may properly be divided into two parts, the uplands of the north, and the alluvial along the coast. The alluvial region includes low swamp lands, coastal marshlands and beaches, and barrier islands that cover about . This area lies principally along the Gulf of Mexico and the Mississippi River, which traverses the state from north to south for a distance of about and empties into the Gulf of Mexico; the Red River; the Ouachita River and its branches; and other minor streams (some of which are called bayous).
The breadth of the alluvial region along the Mississippi is from 10 to 60 miles (15 to 100 km), and along the other rivers, the alluvial region averages about 10 miles (15 km) across. The Mississippi River flows along a ridge formed by its natural deposits (known as a levee), from which the lands decline toward a river beyond at an average fall of six feet per mile (3 m/km). The alluvial lands along other streams present similar features.
The higher and contiguous hill lands of the north and northwestern part of the state have an area of more than . They consist of prairie and woodlands. The elevations above sea level range from 10 feet (3 m) at the coast and swamp lands to 50 and 60 feet (15–18 m) at the prairie and alluvial lands. In the uplands and hills, the elevations rise to Driskill Mountain, the highest point in the state at only 535 feet (163 m) above sea level. From 1932 to 2010, the state lost 1,800 square miles of land due to sea level rise and erosion. The Louisiana Coastal Protection and Restoration Authority (CPRA) spends around $1 billion per year, in both federal and state funding, to help shore up and protect Louisiana's shoreline and land.
Besides the waterways already named, there are the Sabine, forming the western boundary; the Pearl, the eastern boundary; the Calcasieu; the Mermentau; the Vermilion; Bayou Teche; the Atchafalaya; the Boeuf; Bayou Lafourche; the Courtableau River; Bayou D'Arbonne; the Macon River; the Tensas; the Amite River; the Tchefuncte; the Tickfaw; the Natalbany River; and a number of other smaller streams, constituting a natural system of navigable waterways aggregating over long.
The state also has political jurisdiction over the approximately -wide portion of subsea land of the inner continental shelf in the Gulf of Mexico. Through a peculiarity of the political geography of the United States, this is substantially less than the -wide jurisdiction of nearby states Texas and Florida, which, like Louisiana, have extensive Gulf coastlines.
The southern coast of Louisiana in the United States is among the fastest-disappearing areas in the world. This has largely resulted from human mismanagement of the coast (see Wetlands of Louisiana). At one time, the land was added to when spring floods from the Mississippi River added sediment and stimulated marsh growth; the land is now shrinking. There are multiple causes.
Artificial levees block spring flood water that would bring fresh water and sediment to marshes. Swamps have been extensively logged, leaving canals and ditches that allow salt water to move inland. Canals dug for the oil and gas industry also allow storms to move sea water inland, where it damages swamps and marshes. Rising sea waters have exacerbated the problem. Some researchers estimate that the state is losing a landmass equivalent to 30 football fields every day. There are many proposals to save coastal areas by reducing human damage, including restoring natural floods from the Mississippi. Without such restoration, coastal communities will continue to disappear. And as the communities disappear, more and more people are leaving the region. Since the coastal wetlands support an economically important coastal fishery, the loss of wetlands is adversely affecting this industry.
Louisiana has a humid subtropical climate (Köppen climate classification "Cfa"), with long, hot, humid summers and short, mild winters. The subtropical characteristics of the state are due to its low latitude, low lying topography, and the influence of the Gulf of Mexico, which at its farthest point is no more than away.
Rain is frequent throughout the year, although the period from April to September, the state's wet season, is slightly wetter than the rest of the year. There is a dip in precipitation in October. In summer, thunderstorms build during the heat of the day and bring intense but brief tropical downpours. In winter, rainfall is more frontal and less intense.
Summers in southern Louisiana have high temperatures from June through September averaging 90 °F (32 °C) or more, and overnight lows averaging above 70 °F (22 °C). At times, temperatures in the 90s F, combined with dew points in the upper 70s F, create sensible temperatures over . The humid, thick, jungle-like heat in southern Louisiana is a famous subject of countless stories and movies.
Temperatures are generally warm in the winter in the southern part of the state, with highs around New Orleans, Baton Rouge, the rest of south Louisiana, and the Gulf of Mexico averaging 66 °F (19 °C). The northern part of the state is mildly cool in the winter, with highs averaging 59 °F (15 °C). The overnight lows in the winter average well above freezing throughout the state, with 46 °F (8 °C) the average near the Gulf and an average low of 37 °F (3 °C) in the winter in the northern part of the state.
On occasion, cold fronts from low-pressure centers to the north, reach Louisiana in winter. Low temperatures near 20 °F (−8 °C) occur on occasion in the northern part of the state but rarely do so in the southern part of the state. Snow is rare near the Gulf of Mexico, although residents in the northern parts of the state might receive a dusting of snow a few times each decade. Louisiana's highest recorded temperature is in Plain Dealing on August 10, 1936, while the coldest recorded temperature is at Minden on February 13, 1899.
Louisiana is often affected by tropical cyclones and is very vulnerable to strikes by major hurricanes, particularly the lowlands around and in the New Orleans area. The unique geography of the region, with the many bayous, marshes and inlets, can result in water damage across a wide area from major hurricanes. The area is also prone to frequent thunderstorms, especially in the summer.
The entire state averages over 60 days of thunderstorms a year, more than any other state except Florida. Louisiana averages 27 tornadoes annually. The entire state is vulnerable to a tornado strike, with the extreme southern portion of the state slightly less so than the rest of the state. Tornadoes are more common from January to March in the southern part of the state, and from February through March in the northern part of the state.
Owing to its location and geology, the state has high biological diversity. Some vital areas, such as the southwestern prairie, have experienced a loss in excess of 98 percent. The pine flatwoods are also at great risk, mostly from fire suppression and urban sprawl. There is not yet a properly organized system of natural areas to represent and protect Louisiana's biological diversity. Such a system would consist of protected core areas linked by biological corridors, as Florida is planning.
Louisiana contains a number of areas that restrict human use to varying degrees. In addition to National Park Service areas and a United States National Forest, Louisiana operates a system of state parks, state historic sites, one state preservation area, one state forest, and many Wildlife Management Areas.
One of Louisiana's largest government-owned areas is Kisatchie National Forest. It is some 600,000 acres in area, more than half of which is flatwoods vegetation, which supports many rare plant and animal species. These include the Louisiana pine snake and red-cockaded woodpecker. The system of government-owned cypress swamps around Lake Pontchartrain is another large area, with southern wetland species including egrets, alligators, and sturgeon. At least 12 core areas would be needed to build a "protected areas system" for the state; these would range from southwestern prairies, to the Pearl River Floodplain in the east, to the Mississippi River alluvial swamps in the north.
Historic or scenic areas managed, protected, or otherwise recognized by the National Park Service include:
Louisiana operates a system of 22 state parks, 17 state historic sites and one state preservation area.
Louisiana has 955,973 acres in four ecoregions under the wildlife management of the Louisiana Department of Wildlife and Fisheries. The Nature Conservancy also owns and manages a set of natural areas.
The Louisiana Natural and Scenic Rivers System provides a degree of protection for 51 rivers, streams and bayous in the state. It is administered by the Louisiana Department of Wildlife and Fisheries.
The Louisiana Department of Transportation and Development is the state government organization in charge of maintaining public transportation, roadways, bridges, canals, select levees, floodplain management, port facilities, commercial vehicles, and aviation which includes 69 airports.
The Intracoastal Waterway is an important means of transporting commercial goods such as petroleum and petroleum products, agricultural produce, building materials and manufactured goods.
In 2011, Louisiana ranked among the five deadliest states for debris/litter-caused vehicle accidents per total number of registered vehicles and population size. Figures derived from the NHTSA show at least 25 persons in Louisiana were killed per year in motor vehicle collisions with non-fixed objects, including debris, dumped litter, animals and their carcasses.
Louisiana was inhabited by Native Americans for many millennia before the arrival of Europeans in the 16th century. During the Middle Archaic period, Louisiana was the site of the earliest mound complex in North America and one of the earliest dated, complex constructions in the Americas, the Watson Brake site near present-day Monroe. An 11-mound complex, it was built about 5400 BP (3500 BC). The Middle Archaic sites of Caney and Frenchman's Bend have also been securely dated to 5600–5000 BP (3700–3100 BC), demonstrating that seasonal hunter-gatherers organized to build complex earthwork constructions in present-day northern Louisiana. These discoveries overturned previous assumptions in archaeology that such complex mounds were built only by cultures of more settled peoples who were dependent on maize cultivation. The Hedgepeth Site in Lincoln Parish is more recent, dated to 5200–4500 BP (3300–2600 BC).
Nearly 2,000 years later, Poverty Point was built; it is the largest and best-known Late Archaic site in the state. The city of modern-day Epps developed near it. The Poverty Point culture may have reached its peak around 1500 BC, making it the first complex culture, and possibly the first tribal culture in North America. It lasted until approximately 700 BC.
The Poverty Point culture was followed by the Tchefuncte and Lake Cormorant cultures of the Tchula period, local manifestations of the Early Woodland period. The Tchefuncte people were the first in the area of Louisiana to make large amounts of pottery. These cultures lasted until AD 200. The Middle Woodland period started in Louisiana with the Marksville culture in the southern and eastern parts of the state, reaching across the Mississippi River to the east around Natchez, and the Fourche Maline culture in the northwestern part of the state. The Marksville culture was named after the Marksville Prehistoric Indian Site in Avoyelles Parish.
These cultures were contemporaneous with the Hopewell cultures of present-day Ohio and Illinois, and participated in the Hopewell Exchange Network. Trade with peoples to the southwest brought the bow and arrow. The first burial mounds were built at this time. Political power began to be consolidated, as the first platform mounds at ritual centers were constructed for the developing hereditary political and religious leadership.
By 400, the Late Woodland period had begun, marked by the Baytown, Troyville, and Coastal Troyville cultures of the Baytown period; these were succeeded by the Coles Creek cultures. Where the Baytown peoples built dispersed settlements, the Troyville people instead continued building major earthwork centers. Population increased dramatically, and there is strong evidence of growing cultural and political complexity. Many Coles Creek sites were erected over earlier Woodland period mortuary mounds. Scholars have speculated that emerging elites were symbolically and physically appropriating dead ancestors to emphasize and project their own authority.
The Mississippian period in Louisiana was when the Plaquemine and the Caddoan Mississippian cultures developed, and the peoples adopted extensive maize agriculture, cultivating different strains of the plant by saving seeds and selecting for desired characteristics. The Plaquemine culture in the lower Mississippi River Valley in western Mississippi and eastern Louisiana began in 1200 and continued to about 1600. Examples in Louisiana include the Medora Site in West Baton Rouge Parish, the archaeological type site whose characteristics helped define the culture; the Atchafalaya Basin Mounds in St. Mary Parish; the Fitzhugh Mounds in Madison Parish; the Scott Place Mounds in Union Parish; and the Sims Site in St. Charles Parish.
Plaquemine culture was contemporaneous with the Middle Mississippian culture that is represented by its largest settlement, the Cahokia site in Illinois east of St. Louis, Missouri. At its peak Cahokia is estimated to have had a population of more than 20,000. The Plaquemine culture is considered ancestral to the historic Natchez and Taensa peoples, whose descendants encountered Europeans in the colonial era.
By 1000 in the northwestern part of the state, the Fourche Maline culture had evolved into the Caddoan Mississippian culture. The Caddoan Mississippians occupied a large territory, including what is now eastern Oklahoma, western Arkansas, northeast Texas, and northwest Louisiana. Archaeological evidence has demonstrated that the cultural continuity is unbroken from prehistory to the present. The Caddo and related Caddo-language speakers, from prehistoric times through first European contact, were the direct ancestors of the modern Caddo Nation of Oklahoma. Significant Caddoan Mississippian archaeological sites in Louisiana include the Belcher Mound Site in Caddo Parish and the Gahagan Mounds Site in Red River Parish.
Many current place names in Louisiana, including Atchafalaya, Natchitouches (now spelled Natchitoches), Caddo, Houma, Tangipahoa, and Avoyel (as Avoyelles), are transliterations of those used in various Native American languages.
The first European explorers to visit Louisiana came in 1528 when a Spanish expedition led by Pánfilo de Narváez located the mouth of the Mississippi River. In 1542, Hernando de Soto's expedition skirted to the north and west of the state (encountering Caddo and Tunica groups) and then followed the Mississippi River down to the Gulf of Mexico in 1543. Spanish interest in Louisiana faded away for a century and a half.
In the late 17th century, French and French Canadian expeditions, which included sovereign, religious and commercial aims, established a foothold on the Mississippi River and Gulf Coast. With its first settlements, France laid claim to a vast region of North America and set out to establish a commercial empire and French nation stretching from the Gulf of Mexico to Canada.
In 1682, the French explorer Robert Cavelier de La Salle named the region Louisiana to honor King Louis XIV of France. The first permanent settlement, Fort Maurepas (at what is now Ocean Springs, Mississippi, near Biloxi), was founded in 1699 by Pierre Le Moyne d'Iberville, a French military officer from Canada. By then the French had also built a small fort at the mouth of the Mississippi at a settlement they named La Balise (or La Balize), "seamark" in French. By 1721 they built a wooden lighthouse-type structure here to guide ships on the river.
A royal ordinance of 1722—following the Crown's transfer of the Illinois Country's governance from Canada to Louisiana—may have featured the broadest definition of Louisiana: all land claimed by France south of the Great Lakes between the Rocky Mountains and the Alleghenies. A generation later, trade conflicts between Canada and Louisiana led to a more defined boundary between the French colonies; in 1745, Louisiana governor general Vaudreuil set the northern and eastern bounds of his domain as the Wabash valley up to the mouth of the Vermilion River (near present-day Danville, Illinois); from there, northwest to "le Rocher" on the Illinois River, and from there west to the mouth of the Rock River (at present day Rock Island, Illinois). Thus, Vincennes and Peoria were the limit of Louisiana's reach; the outposts at Ouiatenon (on the upper Wabash near present-day Lafayette, Indiana), Chicago, Fort Miamis (near present-day Fort Wayne, Indiana), and Prairie du Chien, Wisconsin, operated as dependencies of Canada.
The settlement of Natchitoches (along the Red River in present-day northwest Louisiana) was established in 1714 by Louis Juchereau de St. Denis, making it the oldest permanent European settlement in the modern state of Louisiana. The French settlement had two purposes: to establish trade with the Spanish in Texas via the Old San Antonio Road, and to deter Spanish advances into Louisiana. The settlement soon became a flourishing river port and crossroads, giving rise to vast cotton kingdoms along the river that were worked by imported African slaves. Over time, planters developed large plantations and built fine homes in a growing town. This became a pattern repeated in New Orleans and other places, although the commodity crop in the south was primarily sugar cane.
Louisiana's French settlements contributed to further exploration and outposts, concentrated along the banks of the Mississippi and its major tributaries, from Louisiana to as far north as the region called the Illinois Country, around present-day St. Louis, Missouri. The latter was settled by French colonists from Illinois.
Initially, Mobile and then Biloxi served as the capital of La Louisiane. Recognizing the importance of the Mississippi River to trade and military interests, and wanting to protect the capital from severe coastal storms, France developed New Orleans from 1722 as the seat of civilian and military authority south of the Great Lakes. From then until the United States acquired the territory in the Louisiana Purchase of 1803, France and Spain jockeyed for control of New Orleans and the lands west of the Mississippi.
In the 1720s, German immigrants settled along the Mississippi River, in a region referred to as the German Coast.
France ceded most of its territory to the east of the Mississippi to Great Britain in 1763, in the aftermath of Britain's victory in the Seven Years' War (generally referred to in North America as the French and Indian War). The rest of Louisiana, including the area around New Orleans and the parishes around Lake Pontchartrain, had become a colony of Spain by the Treaty of Fontainebleau (1762). The transfer of power on either side of the river would be delayed until later in the decade.
In 1765, during Spanish rule, several thousand French-speaking refugees from the region of Acadia (now Nova Scotia, New Brunswick, and Prince Edward Island, Canada) made their way to Louisiana after having been expelled from their homelands by the British during the French and Indian War. They settled chiefly in the southwestern Louisiana region now called Acadiana. The Spanish, eager to gain more Catholic settlers, welcomed the Acadian refugees, the ancestors of Louisiana's Cajuns.
Spanish Canary Islanders, called Isleños, emigrated from the Canary Islands of Spain to Louisiana under the Spanish crown between 1778 and 1783.
In 1800, France's Napoleon Bonaparte reacquired Louisiana from Spain in the Treaty of San Ildefonso, an arrangement kept secret for two years.
Jean-Baptiste Le Moyne, Sieur de Bienville brought the first two African slaves to Louisiana in 1708, transporting them from a French colony in the West Indies. In 1712, French financier Antoine Crozat obtained a monopoly of commerce in La Louisiane, which extended from the Gulf of Mexico to what is now Illinois. "That concession allowed him to bring in a cargo of blacks from Africa every year," the British historian Hugh Thomas wrote. Physical conditions, including disease, were so harsh there was high mortality among both the colonists and the slaves, resulting in continuing demand and importation of slaves.
Starting in 1719, traders began to import slaves in higher numbers; two French ships, the "Du Maine" and the "Aurore", arrived in New Orleans carrying more than 500 black slaves coming from Africa. Previous slaves in Louisiana had been transported from French colonies in the West Indies. By the end of 1721, New Orleans counted 1,256 inhabitants, of whom about half were slaves.
In 1724, the French government issued a law called the Code Noir ("Black Code" in English), which "regulate[d] the interaction of whites [blancs] and blacks [noirs]" in its colony of Louisiana (which was much larger than the current state of Louisiana). The law consisted of 57 articles, which regulated religion in the colony, outlawed "interracial" marriages (those between people of different skin color, the varying shades of which were also defined by law), restricted manumission, outlined legal punishment of slaves for various offenses, and defined some obligations of owners to their slaves. The main intent of the French government was to assert control over the slave system of agriculture in Louisiana and to impose restrictions on slaveowners there. In practice, the Code Noir was exceedingly difficult to enforce from afar. Some priests continued to perform interracial marriage ceremonies, for example, and some slaveholders continued to manumit slaves without permission while others punished slaves brutally.
Article II of the Code Noir of 1724 required owners to provide their slaves with religious education in the state religion, Roman Catholicism. Sunday was to be a day of rest for slaves. On days off, slaves were expected to feed and take care of themselves. During the 1740s economic crisis in the colony, owners had trouble feeding their slaves and themselves. Giving them time off also effectively gave more power to slaves, who started cultivating their own gardens and crafting items for sale as their own property. They began to participate in the economic development of the colony while at the same time increasing independence and self-subsistence.
Article VI of the Code Noir forbade mixed marriages; the Code also forbade, but did little to prevent, the rape of slave women by their owners, overseers or other slaves. On balance, the Code benefited the owners, but it provided more protections and flexibility than did the institution of slavery in the southern Thirteen Colonies.
The Louisiana Black Code of 1806 made the cruel punishment of slaves a crime, but owners and overseers were seldom prosecuted for such acts.
Fugitive slaves, called maroons, could easily hide in the backcountry of the bayous and survive in small settlements. The word "maroon" comes from the Spanish "cimarron", meaning "fugitive cattle".
In the late 18th century, the last Spanish governor of the Louisiana territory wrote:
"Truly, it is impossible for lower Louisiana to get along without slaves, and with the use of slaves, the colony had been making great strides toward prosperity and wealth."
When the United States purchased Louisiana in 1803, it was soon accepted that enslaved Africans could be brought to Louisiana as easily as they were brought to neighboring Mississippi, though it violated U.S. law to do so. Despite demands by United States Rep. James Hillhouse and by the pamphleteer Thomas Paine to enforce existing federal law against slavery in the newly acquired territory, slavery prevailed because it was the source of great profits and the lowest-cost labor.
At the start of the 19th century, Louisiana was a small producer of sugar with a relatively small number of slaves, compared to Saint-Domingue and the West Indies. It soon thereafter became a major sugar producer as new settlers arrived to develop plantations. William C. C. Claiborne, Louisiana's first United States governor, said African slave labor was needed because white laborers "cannot be had in this unhealthy climate". Hugh Thomas wrote that Claiborne was unable to enforce the abolition of the African slave trade, which the U.S. and Great Britain adopted in 1808. The United States continued to protect the domestic slave trade, including the coastwise trade—the transport of slaves by ship along the Atlantic Coast and to New Orleans and other Gulf ports.
By 1840, New Orleans had the biggest slave market in the United States, which contributed greatly to the economy of the city and of the state. New Orleans had become one of the wealthiest cities, and the third largest city, in the nation. The ban on the African slave trade and importation of slaves had increased demand in the domestic market. During the decades after the American Revolutionary War, more than one million enslaved African Americans underwent forced migration from the Upper South to the Deep South, two thirds of them in the slave trade. Others were transported by their owners as slaveholders moved west for new lands.
With changing agriculture in the Upper South as planters shifted from tobacco to less labor-intensive mixed agriculture, planters had excess laborers. Many sold slaves to traders to take to the Deep South. Slaves were driven by traders overland from the Upper South or transported to New Orleans and other coastal markets by ship in the coastwise slave trade. After sales in New Orleans, steamboats operating on the Mississippi transported slaves upstream to markets or plantation destinations at Natchez and Memphis.
Spanish occupation of Louisiana lasted from 1769 to 1800. Beginning in the 1790s, waves of immigration took place from Saint-Domingue, following a slave rebellion that started in 1791. Over the next decade, thousands of migrants landed in Louisiana from the island, including ethnic Europeans, free people of color, and African slaves, some of the latter brought in by each free group. They greatly increased the French-speaking population in New Orleans and Louisiana, as well as the number of Africans, and the slaves reinforced African culture in the city. The process of gaining independence in Saint-Domingue was complex, but uprisings continued. In 1803, France pulled its surviving troops out of the island, having lost two-thirds of the forces sent there two years before, mostly to yellow fever. In 1804, Haiti, the second republic in the western hemisphere, proclaimed its independence, achieved by slave leaders.
Pierre Clément de Laussat (Governor, 1803) said: "Saint-Domingue was, of all our colonies in the Antilles, the one whose mentality and customs influenced Louisiana the most."
When the United States won its independence from Great Britain in 1783, one of its major concerns was having a European power on its western boundary, and the need for unrestricted access to the Mississippi River. As American settlers pushed west, they found that the Appalachian Mountains provided a barrier to shipping goods eastward. The easiest way to ship produce was to use a flatboat to float it down the Ohio and Mississippi Rivers to the port of New Orleans, where goods could be put on ocean-going vessels. The problem with this route was that the Spanish owned both sides of the Mississippi below Natchez.
Napoleon's ambitions in Louisiana involved the creation of a new empire centered on the Caribbean sugar trade. By the terms of the Treaty of Amiens of 1802, Great Britain returned ownership of the islands of Martinique and Guadeloupe to the French. Napoleon looked upon Louisiana as a depot for these sugar islands, and as a buffer to U.S. settlement. In October 1801 he sent a large military force to take back Saint-Domingue, then under control of Toussaint Louverture after a slave rebellion.
When the army led by Napoleon's brother-in-law Leclerc was defeated, Napoleon decided to sell Louisiana.
Thomas Jefferson, third president of the United States, was disturbed by Napoleon's plans to re-establish French colonies in America. With the possession of New Orleans, Napoleon could close the Mississippi to U.S. commerce at any time. Jefferson authorized Robert R. Livingston, U.S. Minister to France, to negotiate for the purchase of the City of New Orleans, portions of the east bank of the Mississippi, and free navigation of the river for U.S. commerce. Livingston was authorized to pay up to $2 million.
An official transfer of Louisiana to French ownership had not yet taken place, and Napoleon's deal with the Spanish was a poorly kept secret on the frontier. On October 18, 1802, however, Juan Ventura Morales, Acting Intendant of Louisiana, made public the intention of Spain to revoke the right of deposit at New Orleans for all cargo from the United States. The closure of this vital port to the United States caused anger and consternation. Commerce in the west was virtually blockaded. Historians believe the revocation of the right of deposit was prompted by abuses by the Americans, particularly smuggling, and not by French intrigues as was believed at the time. President Jefferson ignored public pressure for war with France, and appointed James Monroe a special envoy to Napoleon, to assist in obtaining New Orleans for the United States. Jefferson also raised the authorized expenditure to $10 million.
However, on April 11, 1803, French Foreign Minister Talleyrand surprised Livingston by asking how much the United States was prepared to pay for the entirety of Louisiana, not just New Orleans and the surrounding area (as Livingston's instructions covered). Monroe agreed with Livingston that Napoleon might withdraw this offer at any time (leaving them with no ability to obtain the desired New Orleans area), and that approval from President Jefferson might take months, so Livingston and Monroe decided to open negotiations immediately. By April 30, they closed a deal for the purchase of the entire Louisiana territory for 60 million francs (approximately $15 million).
Part of this sum, $3.5 million, was used to forgive debts owed by France to the United States. The payment was made in United States bonds, which Napoleon sold to the Dutch firm of Hope and Company and the British banking house of Baring at a discount of 87½ per $100 of face value. As a result, France received only $8,831,250 in cash for Louisiana. English banker Alexander Baring conferred with Marbois in Paris, shuttled to the United States to pick up the bonds, took them to Britain, and returned to France with the money—which Napoleon used to wage war against Baring's own country.
When news of the purchase reached the United States, Jefferson was surprised. He had authorized the expenditure of $10 million for a port city, and instead received treaties committing the government to spend $15 million on a land package which would double the size of the country. Jefferson's political opponents in the Federalist Party argued the Louisiana purchase was a worthless desert, and that the Constitution did not provide for the acquisition of new land or negotiating treaties without the consent of the Senate. What really worried the opposition was the new states which would inevitably be carved from the Louisiana territory, strengthening Western and Southern interests in Congress, and further reducing the influence of New England Federalists in national affairs. President Jefferson was an enthusiastic supporter of westward expansion, and held firm in his support for the treaty. Despite Federalist objections, the U.S. Senate ratified the Louisiana treaty on October 20, 1803.
By statute enacted on October 31, 1803, President Thomas Jefferson was authorized to take possession of the territories ceded by France and provide for initial governance. A transfer ceremony was held in New Orleans on November 29, 1803. Since the Louisiana territory had never officially been turned over to the French, the Spanish took down their flag, and the French raised theirs. The following day, General James Wilkinson accepted possession of New Orleans for the United States. A similar ceremony was held in St. Louis on March 9, 1804, when a French tricolor was raised near the river, replacing the Spanish national flag. The following day, Captain Amos Stoddard of the First U.S. Artillery marched his troops into town and had the American flag run up the fort's flagpole. The Louisiana territory was officially transferred to the United States government, represented by Meriwether Lewis.
The Louisiana Territory, purchased for less than three cents an acre, doubled the size of the United States overnight, without a war or the loss of a single American life, and set a precedent for the purchase of territory. It opened the way for the eventual expansion of the United States across the continent to the Pacific.
Shortly after the United States took possession, the area was divided into two territories along the 33rd parallel north on March 26, 1804, thereby organizing the Territory of Orleans to the south and the District of Louisiana (subsequently formed as the Louisiana Territory) to the north.
Louisiana became the eighteenth U.S. state on April 30, 1812; the Territory of Orleans became the State of Louisiana and the Louisiana Territory was simultaneously renamed the Missouri Territory. An area known as the Florida Parishes was soon annexed into the state of Louisiana on April 14, 1812.
From 1824 to 1861, Louisiana moved from a political system based on personality and ethnicity to a distinct two-party system, with Democrats competing first against "Whigs", then "Know Nothings", and finally only other Democrats.
According to the 1860 census, 331,726 people were enslaved, nearly 47% of the state's total population of 708,002. The strong economic interest of elite whites in maintaining the slave society contributed to Louisiana's decision to secede. Following other Southern states after the election of Abraham Lincoln as president of the United States, Louisiana declared its secession from the Union on January 26, 1861, and became part of the Confederate States of America.
The state was quickly defeated in the Civil War, a result of Union strategy to cut the Confederacy in two by seizing the Mississippi. Federal troops captured New Orleans on April 25, 1862. Because a large part of the population had Union sympathies (or compatible commercial interests), the federal government took the unusual step of designating the areas of Louisiana under federal control as a state within the Union, with its own elected representatives to the U.S. Congress.
Following the Civil War and emancipation of slaves, violence rose in the South as the war was carried on by insurgent private and paramilitary groups. Initially, state legislatures were dominated by former Confederates, who passed Black Codes to regulate freedmen and generally refused to extend the vote, even to African Americans who had been free before the war and had sometimes obtained education and property (as in New Orleans). Following the Memphis riots of 1866 and the New Orleans riot the same year, the Fourteenth Amendment was passed, providing full citizenship for freedmen. Congress passed the Reconstruction Act, establishing military districts in the states where conditions were considered the worst, including Louisiana, which was grouped with Texas in the Fifth Military District.
African Americans began to live as citizens with some measure of equality before the law. Both freedmen and people of color who had been free before the war began to make more advances in education, family stability and jobs. At the same time, there was tremendous social volatility in the aftermath of war, with many whites actively resisting defeat and the free labor market. White insurgents mobilized to enforce white supremacy, first in Ku Klux Klan chapters.
By 1877, when federal forces were withdrawn, white Democrats in Louisiana and other states had regained control of state legislatures, often by paramilitary groups such as the White League, which suppressed black voting through intimidation and violence. Following Mississippi's example of 1890, in 1898 the white Democratic, planter-dominated legislature passed a new constitution that effectively disenfranchised people of color by raising barriers to voter registration, such as poll taxes, residency requirements and literacy tests. The effect was immediate and long lasting. In 1896, there were 130,334 black voters on the rolls and about the same number of white voters, reflecting a state population that was roughly evenly divided between the races.
The state population in 1900 was 47% African-American: a total of 652,013 citizens. Many in New Orleans were descendants of Creoles of color, the sizeable population of free people of color before the Civil War. By 1900, two years after the new constitution, only 5,320 black voters were registered in the state. Because of disfranchisement, by 1910 there were only 730 black voters (less than 0.5 percent of eligible African-American men), despite advances in education and literacy among blacks and people of color. Blacks were excluded from the political system and also unable to serve on juries. White Democrats had established one-party Democratic rule, which they maintained in the state for decades deep into the 20th century until after congressional passage of the 1965 Voting Rights Act provided federal oversight and enforcement of the constitutional right to vote.
In the early decades of the 20th century, thousands of African Americans left Louisiana in the Great Migration north to industrial cities for jobs and education, and to escape Jim Crow society and lynchings. The boll weevil infestation and agricultural problems cost many sharecroppers and farmers their jobs. The mechanization of agriculture also reduced the need for laborers. Beginning in the 1940s, blacks went West to California for jobs in its expanding defense industries.
During much of the Great Depression, Louisiana was led by Governor Huey Long, who was elected to office on populist appeal. His public works projects provided thousands of jobs to people in need, and he supported education and increased suffrage for poor whites, but Long was criticized for his allegedly demagogic and autocratic style. He extended patronage control through every branch of Louisiana's state government. Especially controversial were his plans for wealth redistribution in the state. Long's rule ended abruptly when he was assassinated in the state capitol in 1935.
Mobilization for World War II created jobs in the state. But thousands of other workers, black and white alike, migrated to California for better jobs in its burgeoning defense industry. Many African Americans left the state in the Second Great Migration, from the 1940s through the 1960s, to escape social oppression and seek better jobs; the mechanization of agriculture in the 1930s had sharply cut the need for laborers. They sought skilled jobs in the defense industry in California, better education for their children, and the chance to live in communities where they could vote.
On November 26, 1958, at Chennault Air Force Base, a USAF B-47 bomber with a nuclear weapon on board developed a fire while on the ground. The aircraft wreckage and the site of the accident were contaminated after a limited explosion of non-nuclear material.
In the 1950s the state created new requirements for a citizenship test for voter registration. Despite opposition by the States Rights Party, downstate black voters had begun to increase their rate of registration, which also reflected the growth of their middle classes. In 1960 the state established the Louisiana State Sovereignty Commission, to investigate civil rights activists and maintain segregation.
Despite this, black voter registration and turnout gradually increased to 20% and more, reaching 32% by 1964, when the first national civil rights legislation of the era was passed. Black registration rates varied widely across the state during these years, from 93.8% in Evangeline Parish to 1.7% in Tensas Parish, a black-majority parish where whites worked to suppress the vote.
Violent attacks on civil rights activists in two mill towns were catalysts to the founding of the first two chapters of the Deacons for Defense and Justice in late 1964 and early 1965, in Jonesboro and Bogalusa, respectively. Made up of veterans of World War II and the Korean War, they were armed self-defense groups established to protect activists and their families. Continued violent white resistance in Bogalusa to blacks trying to use public facilities in 1965, following passage of the Civil Rights Act of 1964, caused the federal government to order local police to protect the activists. Other chapters were formed in Louisiana, Mississippi, and Alabama.
By 1960 the proportion of African Americans in Louisiana had dropped to 32%. The 1,039,207 black citizens were still suppressed by segregation and disfranchisement. African Americans continued to suffer disproportionate discriminatory application of the state's voter registration rules. Because of better opportunities elsewhere, from 1965 to 1970 blacks continued to migrate out of Louisiana, for a net loss of more than 37,000 people. Even so, natural increase outpaced out-migration: based on official census figures, the African-American population in 1970 stood at 1,085,109, a net gain of more than 46,000 people compared to 1960. During the latter period, some people began to migrate to cities of the New South for opportunities. Since that period, blacks have entered the political system and begun to be elected to office, as well as gaining other opportunities.
On May 21, 1919, the U.S. House of Representatives passed the Nineteenth Amendment to the United States Constitution, giving women the right to vote; it became law throughout the United States upon ratification on August 18, 1920. Louisiana did not formally ratify the amendment until June 11, 1970.
Due to its location on the Gulf Coast, Louisiana has regularly suffered the effects of tropical storms and damaging hurricanes. On August 29, 2005, New Orleans and many other low-lying parts of the state along the Gulf of Mexico were hit by the catastrophic Hurricane Katrina. It caused widespread damage due to breaching of levees and large-scale flooding of more than 80% of the city. Officials had issued warnings to evacuate the city and nearby areas, but tens of thousands of people, mostly African Americans, stayed behind, many of them stranded. Many people died, and survivors endured extensive damage from the widespread floodwaters.
In August 2016, an unnamed storm dumped trillions of gallons of rain on southern Louisiana, including the cities of Denham Springs, Baton Rouge, Gonzales, St. Amant and Lafayette, causing catastrophic flooding. An estimated 110,000 homes were damaged and thousands of residents were displaced.
The United States Census Bureau estimates that the population of Louisiana was 4,648,794 on July 1, 2019, a 2.55% increase since the 2010 United States Census. The population density of the state is 104.9 people per square mile.
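The cited growth figure can be checked against the 2010 decennial count. As a rough sketch (the 2010 census count of 4,533,372 and a land area of about 43,204 square miles are assumptions not stated above):

```python
# Rough check of the cited Louisiana population figures.
POP_2019_EST = 4_648_794      # July 1, 2019 estimate, from the text
POP_2010_CENSUS = 4_533_372   # assumed 2010 decennial census count
LAND_AREA_SQ_MI = 43_204      # assumed land area in square miles

growth_pct = (POP_2019_EST - POP_2010_CENSUS) / POP_2010_CENSUS * 100
density = POP_2010_CENSUS / LAND_AREA_SQ_MI  # density on the census count

print(f"growth since 2010: {growth_pct:.2f}%")   # ~2.55%
print(f"density: {density:.1f} people/sq mi")    # ~104.9
```

Note that the stated density of 104.9 per square mile matches the 2010 census count rather than the 2019 estimate, suggesting it was not updated alongside the population figure.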
The center of population of Louisiana is located in Pointe Coupee Parish, in the city of New Roads.
According to the 2010 United States Census, 5.4% of the population age 5 and older spoke Spanish at home, up from 3.5% in 2000; and 4.5% spoke French (including Louisiana French and Louisiana Creole), down from 4.8% in 2000.
According to U.S. census estimates, the population of Louisiana in 2014 was:
The major ancestry groups of Louisiana are African American (30.4%), French (16.8%), American (9.5%), German (8.3%), Irish (7.5%), English (6.6%), Italian (4.8%) and Scottish (1.1%).
As of 2011, 49.0% of Louisiana's population younger than age 1 were minorities.
The largest denominations by number of adherents in 2010 were the Catholic Church with 1,200,900; Southern Baptist Convention with 709,650; and the United Methodist Church with 146,848. Non-denominational Evangelical Protestant congregations had 195,903 members.
As in other Southern states, the majority of Louisianians, particularly in the north of the state, belong to various Protestant denominations, with Protestants comprising 57% of the state's adult population. Protestants are concentrated in the northern and central parts of the state and in the northern tier of the Florida Parishes. Because of French and Spanish heritage, and their descendants the Creoles, and later Irish, Italian, Portuguese and German immigrants, southern Louisiana and the greater New Orleans area are predominantly Catholic.
Since Creoles were the first settlers, planters and leaders of the territory, they have traditionally been well represented in politics. For instance, most of the early governors were Creole Catholics. Because Catholics still constitute a significant fraction of Louisiana's population, they have continued to be influential in state politics; at times both U.S. senators and the governor have been Catholic. The high proportion and influence of the Catholic population makes Louisiana distinct among Southern states.
Jewish communities are established in the state's larger cities, notably New Orleans and Baton Rouge. The most significant of these is the Jewish community of the New Orleans area. In 2000, before Hurricane Katrina in 2005, its population was about 12,000. Louisiana was among the southern states with a significant Jewish population before the 20th century; Virginia, South Carolina, and Georgia also had influential Jewish populations in some of their major cities from the 18th and 19th centuries. The earliest Jewish colonists were Sephardic Jews who immigrated with English colonists from London. Later in the 19th century, German Jews began to immigrate, followed by those from eastern Europe and the Russian Empire in the late 19th and early 20th centuries.
Prominent Jews in Louisiana's political leadership have included Whig (later Democrat) Judah P. Benjamin (1811–1884), who represented Louisiana in the U.S. Senate before the American Civil War and then became the Confederate secretary of state; Democrat-turned-Republican Michael Hahn, who was elected as governor, serving 1864–1865 when Louisiana was occupied by the Union Army, and later elected in 1884 as a U.S. congressman; Democrat Adolph Meyer (1842–1908), a Confederate Army officer who represented the state in the U.S. House of Representatives from 1891 until his death in 1908; Republican secretary of state Jay Dardenne (1954–); and Republican (Democrat before 2011) attorney general Buddy Caldwell (1946–).
The total gross state product in 2010 for Louisiana was $213.6 billion, placing it 24th in the nation. Its per capita personal income is $30,952, ranking 41st in the United States.
In 2014, Louisiana was ranked as one of the most small business friendly states, based on a study drawing upon data from more than 12,000 small business owners.
The state's principal agricultural products include seafood (it is the biggest producer of crawfish in the world, supplying approximately 90%), cotton, soybeans, cattle, sugarcane, poultry and eggs, dairy products, and rice. Industry generates chemical products, petroleum and coal products, processed foods and transportation equipment, and paper products. Tourism is an important element in the economy, especially in the New Orleans area.
The Port of South Louisiana, located on the Mississippi River between New Orleans and Baton Rouge, is the largest volume shipping port in the Western Hemisphere and 4th largest in the world, as well as the largest bulk cargo port in the world.
New Orleans, Shreveport, and Baton Rouge are home to a thriving film industry. State financial incentives since 2002 and aggressive promotion have given Louisiana the nickname "Hollywood South". Because of its distinctive culture within the United States, only Alaska rivals Louisiana in popularity as a setting for reality television programs. In late 2007 and early 2008, a film studio was scheduled to open in Tremé, with state-of-the-art production facilities and a film training institute.
Tabasco sauce, which is marketed by one of the United States' biggest producers of hot sauce, the McIlhenny Company, originated on Avery Island.
Louisiana has three personal income tax brackets, ranging from 2% to 6%. The sales tax rate is 4%: a 3.97% Louisiana sales tax and a .03% Louisiana Tourism Promotion District sales tax. Political subdivisions also levy their own sales tax in addition to the state fees. The state also has a use tax, which includes 4% to be distributed by the Department of Revenue to local governments. Property taxes are assessed and collected at the local level. Louisiana is a subsidized state, receiving $1.44 from the federal government for every dollar paid in.
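As a sketch of how the stacked rates described above combine on a purchase (the 5% local rate in the example is purely hypothetical; actual local rates vary by subdivision):

```python
# Combined Louisiana sales tax: the state-level rate (3.97% sales tax
# plus 0.03% Tourism Promotion District tax) stacked with a local rate.
STATE_RATE = 0.0397 + 0.0003  # = 4% total state-level rate

def total_sales_tax(price: float, local_rate: float) -> float:
    """Tax due on `price`, given a hypothetical local subdivision rate."""
    return round(price * (STATE_RATE + local_rate), 2)

# A $100 purchase with a hypothetical 5% local sales tax:
print(total_sales_tax(100.00, 0.05))  # 9.0 (i.e., a 9% combined rate)
```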
Tourism and culture are major players in Louisiana's economy, earning an estimated $5.2 billion per year. Louisiana also hosts many important cultural events, such as the World Cultural Economic Forum, which is held annually in the fall at the New Orleans Morial Convention Center.
As of July 2017, the state's unemployment rate was 5.3%.
Louisiana taxpayers receive more federal funding per dollar of federal taxes paid compared to the average state. Per dollar of federal tax collected in 2005, Louisiana citizens received approximately $1.78 in the way of federal spending. This ranks the state fourth highest nationally and represents a rise from 1995 when Louisiana received $1.35 per dollar of taxes in federal spending (ranked seventh nationally). Neighboring states and the amount of federal spending received per dollar of federal tax collected were: Texas ($0.94), Arkansas ($1.41), and Mississippi ($2.02). Federal spending in 2005 and subsequent years since has been exceptionally high due to the recovery from Hurricane Katrina.
(Source: Tax Foundation.)
Louisiana is rich in petroleum and natural gas. Petroleum and gas deposits are found in abundance both onshore and offshore in State-owned waters. In addition, vast petroleum and natural gas reserves are found offshore from Louisiana in the federally administered Outer Continental Shelf (OCS) in the Gulf of Mexico. According to the Energy Information Administration, the Gulf of Mexico OCS is the largest U.S. petroleum-producing region. Excluding the Gulf of Mexico OCS, Louisiana ranks fourth in petroleum production and is home to about two percent of the total U.S. petroleum reserves.
Louisiana's natural gas reserves account for about five percent of the U.S. total. The recent discovery of the Haynesville Shale formation, underlying all or parts of Caddo, Bossier, Bienville, Sabine, De Soto, Red River, and Natchitoches parishes, has made it the world's fourth largest gas field, with some wells initially producing over 25 million cubic feet of gas daily.
Louisiana was the first site of petroleum drilling over water in the world, on Caddo Lake in the northwest corner of the state. The petroleum and gas industry, as well as its subsidiary industries such as transport and refining, have dominated Louisiana's economy since the 1940s. Beginning in 1950, Louisiana was sued several times by the U.S. Interior Department, in efforts by the federal government to strip Louisiana of its submerged land property rights. These lands hold vast reservoirs of petroleum and natural gas.
When petroleum and gas boomed in the 1970s, so did Louisiana's economy. The Louisiana economy, as well as its politics of the last half-century, cannot be understood without thoroughly accounting for the influence of the petroleum and gas industries. Since the 1980s, these industries' headquarters have consolidated in Houston, but many of the jobs that operate or provide logistical support to the U.S. Gulf of Mexico crude-oil-and-gas industry remained in Louisiana.
In 1849, the state moved the capital from New Orleans to Baton Rouge. Donaldsonville, Opelousas, and Shreveport have briefly served as the seat of Louisiana state government. The Louisiana State Capitol and the Louisiana Governor's Mansion are both located in Baton Rouge. The Louisiana Supreme Court, however, did not move to Baton Rouge but remains headquartered in New Orleans.
The current Louisiana governor is Democrat John Bel Edwards. The current United States senators are Republicans John Neely Kennedy and Bill Cassidy. Louisiana has six congressional districts and is represented in the U.S. House of Representatives by five Republicans and one Democrat. Louisiana had eight votes in the Electoral College for the 2012 election. It lost one House seat due to stagnant population growth in the 2010 Census.
Louisiana is divided into 64 parishes (the equivalent of counties in most other states).
Most parishes have an elected government known as the Police Jury, dating from the colonial days. It is the legislative and executive government of the parish, and is elected by the voters. Its members are called Jurors, and together they elect a president as their chairman.
A more limited number of parishes operate under home rule charters, adopting various forms of government. These include mayor–council, council–manager (in which the council hires a professional operating manager for the parish), and others.
The Louisiana political and legal structure has maintained several elements from the times of French and Spanish governance. One is the use of the term "parish" (from the French: "paroisse") in place of "county" for administrative subdivision. Another is the legal system of civil law based on French, German, and Spanish legal codes and ultimately Roman law, as opposed to English common law.
Louisiana's civil law system is what the majority of nations in the world use, especially in Europe and its former colonies, excluding those that derive from the British Empire. However, it is incorrect to equate the Louisiana Civil Code with the Napoleonic Code. Although the Napoleonic Code and Louisiana law draw from common legal roots, the Napoleonic Code was never in force in Louisiana, as it was enacted in 1804, after the United States had purchased and annexed Louisiana in 1803.
While the Louisiana Civil Code of 1808 has been continuously revised and updated since its enactment, it is still considered the controlling authority in the state. Differences are found between Louisianan civil law and the common law found in the other U.S. states. While some of these differences have been bridged due to the strong influence of common law tradition, the civil law tradition is still deeply rooted in most aspects of Louisiana private law. Thus property, contractual, business entities structure, much of civil procedure, and family law, as well as some aspects of criminal law, are still based mostly on traditional Roman legal thinking.
In 1997, Louisiana became the first state to offer the option of a traditional marriage or a covenant marriage. In a covenant marriage, the couple waives their right to a "no-fault" divorce after six months of separation, which is available in a traditional marriage. To divorce under a covenant marriage, a couple must demonstrate cause. Marriages between ascendants and descendants, and marriages between collaterals within the fourth degree (i.e., siblings, aunt and nephew, uncle and niece, first cousins) are prohibited. Same-sex marriages were prohibited by statute, but the Supreme Court declared such bans unconstitutional in 2015, in its ruling in "Obergefell v. Hodges". Same-sex marriages are now performed statewide. Louisiana is a community property state.
From 1898 to 1965, a period when Louisiana had effectively disfranchised most African Americans and many poor whites by provisions of a new constitution, this was essentially a one-party state dominated by white Democrats. Elites had control in the early 20th century, before populist Huey Long came to power as governor. In multiple acts of resistance, blacks left behind the segregation, violence and oppression of the state and moved out to seek better opportunities in northern and western industrial cities during the Great Migrations of 1910–1970, markedly reducing their proportion of population in Louisiana. The franchise for whites was expanded somewhat during these decades, but blacks remained essentially disfranchised until after the civil rights movement of the mid-20th century, gaining enforcement of their constitutional rights through passage by Congress of the Voting Rights Act of 1965.
Since the 1960s, when civil rights legislation was passed under President Lyndon Johnson to protect voting and civil rights, most African Americans in the state have affiliated with the Democratic Party. In the same years, many white social conservatives have moved to support Republican Party candidates in national, gubernatorial and statewide elections. In 2004, David Vitter was the first Republican in Louisiana to be popularly elected as a U.S. senator. The previous Republican senator, John S. Harris, who took office in 1868 during Reconstruction, was chosen by the state legislature under the rules of the 19th century.
Louisiana is unique among U.S. states in using a system for its state and local elections similar to that of modern France. All candidates, regardless of party affiliation, run in a nonpartisan blanket primary (or "jungle primary") on Election Day. If no candidate has more than 50% of the vote, the two candidates with the highest vote totals compete in a runoff election approximately one month later. This run-off method does not take into account party identification; therefore, it is not uncommon for a Democrat to be in a runoff with a fellow Democrat or a Republican to be in a runoff with a fellow Republican.
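The runoff rule described above can be sketched as follows; the candidate names and vote counts are invented for illustration:

```python
def jungle_primary(votes):
    """All candidates run together; party is irrelevant to the rule.

    votes: dict mapping candidate -> vote count.
    Returns ('elected', winner) on an outright majority,
    otherwise ('runoff', first, second) for the top two vote-getters.
    """
    total = sum(votes.values())
    ranked = sorted(votes, key=votes.get, reverse=True)
    if votes[ranked[0]] * 2 > total:          # strictly more than 50%
        return ("elected", ranked[0])
    return ("runoff", ranked[0], ranked[1])   # may pair two same-party candidates

# Hypothetical counts: no one clears 50%, so two Democrats meet in the runoff.
print(jungle_primary({"Dem A": 45_000, "Dem B": 35_000, "Rep C": 30_000}))
# → ('runoff', 'Dem A', 'Dem B')
```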
Congressional races have also been held under the jungle primary system. All other states (except Washington, California, and Maine) use single-party primaries followed by a general election between party candidates, each conducted by either a plurality voting system or runoff voting, to elect senators, representatives, and statewide officials. Between 2008 and 2010, federal congressional elections were run under a closed primary system—limited to registered party members. However, upon the passage of House Bill 292, Louisiana again adopted a nonpartisan blanket primary for its federal congressional elections.
Louisiana has six seats in the U.S. House of Representatives, five of which are currently held by Republicans and one by a Democrat. The state lost a House seat at the end of the 112th Congress due to stagnant population growth as recorded by the 2010 United States Census. Louisiana is not classified as a "swing state" for future presidential elections, as since the late 20th century it has regularly supported Republican candidates. The state's two U.S. senators are Bill Cassidy (R) and John Neely Kennedy (R).
Louisiana's statewide police force is the Louisiana State Police. It began in 1922 with the creation of the Highway Commission. In 1927, a second branch, the Bureau of Criminal Investigations, was formed. In 1932, the State Highway Patrol was authorized to carry weapons.
On July 28, 1936, the two branches were consolidated to form the Louisiana Department of State Police; its motto was "courtesy, loyalty, service". In 1942, this office was abolished and became a division of the Department of Public Safety, called the Louisiana State Police. In 1988, the Criminal Investigation Bureau was reorganized. Its troopers have statewide jurisdiction with power to enforce all laws of the state, including city and parish ordinances. Each year, they patrol over 12 million miles (20 million km) of roadway and arrest about 10,000 impaired drivers. The State Police are primarily a traffic enforcement agency, with other sections that delve into trucking safety, narcotics enforcement, and gaming oversight.
The elected sheriff in each parish is the chief law enforcement officer in the parish. Sheriffs are the keepers of the local parish prisons, which house felony and misdemeanor prisoners; the primary criminal patrol and first-responder agency in all matters criminal and civil; and the official tax collectors in each parish. The sheriffs are responsible for general law enforcement in their respective parishes. Orleans Parish is an exception, as general law enforcement duties fall to the New Orleans Police Department. Before 2010, Orleans Parish was the only parish to have two sheriff's offices: it divided sheriffs' duties between criminal and civil matters, with a different elected sheriff overseeing each aspect. In 2006, a bill was passed which eventually consolidated the two offices under one parish sheriff responsible for both civil and criminal matters.
In 2015, Louisiana had a higher murder rate (10.3 per 100,000) than any other state in the country for the 27th straight year. Louisiana is the only state with an annual average murder rate (13.6 per 100,000) at least twice as high as the U.S. annual average (6.6 per 100,000) during that period, according to Bureau of Justice Statistics from FBI Uniform Crime Reports. In a different kind of criminal activity, the "Chicago Tribune" reports that Louisiana is the most corrupt state in the United States.
According to the "Times Picayune", Louisiana is the prison capital of the world. Many for-profit private prisons and sheriff-owned prisons have been built and operate here. Louisiana's incarceration rate is nearly five times Iran's, 13 times China's and 20 times Germany's. Minorities are incarcerated at rates disproportionate to their share of the state's population.
The New Orleans Police Department began a new sanctuary policy to "no longer cooperate with federal immigration enforcement" beginning on February 28, 2016.
The judiciary of Louisiana is defined under the Constitution and law of Louisiana and is composed of the Louisiana Supreme Court, the Louisiana Circuit Courts of Appeal, the District Courts, the Justice of the Peace Courts, the Mayor's Courts, the City Courts, and the Parish Courts. The chief justice of the Louisiana Supreme Court is the chief administrator of the judiciary. Its administration is aided by the Judiciary Commission of Louisiana, the Louisiana Attorney Disciplinary Board, and the Judicial Council of the Supreme Court of Louisiana.
Louisiana has more than 9,000 soldiers in the Louisiana Army National Guard, including the 225th Engineer Brigade and the 256th Infantry Brigade. Both these units have served overseas during the War on Terror. The Louisiana Air National Guard has more than 2,000 airmen, and its 159th Fighter Wing has likewise seen combat.
Training sites in the state include Camp Beauregard near Pineville, Camp Villere near Slidell, Camp Minden near Minden, England Air Park (formerly England Air Force Base) near Alexandria, Gillis Long Center near Carville, and Jackson Barracks in New Orleans.
Louisiana is home to several notable public and private colleges and universities, which include Louisiana State University in Baton Rouge and Tulane University in New Orleans. Louisiana State University is the largest and most comprehensive university in Louisiana. Tulane University is a major private research university and the wealthiest university in Louisiana with an endowment over $1.1 billion. Tulane is also highly regarded for its academics nationwide, ranked fortieth on "U.S. News & World Report's" 2018 list of best national universities.
Louisiana's two oldest and largest HBCUs (Historically black colleges and universities) are Southern University in Baton Rouge and Grambling State University in Grambling. Both these Southwestern Athletic Conference (SWAC) schools compete against each other in football annually in the much anticipated Bayou Classic during Thanksgiving weekend in the Mercedes-Benz Superdome.
The Louisiana Science Education Act is a controversial law passed by the Louisiana Legislature on June 11, 2008, and signed into law by Governor Bobby Jindal on June 25. The act allows public school teachers to use supplemental materials in the science classroom which are critical of established science on such topics as the theory of evolution and global warming.
In 2000, of all of the states, Louisiana had the highest percentage of students in private schools. Danielle Dreilinger of "The Times Picayune" wrote in 2014 that "Louisiana parents have a national reputation for favoring private schools." The number of students enrolled in private schools in Louisiana declined by 9% between circa 2000–2005 and 2014, due to the proliferation of charter schools, the 2008 recession, and Hurricane Katrina. Ten parishes in the Baton Rouge and New Orleans area had a combined 17% decline in private school enrollment in that period. This prompted private schools to lobby for school vouchers.
Louisiana's school voucher program is known as the Louisiana Scholarship Program. It was available in the New Orleans area beginning in 2008 and in the rest of the state beginning in 2012. In 2013 the number of students using school vouchers to attend private schools was 6,751, and for 2014 it was projected to be over 8,800. As per a ruling from Ivan Lemelle, a U.S. district judge, the federal government has the right to review charter school placements to ensure they do not further racial segregation.
Louisiana is nominally the least populous state with more than one major professional sports league franchise: the National Basketball Association's New Orleans Pelicans and the National Football League's New Orleans Saints.
Louisiana has 12 collegiate NCAA Division I programs, a high number given its population. The state has no NCAA Division II teams and only two NCAA Division III teams. The LSU Tigers football team has won 11 Southeastern Conference titles, six Sugar Bowls and four national championships.
Each year New Orleans plays host to the Sugar Bowl, the Bayou Classic, and the New Orleans Bowl college football games, and Shreveport hosts the Independence Bowl. Also, New Orleans has hosted the Super Bowl a record seven times, as well as the BCS National Championship Game, NBA All-Star Game and NCAA Men's Division I Basketball Championship.
The Zurich Classic of New Orleans is a PGA Tour golf tournament held since 1938. The Rock 'n' Roll Mardi Gras Marathon and Crescent City Classic are two road running competitions held in New Orleans.
As of 2016, Louisiana was the birthplace of the most NFL players per capita for the eighth year in a row.
Louisiana is home to many cultures; especially notable is the distinct culture of the Louisiana Creoles, typically people of color descended from free mixed-race families of the colonial and early statehood periods.
The French colony of "La Louisiane" struggled for decades to survive. Conditions were harsh, the climate and soil were unsuitable for certain crops the colonists knew, and they suffered from regional tropical diseases. Both colonists and the slaves they imported had high mortality rates. The settlers kept importing slaves, which resulted in a high proportion of native Africans from West Africa, who continued to practice their culture in new surroundings. As described by historian Gwendolyn Midlo Hall, they developed a marked Afro-Creole culture in the colonial era.
At the turn of the 18th century and in the early 1800s, New Orleans received a major influx of white and mixed-race refugees fleeing the violence of the Haitian Revolution, many of whom brought their slaves with them. This added another infusion of African culture to the city, as a higher proportion of slaves in Saint-Domingue had come directly from Africa than in the United States. They strongly influenced the African-American culture of the city in terms of dance, music and religious practices.
Creole culture is an amalgamation of French, African, Spanish (and other European), and Native American cultures. Creole comes from the Portuguese word "crioulo"; originally it referred to a colonist of European (specifically French) descent who was born in the New World, in comparison to immigrants from France. The oldest Louisiana manuscript to use the word "Creole", from 1782, applied it to a slave born in the French colony. But originally it referred more generally to the French colonists born in Louisiana.
Over time, there developed in the French colony a relatively large group of Creoles of Color ("gens de couleur libres"), who were primarily descended from African slave women and French men (later other Europeans became part of the mix, as well as some Native Americans.) Often the French would free their concubines and mixed-race children, and pass on social capital to them. They might educate sons in France, for instance, and help them enter the French Army for a career. They also settled capital or property on their mistresses and children. The free people of color gained more rights in the colony and sometimes education; they generally spoke French and were Roman Catholic. Many became artisans and property owners. Over time, the term "Creole" became associated with this class of Creoles of Color, many of whom achieved freedom long before the Civil War.
Wealthy French Creoles generally maintained town houses in New Orleans as well as houses on their large sugar plantations outside town along the Mississippi River. New Orleans had the largest population of free people of color in the region; they could find work there and created their own culture, marrying among themselves for decades.
The ancestors of Cajuns immigrated mostly from west central France to New France, where they settled in the Atlantic provinces of New Brunswick, Nova Scotia and Prince Edward Island, known originally as Acadia. After the British defeated France in the French and Indian War (Seven Years' War) in 1763, France ceded its territory east of the Mississippi River to Britain. The British forcibly separated families and evicted them from Acadia because they refused to vow loyalty to the new British regime. The Acadians were deported to England, New England, and France. Some who escaped the British remained in French Canada.
Others scattered, to France, Canada, Mexico, or the Falkland Islands. Many Acadian refugees settled in south Louisiana in the region around Lafayette and the Bayou Lafourche country. They developed a distinct rural culture there, different from the French Creole colonists of New Orleans. Intermarrying with others in the area, they developed what was called Cajun music, cuisine and culture. Until the 1970s, the term "Cajun" was considered somewhat derogatory.
A third distinct culture in Louisiana is that of the Isleños. Its members are descendants of colonists from the Canary Islands who settled in Spanish Louisiana between 1778 and 1783 and intermarried with other communities such as the French, Acadians, Creoles, Spaniards, and other groups, mainly through the 19th and early 20th centuries.
In Louisiana, the Isleños originally settled in four communities: Galveztown, Valenzuela, Barataria, and San Bernardo. Of these, Valenzuela and San Bernardo were the most successful, as the other two were plagued by both disease and flooding. The large migration of Acadian refugees to Bayou Lafourche led to the rapid gallicization of the Valenzuela community, while the community of San Bernardo (Saint Bernard) was able to preserve much of its unique culture and language into the 21st century. Even so, the transmission of Spanish and other customs has now halted in St. Bernard; those still competent in Spanish are octogenarians.
Through the centuries, the various Isleño communities of Louisiana have kept alive different elements of their Canary Islander heritage while also adopting and building upon the customs and traditions of the communities that surround them. Today two heritage associations exist for the communities: Los Isleños Heritage and Cultural Society of St. Bernard and the Canary Islanders Heritage Society of Louisiana. The Fiesta de los Isleños is celebrated annually in St. Bernard Parish, featuring heritage performances from local groups and the Canary Islands.
According to a 2010 study by the Modern Language Association, among persons five years old and older, 91.26% of Louisiana residents speak only English at home, 3.45% speak French (standard French, French Creole, or Cajun French), 3.30% speak Spanish, and 0.59% speak Vietnamese.
Historically, Native American peoples in the area at the time of European encounter were seven tribes distinguished by their languages: Caddo, Tunica, Natchez, Houma, Choctaw, Atakapa, and Chitimacha. Of these, only Tunica, Caddo and Choctaw still have living native speakers, although several other tribes are working to teach and revitalize their languages. Other Native American peoples migrated into the region, escaping from European pressure from the east. Among these were Alabama, Biloxi, Koasati, and Ofo peoples.
Starting in the 1700s, French colonists began to settle along the coast and founded New Orleans. They established French culture and language institutions. They imported thousands of slaves from tribes of West Africa, who spoke several different languages. In the creolization process, the slaves developed a Louisiana Creole dialect incorporating both French and African forms, which colonists adopted to communicate with them, and which persisted beyond slavery. In the 20th century, there were still people, particularly of mixed race, who spoke Louisiana Creole French.
During the 19th century after the Louisiana Purchase by the United States, English gradually gained prominence for business and government due to the shift in population with settlement by numerous Americans who were English speakers. Many ethnic French families continued to use French in private. Slaves and some free people of color also spoke Louisiana Creole French. The State Constitution of 1812 gave English official status in legal proceedings, but use of French remained widespread. Subsequent state constitutions reflect the diminishing importance of French. The 1868 constitution, passed during the Reconstruction era before Louisiana was re-admitted to the Union, banned laws requiring the publication of legal proceedings in languages other than English. Subsequently, the legal status of French recovered somewhat, but it never regained its pre-Civil War prominence.
Several unique dialects of French, Creole, and English are spoken in Louisiana. Dialects of the French language are: Colonial French and Houma French. Louisiana Creole French is the term for one of the Creole languages. Two unique dialects of the English language developed: Louisiana English, a French-influenced variety of English in which dropping of postvocalic /r/ is common; and what is informally known as Yat, which resembles the New York City dialect, particularly that of historical Brooklyn, sometimes with southern influences. Both accents were influenced by large communities of immigrant Irish and Italians, but the Yat dialect, which developed in New Orleans, was also influenced by French and Spanish.
Colonial French was the dominant language of white settlers in Louisiana during the French colonial period; it was spoken primarily by the French Creoles (native-born). In addition to this dialect, the mixed-race people and slaves developed Louisiana Creole, with a base in West African languages. The limited years of Spanish rule at the end of the 18th century did not result in widespread adoption of the Spanish language. French and Louisiana Creole are still used in modern-day Louisiana, often in family gatherings. English and its associated dialects became predominant after the Louisiana Purchase of 1803, after which the area became dominated by numerous English speakers. In some regions, English was influenced by French, as seen with Louisiana English. Colonial French, although mistakenly named Cajun French by some Cajuns, has persisted alongside English.
Renewed interest in the French language in Louisiana has led to the establishment of Canadian-modeled French immersion schools, as well as bilingual signage in the historic French neighborhoods of New Orleans and Lafayette. In addition to private organizations, since 1968 the state has maintained the Council for the Development of French in Louisiana (CODOFIL), which promotes use of the French language in the state's tourism, economic development, culture, education and international relations. Through that office's efforts, in 2018 the state became the first in the nation to join the Organisation internationale de la Francophonie as an observer.
Los Angeles International Airport
Los Angeles International Airport , commonly referred to as LAX (with each of its letters pronounced individually), is the primary international airport serving Los Angeles and its surrounding metropolitan area.
LAX is located in the Westchester neighborhood of Los Angeles, southwest of Downtown Los Angeles, with the commercial and residential areas of Westchester to the north, the city of El Segundo to the south and the city of Inglewood to the east. LAX is the closest airport to the Westside and the South Bay.
Owned and operated by Los Angeles World Airports (LAWA), an agency of the government of Los Angeles, formerly known as the Department of Airports, the airport covers of land. LAX has four parallel runways.
In 2019, LAX handled 88,068,013 passengers, making it the world's third busiest and the United States' second busiest airport, following Hartsfield–Jackson Atlanta International Airport. As the largest and busiest international airport on the U.S. West Coast, LAX is a major international gateway to the United States, and also serves as a connection point for passengers traveling internationally. The airport holds the record for the world's busiest origin and destination airport, since relative to other airports, many more travelers begin or end their trips in Los Angeles than use it as a connection. It is also the only airport to rank among the top five U.S. airports for both passenger and cargo traffic.
LAX serves as a major hub or focus city for more passenger airlines than any other airport in the United States. It is the only airport that four U.S. legacy carriers (Alaska, American, Delta and United) have designated as a hub and is a focus city for Air New Zealand, Allegiant Air, Norwegian Air Shuttle, Qantas, Southwest Airlines, and Volaris. While LAX is the busiest airport in the Greater Los Angeles Area, several other airports, including Hollywood Burbank Airport, John Wayne Airport, Long Beach Airport, as well as Ontario International Airport, also serve the area.
In 1928, the Los Angeles City Council selected in the southern part of Westchester for a new airport. The fields of wheat, barley and lima beans were converted into dirt landing strips without any terminal buildings. It was named Mines Field for William W. Mines, the real estate agent who arranged the deal. The first structure, Hangar No. 1, was erected in 1929 and is in the National Register of Historic Places.
Mines Field opened as the airport of Los Angeles in 1930 and the city purchased it to be a municipal airfield in 1937. The name became Los Angeles Airport in 1941 and Los Angeles International Airport in 1949. In the 1930s the main airline airports were Burbank Airport (then known as Union Air Terminal, and later Lockheed) in Burbank and the Grand Central Airport in Glendale. (In 1940 the airlines were all at Burbank except for Mexicana's three departures a week from Glendale; in late 1946 most airline flights moved to LAX, but Burbank always retained a few.)
Mines Field did not extend west of Sepulveda Boulevard; Sepulveda was rerouted circa 1950 to loop around the west ends of the extended east–west runways (now runways 25L and 25R), which by November 1950 were long. A tunnel was completed in 1953 allowing Sepulveda Boulevard to revert to straight and pass beneath the two runways; it was the first tunnel of its kind. For the next few years the two runways were long.
Before the 1930s, existing airports used a two-letter abbreviation based on the weather stations at the airports. At that time, "LA" served as the designation for Los Angeles Airport. But with the rapid growth in the aviation industry the designations expanded to three letters in 1947, and "LA" became "LAX." "LAX" is also used for the Port of Los Angeles in San Pedro and by Amtrak for Union Station in downtown Los Angeles.
The distinctive white Googie Theme Building, designed by Pereira & Luckman architect Paul Williams and constructed in 1961 by Robert E. McKee Construction Co., resembles a flying saucer that has landed on its four legs. A restaurant with a sweeping view of the airport is suspended beneath two arches that form the legs. The Los Angeles City Council designated the building a Los Angeles Historic-Cultural Monument in 1992. A $4 million renovation, with retro-futuristic interior and electric lighting designed by Walt Disney Imagineering, was completed before the Encounter Restaurant opened there in 1997; the restaurant has since closed. Visitors could formerly take the elevator up to the Observation Deck of the "Theme Building", which closed after the September 11, 2001 attacks for security reasons. A memorial to the victims of the 9/11 attacks is located on the grounds, as three of the four hijacked planes were originally destined for LAX. The Bob Hope USO expanded and relocated to the first floor of the Theme Building in 2018.
24R/06L and 24L/06R (designated the North Airfield Complex) are north of the airport terminals, and 25R/07L and 25L/07R (designated the South Airfield Complex) are south of the airport terminals.
Since 1972, Los Angeles World Airports has adopted the "Preferential Runway Use Policy" to minimize noise. During daylight hours (0630 to 0000), the normal air traffic pattern is the "Westerly Operations" plan, named for the prevailing west winds. Under "Westerly Operations", departing aircraft take off to the west, and arriving aircraft approach from the east. To reduce noise from arriving aircraft during night hours (0000 to 0630), the air traffic pattern becomes "Over-Ocean Operations". Under "Over-Ocean", departing aircraft continue to take off to the west, but arriving aircraft approach from the west unless otherwise required to approach from the east due to reduced visibility or easterly winds. As the name implies, "Easterly Operations" is used when prevailing winds have shifted to originate from the east, typically during inclement weather and Santa Ana conditions. Under "Easterly Operations", departing aircraft take off to the east, and arriving aircraft approach from the west.
The "inboard" runways (06R/24L and 07L/25R, closest to the central terminal area) are preferred for departures, and the "outboard" runways are preferred for arrivals. During noise-sensitive hours (2200 to 0700) and "Over-Ocean Operations", the "inboard" runways are used preferentially, with arrivals shifting primarily to 06R/24L and departures from 07L/25R. Historically, over 90% of flights have used the "inboard" departures and "outboard" arrivals scheme.
During westbound operations during the daytime, airplanes parked on the north complex tend to use Runway 6R/24L for almost all departures, while airplanes parked on the south complex use Runway 7L/25R for departures requiring a left turn, and Runway 24L if they are making an immediate right turn. For arrivals, flights coming from the north tend to use Runway 6L/24R, and flights coming from the south tend to use Runway 7R/25L. For flights on a long westbound final, the runway used depends on traffic.
The South Airfield Complex tends to see more operations than the North, due to a larger number of passenger gates and air cargo operations. Runways in the North Airfield Complex are separated by . Plans have been advanced and approved to increase the separation by , which would allow a central taxiway between runways, despite opposition from residents living north of LAX. The separation between the two runways in the South Airfield Complex has already increased by to accommodate a central taxiway.
During westbound operations during the daytime, airplanes taking off to the west with an eastbound destination will generally depart the south runways and make a left turn over the Palos Verdes Peninsula, due to terrain and airspace conflicts with the nearby Santa Monica Airport and Burbank Airport. Meanwhile, northbound flights primarily depart the north runways, climbing over the Santa Monica Bay. Westbound flights may depart either complex, as air traffic demands dictate.
LAX has nine passenger terminals, with a total of 132 gates, arranged in the shape of the letter U (or a horseshoe); all are identified by numbers except the Tom Bradley International Terminal. The Midfield Satellite Concourse North, an expansion for international flights reached through the Tom Bradley Terminal, is scheduled to open by the summer of 2020. There are of cargo facilities at LAX, as well as a heliport operated by Bravo Aviation.
LAWA currently has several plans to modernize LAX. These include terminal and runway improvements, which will enhance the passenger experience, reduce overcrowding, and provide airport access to the latest class of very large passenger aircraft.
These improvements include:
A 24-hour automated people mover is under construction. This small train will include three stations in the central terminal area and three outside, east of the terminals at a new intermodal transportation hub, connecting passengers between the central terminal area and the Metro Green Line, the future Metro Crenshaw/LAX Line, regional and local bus lines, and a consolidated car rental facility.
It is the world's fourth-busiest airport by passenger traffic and eleventh-busiest by cargo traffic, serving over 87 million passengers and 2 million tons of freight and mail in 2018. It is the busiest airport in the state of California, and the second-busiest airport by passenger boardings in the United States. In terms of international passengers, it is the second busiest airport for international traffic in the United States, behind only JFK in New York City.
The number of aircraft movements (landings and takeoffs) was 700,362 in 2017, the third most of any airport in the world.
Shuttles operate to and from the terminals, providing frequent service for connecting passengers; however, passengers who use these shuttles must leave and later reenter security. Tunnels connect terminals 4, 5, 6, 7, and 8, and an above-ground connector between and terminal 4 opened in February 2016; passengers using these connections generally do not have to leave and reenter through security checkpoints.
The closest bus stops to the terminals are the pair of opposites on Sepulveda Boulevard and Century Boulevard, served by Metro 117, Torrance 8, Metro 232, Commuter Express 574, Metro 102 to USC and the Metro Expo line, and Metro 40 to Los Angeles Union Station (owl service only).
In addition, many routes (local, rapid and express) of several bus systems stop at the LAX Transit Center in Parking Lot C on 96th St., where shuttle bus "C" offers free connections to and from every LAX terminal: LACMTA Metro 232 to Long Beach, Line 8 of Torrance Transit, Line 109 of Beach Cities Transit, the Santa Monica Big Blue Bus system's Line 3 and Rapid 3 via Lincoln Boulevard to Santa Monica, the Culver CityBus's Line 6 and Rapid 6 via Sepulveda Blvd to Culver City and UCLA, and LADOT Commuter Express 438 to Downtown LA (weekday AM rush hours) and Commuter Express 439 to Downtown LA (weekday PM rush hours). Others stop at the Green Line, where shuttle bus "G" connects to and from the terminals.
The Taiwanese airline China Airlines operates a bus service from LAX to Monterey Park and Rowland Heights. This service is only available for China Airlines customers.
The FlyAway Bus is a nonstop motorcoach/shuttle service run by LAWA, which provides scheduled service between LAX and Downtown Los Angeles (Union Station), the San Fernando Valley (Van Nuys), Hollywood, and Long Beach. The shuttle service stops at every LAX terminal. The service hours vary based on the line. All lines use the regional system of High Occupancy Vehicle lanes to expedite their trips. The Los Angeles Union Station service and a late-night branch of Metro Local route 40 are the only direct transit links between the airport and Downtown Los Angeles.
Discontinued routes for the FlyAway include West Los Angeles (Westwood), Santa Monica, and Irvine.
Shuttle bus "G" offers a free connection to and from the Aviation/LAX station on the Los Angeles Metro Rail Green Line.
The LAX automated people mover (APM) is an electric train system currently under construction by LAWA. The LAX APM will be in traveling distance and will have six stations serving the central area, terminals 1–8, and the Tom Bradley International Terminal.
Heading east from the three terminal stations, the first stop is a ground transportation parking structure, the "Intermodal Transportation Facility-West", serving employee parking, surrounding hotels, and long-term airport parking. The next station is a second car/bus/bike transport facility, the "Intermodal Transportation Facility-East", adjacent to LA Metro Rail's platform at the under-construction infill transfer station on the Crenshaw/LAX Metro Line. At this multi-station stop, the first (ground) level will be ground transportation; the second level will be a bridge from the main hub to the light rail platform and APM platform; and the third level will be the APM platform. The last stop on the APM will be a rental car hub station, the Consolidated Rent-A-Car Center (CONRAC), where all the car rental companies will be located. The APM was designed to decrease the need for shuttle bus services and reduce traffic on the terminals' World Way loop.
The APM will have nine trains in total, each operating as a four-car set with capacity for up to 200 passengers. The APM will operate every two minutes, with a ten-minute end-to-end travel time.
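Taken together, those figures imply a rough hourly capacity per direction. The arithmetic below is an illustrative sketch using only the numbers quoted above (variable names are mine); real-world throughput would also depend on dwell times and loading.

```python
# Rough APM capacity estimate from the figures in the article.
train_capacity = 200                     # passengers per four-car train set
headway_minutes = 2                      # one departure every two minutes

trains_per_hour = 60 // headway_minutes  # 30 departures per hour
passengers_per_hour = trains_per_hour * train_capacity

print(trains_per_hour)       # 30
print(passengers_per_hour)   # 6000 passengers per hour, per direction, at most
```

At a two-minute headway, the quoted capacity works out to at most 6,000 passengers per hour in each direction.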
Los Angeles bid in 2016 for the 2024 Summer Olympics and was one of two finalist cities. With demand to host the Olympics decreasing, the IOC awarded games to both finalists, giving Paris the 2024 games and Los Angeles the 2028 Summer Olympics. The project's original completion date of 2023, set before the award with the 2024 games in mind, was retained, leaving it finished well ahead of the 2028 games. LAWA split the project into three phases. After the project was approved, the construction and operating bidding process commenced; three firms submitted bids, and LAWA announced that scoring would be based on "technical merit, visual appeal, user experience and price". LAWA proposed a public-private partnership wherein a private sector partner would be responsible for the construction and operation of the people mover. On April 11, 2018, the Los Angeles City Council gave final approval to "LAX Integrated Express Solutions", a joint bid that included manufacturer Bombardier Transportation, at $4.895 billion over 30 years to build and operate the system. The three-phase project is estimated to cost $5.5 billion, with a completion date of 2023.
Dallas-based building firm Austin Commercial was awarded a five-year contract to commence construction in the first quarter of 2018 on phase one of the APM project, which consists of bridges connecting passengers between the three proposed APM stations inside World Way and the terminals. The bridges will also house restrooms, airport lounges, offices and other spaces. Phase one is expected to finish by 2021, followed by phases two and three, which consist of the people mover and off-site buildings. In January 2018, a consortium led by Hochtief and Bombardier Transportation was selected as the preferred developer to be awarded the $1.95 billion design/build/operate contract.
In 2018, 2,100 parking spaces in Lot C were removed to reconfigure the area for phase two, the parking structures. Utility relocation started in the second quarter of 2018. Construction of the guideway started in spring 2019 and will take up to three years to complete; groundbreaking was held in March 2019. The "Intermodal Transportation Facility-West" began construction in the summer of 2019, and the Consolidated Car Rental Facility (CONRAC) broke ground in September 2019.
LAX's terminals are immediately west of the interchange between Century Boulevard and Sepulveda Boulevard (State Route 1). Interstate 405 can be reached to the east via Century Boulevard. Interstate 105 is to the south via Sepulveda Boulevard, through the Airport Tunnel that crosses under the airport runways.
Arriving passengers take a shuttle or walk to the LAXit waiting area near Terminal 1 for taxi or ride-share pickups. Taxicab services are operated by nine city-authorized taxi companies and regulated by Authorized Taxicab Supervision Inc. (ATS). ATS queues up taxis at the LAXit waiting area.
A number of private shuttle companies also offer limousine and bus services to LAX Airport.
The airport also functioned as a joint civil-military facility, providing a base for the United States Coast Guard and its Coast Guard Air Station Los Angeles facility, operating four HH-65 Dolphin helicopters, which covered Coast Guard operations in various Southern California locations, including Catalina Island. Missions included search and rescue (SAR), law enforcement, aids to navigation support (such as operating lighthouses) and various military operations. In addition, Coast Guard helicopters assigned to the air station deployed to Coast Guard cutters.
The air station relocated by May 18, 2016 from LAX to accommodate the planned improvements for LAX's midfield, including the Midfield Satellite Concourse North (MSC North) terminal. The air station moved to U.S. Navy's Naval Air Station Point Mugu, part of the Naval Base Ventura County (NBVC) in Point Mugu.
The Flight Path Learning Center is a museum located at 6661 Imperial Highway and was formerly known as the "West Imperial Terminal". This building used to house some charter flights (e.g. Condor Airlines, Martinair Holland, World Airways) and regular scheduled flights by MGM Grand Air. It sat empty for 10 years until it was re-opened as a learning center for LAX.
The center contains information on the history of aviation, several pictures of the airport, as well as aircraft scale models, flight attendant uniforms, and general airline memorabilia such as playing cards, china, magazines, signs, even a TWA gate information sign. The museum also offers school tours and a guest speaker program.
The museum's library contains an extensive collection of rare items such as aircraft manufacturer company newsletters/magazines, technical manuals for both military and civilian aircraft, industry magazines dating back to World War II and before, historic photographs and other invaluable references on aircraft operation and manufacturing.
The museum has on display "The Spirit of Seventy-Six," which is a DC-3 (DC-3-262, Serial No. 3269). After being in commercial airline service, the plane served as a corporate aircraft for Union Oil Company for 32 years. The plane was built in the Douglas Aircraft Company plant in Santa Monica in January 1941, which was a major producer of both commercial and military aircraft.
The museum claims to be "the only aviation museum and research center situated at a major airport and the only facility with a primary emphasis on contributions of civil aviation to the history and development of Southern California". There are other museums at major airports, however, including the Udvar-Hazy Center of the National Air and Space Museum adjacent to Washington Dulles Airport, the Royal Thai Air Force Museum at Don Mueang Airport, the Suomen ilmailumuseo (Finnish Aviation Museum) at Helsinki-Vantaa Airport, the Frontiers of Flight Museum at Dallas Love Field, the Tulsa Air and Space Museum & Planetarium at Tulsa International Airport and others.
The airport has the administrative offices of Los Angeles World Airports.
Continental Airlines once had its corporate headquarters on the airport property. At a 1962 press conference in the office of Mayor of Los Angeles Sam Yorty, Continental Airlines announced that it planned to move its headquarters to Los Angeles in July 1963. In 1963 Continental Airlines headquarters moved to a two-story, $2.3 million building on the grounds of the airport. The July 2009 "Continental Magazine" issue stated that the move "underlined Continental Airlines western and Pacific orientation". On July 1, 1983 the airline's headquarters were relocated to the America Tower in the Neartown area of Houston.
In addition to Continental Airlines, Western Airlines and Flying Tiger Line also had their headquarters at LAX.
During its history there have been numerous incidents, but only the most notable are summarized below:
The "Imperial Hill" area in El Segundo is a prime location for aircraft spotting, especially for takeoffs. Part of the Imperial Hill area has been set aside as a city park, Clutter's Park.
Another popular spotting location sits under the final approach for runways 24 L&R on a lawn next to the Westchester In-N-Out Burger on Sepulveda Boulevard. This is one of the few remaining locations in Southern California from which spotters may watch such a wide variety of low-flying commercial airliners from directly underneath a flight path.
Aircraft can also be spotted from a small park in the take-off pattern that (normally) extends out over the Pacific: Vista Del Mar Park, on the east side of the street from which it takes its name, Vista Del Mar.
At 12:51 p.m. on Friday, September 21, 2012, a Shuttle Carrier Aircraft carrying the Space Shuttle "Endeavour" landed at LAX on runway 25L. An estimated 10,000 people saw the shuttle land. Interstate 105 was backed up for miles at a standstill, and Imperial Highway was shut down for spectators. The shuttle was quickly removed from the Shuttle Carrier Aircraft, a modified Boeing 747, and moved to a United Airlines hangar, where it spent about a month being prepared for transport to the California Science Center.
Numerous films and television shows have been set or filmed partially at LAX, at least partly due to the airport's proximity to Hollywood studios and Los Angeles. Film shoots at the Los Angeles airports, including LAX, produced $590 million for the Los Angeles region from 2002 to 2005.
La Tène culture
The La Tène culture was a European Iron Age culture.
It developed and flourished during the late Iron Age (from about 450 BCE to the Roman conquest in the 1st century BCE), succeeding the early Iron Age Hallstatt culture without any definite cultural break, under the impetus of considerable Mediterranean influence from the Greeks in pre-Roman Gaul, the Etruscans, and Golasecca culture.
Its territorial extent corresponded to what is now France, Belgium, Switzerland, Austria, Southern Germany, the Czech Republic, parts of Northern Italy, Slovenia and Hungary, as well as adjacent parts of the Netherlands, Slovakia, Croatia, Transylvania (western Romania), and Transcarpathia (western Ukraine).
The Celtiberians of western Iberia shared many aspects of the culture, though not generally the artistic style. To the north extended the contemporary Pre-Roman Iron Age of Northern Europe, including the Jastorf culture of Northern Germany.
Centered on ancient Gaul, the culture became very widespread, and encompasses a wide variety of local differences. It is often distinguished from earlier and neighbouring cultures mainly by the La Tène style of Celtic art, characterized by curving "swirly" decoration, especially of metalwork.
It is named after the type site of La Tène on the north side of Lake Neuchâtel in Switzerland, where thousands of objects had been deposited in the lake, as was discovered after the water level dropped in 1857. La Tène is the type site and the term archaeologists use for the later period of the culture and art of the ancient Celts, a term that is firmly entrenched in the popular understanding, but presents numerous problems for historians and archaeologists.
Extensive contacts through trade are recognized in foreign objects deposited in elite burials; stylistic influences on La Tène material culture can be recognized in Etruscan, Italic, Greek, Dacian and Scythian sources. Dateable Greek pottery and analysis employing scientific techniques such as dendrochronology and thermoluminescence help provide date ranges for an absolute chronology at some La Tène sites.
La Tène history was originally divided into "early", "middle" and "late" stages based on the typology of the metal finds (Otto Tischler 1885), with the Roman occupation greatly disrupting the culture, although many elements remain in Gallo-Roman and Romano-British culture. A broad cultural unity was not paralleled by overarching social-political unifying structures, and the extent to which the material culture can be linguistically linked is debated. The art history of La Tène culture has various schemes of periodization.
The archaeological period is now mostly divided into four sub-periods, following Paul Reinecke.
The preceding final phase of the Hallstatt culture, HaD, c. 650–450 BC, was also widespread across Central Europe, and the transition over this area was gradual, being mainly detected through La Tène style elite artefacts, which first appear on the western edge of the old Hallstatt region.
Though there is no agreement on the precise region in which La Tène culture first developed, there is a broad consensus that the centre of the culture lay on the northwest edges of Hallstatt culture, north of the Alps, in the region bounded in the west by the valleys of the Marne and Moselle and the nearby part of the Rhineland. In the east, the western end of the old Hallstatt core area in modern Bavaria, Austria and Switzerland formed a somewhat separate "eastern style province" in the early La Tène period, joining with the western area in Alsace.
In 1994 a prototypical ensemble of elite grave sites of the early 5th century BCE was excavated at Glauberg in Hesse, northeast of Frankfurt-am-Main, in a region that had formerly been considered peripheral to the La Tène sphere. The site at La Tène itself was therefore near the southern edge of the original "core" area (as is also the case for the Hallstatt site for its core).
The establishment of a Greek colony, soon very successful, at Massalia (modern Marseilles) on the Mediterranean coast of France led to great trade with the Hallstatt areas up the Rhone and Saone river systems, and early La Tène elite burials like the Vix Grave in Burgundy contain imported luxury goods along with artifacts produced locally. Most areas were probably controlled by tribal chiefs living in hilltop forts, while the bulk of the population lived in small villages or farmsteads in the countryside.
By 500 BCE the Etruscans had expanded to border the Celts in north Italy; trade across the Alps began to overtake trade with the Greeks, and the Rhone route declined. Booming areas included the middle Rhine, with large iron ore deposits, the Marne and Champagne regions, and also Bohemia, although here trade with the Mediterranean area was much less important. Trading connections and wealth no doubt played a part in the origin of the La Tène style, though how large a part remains much discussed; specific Mediterranean-derived motifs are evident, but the new style does not depend on them.
Barry Cunliffe notes localization of La Tène culture during the 5th century BCE when there arose "two zones of power and innovation: a Marne – Moselle zone in the west with trading links to the Po Valley via the central Alpine passes and the Golasecca culture, and a Bohemian zone in the east with separate links to the Adriatic via the eastern Alpine routes and the Venetic culture".
From their homeland, La Tène culture expanded in the 4th century BCE to more of modern France, Germany, and Central Europe, and beyond to Hispania, northern and central Italy, the Balkans, and even as far as Asia Minor, in the course of several major migrations. La Tène style artefacts start to appear in Britain around the same time, and Ireland rather later. The style of "Insular La Tène" art is somewhat different and the artefacts are initially found in some parts of the islands but not others. Migratory movements seem at best only partly responsible for the diffusion of La Tène culture there, and perhaps other parts of Europe.
By about 400 BCE the evidence for Mediterranean trade becomes sparse; this may have been because the expanding Celtic populations began to migrate south and west, coming into violent conflict with the established populations, including the Etruscans and Romans.
The settled life in much of the La Tène homelands also seems to have become much more unstable and prone to wars. In about 387 BCE the Celts under Brennus defeated the Romans and then sacked Rome, establishing themselves as the most prominent threat to the Roman homeland, a status they would retain through a series of Roman-Gallic wars until Julius Caesar's final conquest of Gaul in 58–50 BCE. The Romans prevented the Celts from reaching very far south of Rome, but on the other side of the Adriatic Sea groups passed through the Balkans to reach Greece, where Delphi was attacked in 279 BCE, and Asia, where Galatia was established as a Celtic area of Anatolia. By this time the La Tène style was spreading to the British Isles, though apparently without any significant movements in population.
After about 275 BCE, Roman expansion into the La Tène area began, at first with the conquest of Gallia Cisalpina.
The conquest of Celtic Gaul began in 121 BCE and was complete with the Gallic Wars of the 50s BCE.
Gaulish culture now quickly assimilated to Roman culture, giving rise to the hybrid Gallo-Roman culture of Late Antiquity.
La Tène metalwork in bronze, iron and gold, developing technologically out of Hallstatt culture, is stylistically characterized by inscribed and inlaid intricate spirals and interlace, on fine bronze vessels, helmets and shields, horse trappings and elite jewelry, especially the neck rings called torcs and elaborate clasps called "fibulae". It is characterized by elegant, stylized curvilinear animal and vegetal forms, allied with the Hallstatt traditions of geometric patterning.
The Early Style of La Tène art and culture mainly featured static, geometric decoration, while the transition to the Developed Style constituted a shift to movement-based forms, such as triskeles. Some subsets within the Developed Style contain more specific design trends, such as the recurrent serpentine scroll of the Waldalgesheim Style.
Initially La Tène people lived in open settlements that were dominated by the chieftains’ hill forts. The development of towns—"oppida"—appears in mid-La Tène culture. La Tène dwellings were carpenter-built rather than of masonry. La Tène peoples also dug ritual shafts, in which votive offerings and even human sacrifices were cast. Severed heads appear to have held great power and were often represented in carvings. Burial sites included weapons, carts, and both elite and household goods, evoking a strong continuity with an afterlife.
Elaborate burials also reveal a wide network of trade. In Vix, France, an elite woman of the 6th century BCE was buried with a very large bronze "wine-mixer" made in Greece. Exports from La Tène cultural areas to the Mediterranean cultures were based on salt, tin, copper, amber, wool, leather, furs and gold.
Artefacts typical of the La Tène culture were also discovered in stray finds as far afield as Scandinavia, Northern Germany, Poland and in the Balkans. It is therefore common to also talk of the "La Tène period" in the context of those regions even though they were never part of the La Tène culture proper, but connected to its core area via trade.
The bearers of the La Tène culture were the people known as Celts or Gauls to ancient ethnographers.
Ancient Celtic culture had no written literature of its own, but rare examples of epigraphy in the Greek or Latin alphabets exist, allowing the fragmentary reconstruction of Continental Celtic.
Our knowledge of this cultural area derives from three sources: from archaeological evidence, from Greek and Latin literary evidence, and from ethnographical evidence suggesting some La Tène artistic and cultural survivals in traditionally Celtic regions of far western Europe.
Some of the societies that are archaeologically identified with La Tène material culture were identified by Greek and Roman authors from the 5th century onwards as "Keltoi" ("Celts") and "Galli" ("Gauls"). Herodotus (iv.49) correctly placed "Keltoi" at the source of the Ister/Danube, in the heartland of La Tène material culture: "The Ister flows right across Europe, rising in the country of the Celts".
Whether the usage of classical sources means that the whole of La Tène culture can be attributed to a unified Celtic people is difficult to assess; archaeologists have repeatedly concluded that language, material culture, and political affiliation do not necessarily run parallel. Frey (2004) notes that in the 5th century, "burial customs in the Celtic world were not uniform; rather, localised groups had their own beliefs, which, in consequence, also gave rise to distinct artistic expressions".
The La Tène type site is on the northern shore of Lake Neuchâtel, Switzerland, where the small river Thielle, connecting to another lake, enters Lake Neuchâtel.
In 1857, prolonged drought lowered the waters of the lake by about 2 m.
On the northernmost tip of the lake, between the river and a point south of the village of Epagnier (), Hansli Kopp, looking for antiquities for Colonel Frédéric Schwab, discovered several rows of wooden piles that still reached up about 50 cm into the water. From among these, Kopp collected about forty iron swords.
The Swiss archaeologist Ferdinand Keller published his findings in 1868 in his influential first report on the Swiss pile dwellings ("Pfahlbaubericht"). In 1863 he interpreted the remains as a Celtic village built on piles. Eduard Desor, a geologist from Neuchâtel, started excavations on the lakeshore soon afterwards. He interpreted the site as an armory, erected on platforms on piles over the lake and later destroyed by enemy action. Another interpretation accounting for the presence of cast iron swords that had not been sharpened, was of a site for ritual depositions.
With the first systematic lowering of the Swiss lakes from 1868 to 1883, the site fell completely dry. In 1880, Emile Vouga, a teacher from Marin-Epagnier, uncovered the wooden remains of two bridges (designated "Pont Desor" and "Pont Vouga") originally over 100 m long, that crossed the little Thielle River (today a nature reserve) and the remains of five houses on the shore. After Vouga had finished, F. Borel, curator of the Marin museum, began to excavate as well. In 1885 the canton asked the Société d'Histoire of Neuchâtel to continue the excavations, the results of which were published by Vouga in the same year.
All in all, over 2500 objects, mainly made from metal, have been excavated in La Tène. Weapons predominate, there being 166 swords (most without traces of wear), 270 lanceheads, and 22 shield bosses, along with 385 brooches, tools, and parts of chariots. Numerous human and animal bones were found as well. The site was used from the 3rd century BCE, with a peak of activity around 200 BCE and abandonment by about 60 BCE. Interpretations of the site vary. Some scholars believe the bridge was destroyed by high water, while others see it as a place of sacrifice after a successful battle (there are almost no female ornaments).
An exhibition marking the 150th anniversary of the discovery of the La Tène site opened in 2007 at the Musée Schwab in Biel/Bienne, Switzerland, moving to Zürich in 2008 and Mont Beuvray in Burgundy in 2009.
Lorenz curve
In economics, the Lorenz curve is a graphical representation of the distribution of income or of wealth. It was developed by Max O. Lorenz in 1905 for representing inequality of the wealth distribution.
The curve is a graph showing the proportion of overall income or wealth assumed by the bottom "x"% of the people, although this is not rigorously true for a finite population (see below). It is often used to represent income distribution, where it shows for the bottom "x"% of households, what percentage ("y"%) of the total income they have. The percentage of households is plotted on the "x"-axis, the percentage of income on the "y"-axis. It can also be used to show distribution of assets. In such use, many economists consider it to be a measure of social inequality.
The concept is useful in describing inequality among the size of individuals in ecology and in studies of biodiversity, where the cumulative proportion of species is plotted against the cumulative proportion of individuals. It is also useful in business modeling: e.g., in consumer finance, to measure the actual percentage "y"% of delinquencies attributable to the "x"% of people with worst risk scores.
Data from 2005.
Points on the Lorenz curve represent statements such as, "the bottom 20% of all households have 10% of the total income."
A perfectly equal income distribution would be one in which every person has the same income. In this case, the bottom "N"% of society would always have "N"% of the income. This can be depicted by the straight line "y" = "x", called the "line of perfect equality."
By contrast, a perfectly unequal distribution would be one in which one person has all the income and everyone else has none. In that case, the curve would be at "y" = 0% for all "x" < 100%, and "y" = 100% when "x" = 100%. This curve is called the "line of perfect inequality."
The Gini coefficient is the ratio of the area between the line of perfect equality and the observed Lorenz curve to the area between the line of perfect equality and the line of perfect inequality. The higher the coefficient, the more unequal the distribution is. In the diagram on the right, this is given by the ratio "A"/("A+B"), where "A" and "B" are the areas of regions as marked in the diagram.
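The ratio "A"/("A"+"B") can be computed directly from sample data. The sketch below (plain NumPy, with made-up sample values) approximates the area "B" under the empirical Lorenz curve with the trapezoidal rule; since the triangle under the line of perfect equality has area 1/2, the Gini coefficient is 1 − 2"B".

```python
import numpy as np

def gini(values):
    """Gini coefficient A/(A+B): one minus twice the area under
    the empirical Lorenz curve (trapezoidal rule)."""
    y = np.sort(np.asarray(values, dtype=float))
    n = y.size
    L = np.concatenate(([0.0], np.cumsum(y) / y.sum()))  # income shares L_i
    F = np.arange(n + 1) / n                             # population shares F_i
    area_B = ((L[:-1] + L[1:]) / 2 * np.diff(F)).sum()   # area under the curve
    return 1.0 - 2.0 * area_B                            # since A + B = 1/2

print(gini([1, 1, 1, 1]))   # perfectly equal sample -> 0.0
print(gini([0, 0, 0, 4]))   # one person holds all   -> 0.75, i.e. (n-1)/n
```

Note that for a finite sample of size "n" the maximum attainable value is ("n" − 1)/"n", not 1, because the piecewise linear Lorenz curve can only approximate the line of perfect inequality.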
The Lorenz curve is a probability plot (a P–P plot) comparing the distribution of a parameter in a population against a hypothetical uniform distribution of that parameter. It can usually be represented by a function "L"("F"), where "F", the cumulative portion of the population, is represented by the horizontal axis, and "L", the cumulative portion of the total wealth or income, is represented by the vertical axis.
For a population of size "n", with a sequence of values "y""i", "i" = 1 to "n", that are indexed in non-decreasing order ("y""i" ≤ "y""i"+1), the Lorenz curve is the continuous piecewise linear function connecting the points ("F""i", "L""i"), "i" = 0 to "n", where "F"0 = 0, "L"0 = 0, and for "i" = 1 to "n":

F_i = i/n,  L_i = (y_1 + … + y_i) / (y_1 + … + y_n)
For a discrete probability function "f"("y"), let "y""i", "i" = 1 to "n", be the points with non-zero probabilities indexed in increasing order ("y""i" < "y""i"+1). The Lorenz curve is the continuous piecewise linear function connecting the points ("F""i", "L""i"), "i" = 0 to "n", where "F"0 = 0, "L"0 = 0, and for "i" = 1 to "n":

F_i = f(y_1) + … + f(y_i),  L_i = (f(y_1)y_1 + … + f(y_i)y_i) / (f(y_1)y_1 + … + f(y_n)y_n)
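The finite-population construction can be transcribed directly. The sketch below (plain NumPy, with made-up sample values) builds the points ("F""i", "L""i") of the piecewise linear Lorenz curve:

```python
import numpy as np

def lorenz_points(values):
    """Return the points (F_i, L_i), i = 0..n, of the piecewise
    linear Lorenz curve for a finite sample."""
    y = np.sort(np.asarray(values, dtype=float))  # non-decreasing order
    n = y.size
    F = np.arange(n + 1) / n                      # F_i = i / n
    L = np.concatenate(([0.0], np.cumsum(y))) / y.sum()
    return F, L

F, L = lorenz_points([3, 1, 6, 2, 8])   # total income = 20
print([(float(f), float(l)) for f, l in zip(F, L)])
# the bottom 40% (the two poorest of five) hold (1 + 2) / 20 = 15% of the total
```

Reading a point off the curve gives statements of the form used above: here "F" = 0.4 pairs with "L" = 0.15.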
For a probability density function "f"("x") with the cumulative distribution function "F"("x"), the Lorenz curve "L" is given by:

L(F(x)) = (1/μ) ∫_{−∞}^{x} t f(t) dt

where "μ" denotes the average. The Lorenz curve "L(F)" may then be plotted as a function parametric in x: "L(x)" vs. "F(x)". In other contexts, the quantity computed here is known as the length biased (or size biased) distribution; it also has an important role in renewal theory.
Alternatively, for a cumulative distribution function "F"("x") with inverse "x"("F"), the Lorenz curve "L"("F") is directly given by:

L(F) = (1/μ) ∫_0^F x(F′) dF′
The inverse "x"("F") may not exist because the cumulative distribution function has intervals of constant values. However, the previous formula can still apply by generalizing the definition of "x"("F"):

x(F) = inf { y : F(y) ≥ F }
For an example of a Lorenz curve, see Pareto distribution.
A Lorenz curve always starts at (0,0) and ends at (1,1).
The Lorenz curve is not defined if the mean of the probability distribution is zero or infinite.
The Lorenz curve for a probability distribution is a continuous function. However, Lorenz curves representing discontinuous functions can be constructed as the limit of Lorenz curves of probability distributions, the line of perfect inequality being an example.
The information in a Lorenz curve may be summarized by the Gini coefficient and the Lorenz asymmetry coefficient.
The Lorenz curve cannot rise above the line of perfect equality.
If the variable being measured cannot take negative values, the Lorenz curve cannot sink below the line of perfect inequality, and it is increasing and convex.
Note however that a Lorenz curve for net worth would start out by going negative, because some people have a negative net worth due to debt.
The Lorenz curve is invariant under positive scaling. If X is a random variable, for any positive number "c" the random variable "c" X has the same Lorenz curve as X.
The Lorenz curve is flipped twice, once about "F" = 0.5 and once about "L" = 0.5, by negation. If X is a random variable with Lorenz curve "L"X("F"), then −X has the Lorenz curve:

L_{−X}(F) = 1 − L_X(1 − F)
The Lorenz curve is changed by translations so that the equality gap "F" − "L"("F") changes in proportion to the ratio of the original and translated means. If X is a random variable with a Lorenz curve "L"X("F") and mean "μ"X, then for any constant "c" ≠ −"μ"X, X + "c" has a Lorenz curve defined by:

L_{X+c}(F) = (μ_X L_X(F) + cF) / (μ_X + c)
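The translation rule, "L"X+c("F") = ("μ"X "L"X("F") + "cF")/("μ"X + "c"), is easy to verify numerically on a sample (a sketch; the values are made up):

```python
import numpy as np

def lorenz(values):
    """Empirical Lorenz curve values L_0..L_n for a sample."""
    y = np.sort(np.asarray(values, dtype=float))
    return np.concatenate(([0.0], np.cumsum(y))) / y.sum()

x = np.array([1.0, 2.0, 3.0, 4.0])   # made-up sample
c = 2.0                              # translation constant
mu = x.mean()
F = np.arange(x.size + 1) / x.size   # F_i = i/n
L_x = lorenz(x)
L_shifted = lorenz(x + c)            # Lorenz curve of the translated sample
predicted = (mu * L_x + c * F) / (mu + c)
print(np.allclose(L_shifted, predicted))   # -> True
```

Since both the empirical curve and the predicted curve are linear between grid points, agreement at the points ("F""i", "L""i") implies agreement everywhere.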
For a cumulative distribution function "F"("x") with mean "μ" and (generalized) inverse "x"("F"), for any "F" with 0 < "F" < 1:

L′("F") = "x"("F")/"μ"

that is, the slope of the Lorenz curve at "F" is the corresponding quantile "x"("F") divided by the mean.
Literate programming
Literate programming is a programming paradigm introduced by Donald Knuth in which a computer program is given as an explanation of its logic in a natural language, such as English, interspersed with snippets of macros and traditional source code, from which compilable source code can be generated. The approach is used in scientific computing and in data science routinely for reproducible research and open access purposes. Literate programming tools are used by millions of programmers today.
The literate programming paradigm, as conceived by Knuth, represents a move away from writing computer programs in the manner and order imposed by the computer, and instead enables programmers to develop programs in the order demanded by the logic and flow of their thoughts. Literate programs are written as an uninterrupted exposition of logic in an ordinary human language, much like the text of an essay, in which macros are included to hide abstractions and traditional source code.
Literate programming (LP) tools are used to obtain two representations from a literate source file: one suitable for further compilation or execution by a computer, the "tangled" code, and another for viewing as formatted documentation, which is said to be "woven" from the literate source. While the first generation of literate programming tools were computer language-specific, the later ones are language-agnostic and exist above the programming languages.
Literate programming was first introduced by Knuth in 1984. The main intention behind this approach was to treat a program as literature understandable to human beings. This approach was implemented at Stanford University as a part of research on algorithms and digital typography. This implementation was called "WEB" by Knuth since he believed that it was one of the few three-letter words of English that hadn't already been applied to computing. The name also aptly evokes the intricate nature of software delicately pieced together from simple materials.
Literate programming is writing out the program logic in a human language with included (separated by a primitive markup) code snippets and macros. Macros in a literate source file are simply title-like or explanatory phrases in a human language that describe human abstractions created while solving the programming problem, and hiding chunks of code or lower-level macros. These macros are similar to the algorithms in pseudocode typically used in teaching computer science. These arbitrary explanatory phrases become precise new operators, created on the fly by the programmer, forming a "meta-language" on top of the underlying programming language.
A preprocessor is used to substitute arbitrary hierarchies, or rather "interconnected 'webs' of macros", to produce the compilable source code with one command ("tangle"), and documentation with another ("weave"). The preprocessor also provides an ability to write out the content of the macros and to add to already created macros in any place in the text of the literate program source file, thereby disposing of the need to keep in mind the restrictions imposed by traditional programming languages or to interrupt the flow of thought.
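As an illustration only (this is not Knuth's WEB itself), the "tangle" step can be sketched as a toy preprocessor, assuming a simplified noweb-like syntax in which a chunk is defined by `<<name>>=` ... `@` and referenced by `<<name>>`, and the root chunk is `<<*>>`:

```python
import re

# A toy "tangle": expand noweb-style chunks recursively from the root.
SOURCE = """
<<*>>=
<<initialize counters>>
<<scan the input>>
@
<<initialize counters>>=
lines = words = 0
@
<<scan the input>>=
for line in text.splitlines():
    lines += 1
    words += len(line.split())
@
"""

def parse_chunks(src):
    """Collect chunk bodies; repeated definitions are concatenated."""
    chunks = {}
    for m in re.finditer(r"<<(.+?)>>=\n(.*?)\n@", src, re.S):
        chunks.setdefault(m.group(1), []).append(m.group(2))
    return {k: "\n".join(v) for k, v in chunks.items()}

def tangle(name, chunks):
    """Expand chunk references recursively, preserving indentation."""
    out = []
    for line in chunks[name].splitlines():
        m = re.fullmatch(r"(\s*)<<(.+?)>>", line)
        if m:
            indent, ref = m.groups()
            out.extend(indent + l for l in tangle(ref, chunks).splitlines())
        else:
            out.append(line)
    return "\n".join(out)

code = tangle("*", parse_chunks(SOURCE))
print(code)                 # the machine-ordered program, reassembled

text = "one two\nthree"     # run the tangled program on sample input
exec(code)
print(lines, words)         # -> 2 3
```

The "weave" step would instead typeset the prose and pretty-print the chunks in their expository order; real tools (noweb, CWEB) also handle cross-references and indexing, which this sketch omits.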
According to Knuth,
literate programming provides higher-quality programs, since it forces programmers to explicitly state the thoughts behind the program, making poorly thought-out design decisions more obvious. Knuth also claims that literate programming provides a first-rate documentation system, which is not an add-on, but is grown naturally in the process of exposition of one's thoughts during a program's creation. The resulting documentation allows the author to restart his own thought processes at any later time, and allows other programmers to understand the construction of the program more easily. This differs from traditional documentation, in which a programmer is presented with source code that follows a compiler-imposed order, and must decipher the thought process behind the program from the code and its associated comments. The meta-language capabilities of literate programming are also claimed to facilitate thinking, giving a higher "bird's eye view" of the code and increasing the number of concepts the mind can successfully retain and process. Applicability of the concept to programming on a large scale, that of commercial-grade programs, is proven by an edition of TeX code as a literate program.
Knuth also claims that literate programming can lead to easy porting of software to multiple environments, and even cites the implementation of TeX as an example.
Literate programming is very often misunderstood to refer only to formatted documentation produced from a common file with both source code and comments – which is properly called documentation generation – or to voluminous commentaries included with code. This is the converse of literate programming: well-documented code or documentation extracted from code follows the structure of the code, with documentation embedded in the code; while in literate programming, code is embedded in documentation, with the code following the structure of the documentation.
This misconception has led to claims that comment-extraction tools, such as the Perl Plain Old Documentation or Java Javadoc systems, are "literate programming tools". However, because these tools do not implement the "web of abstract concepts" hiding behind the system of natural-language macros, or provide an ability to change the order of the source code from a machine-imposed sequence to one convenient to the human mind, they cannot properly be called literate programming tools in the sense intended by Knuth.
In 1986, Jon Bentley asked Knuth to demonstrate the concept of literate programming by writing a program in WEB. Knuth came up with an eight-page monolithic listing that was published together with a critique by Douglas McIlroy of Bell Labs. McIlroy praised the intricacy of Knuth's solution and his choice of a data structure (Frank M. Liang's hash trie), but noted that a more practical solution, much faster to implement, debug and modify, takes only six lines of shell script by reusing standard Unix utilities. McIlroy concluded:
McIlroy later admitted that his critique was unfair, since he criticized Knuth's program on engineering grounds, while Knuth's purpose was only to demonstrate the literate programming technique. In 1987, "Communications of the ACM" published a followup article which illustrated literate programming with a C program that combined the artistic approach of Knuth with the engineering approach of McIlroy, with a critique by John Gilbert.
Implementing literate programming consists of two steps:
Weaving and tangling are done on the same source so that they are consistent with each other.
A classic example of literate programming is the literate implementation of the standard Unix "wc" word counting program. Knuth presented a CWEB version of this example in Chapter 12 of his "Literate Programming" book. The same example was later rewritten for the noweb literate programming tool. This example provides a good illustration of the basic elements of literate programming.
The following snippet of the "wc" literate program shows how arbitrary descriptive phrases in a natural language are used in a literate program to create macros, which act as new "operators" in the literate programming language, and hide chunks of code or other macros. The mark-up notation consists of double angle brackets ("<<…>>") that indicate macros, and the "@" symbol, which indicates the end of the code section in a noweb file. The "<<*>>" symbol stands for the "root", the topmost node the literate programming tool will start expanding the web of macros from. Actually, writing out the expanded source code can be done from any section or subsection (i.e. a piece of code designated as "<<name of the chunk>>=", with the equal sign), so one literate program file can contain several files with machine source code.
The purpose of wc is to count lines, words, and/or characters in a list of files. The
number of lines in a file is .../more explanations/
Here, then, is an overview of the file wc.c that is defined by the noweb program wc.nw:
We must include the standard I/O definitions, since we want to send formatted output
to stdout and stderr.
The unraveling of the chunks can be done in any place in the literate program text file, not necessarily in the order they are sequenced in the enclosing chunk, but as is demanded by the logic reflected in the explanatory text that envelops the whole program.
Macros are not the same as "section names" in standard documentation. Literate programming macros can hide any chunk of code behind themselves, and be used inside any low-level machine language operators, often inside logical operators such as "if", "while" or "case". This is illustrated by the following snippet of the "wc" literate program.
The present chunk, which does the counting, was actually one of
the simplest to write. We look at each character and change state if it begins or ends
a word.
In fact, macros can stand for any arbitrary chunk of code or other macros, and are thus more general than top-down or bottom-up "chunking", or than subsectioning. Knuth says that when he realized this, he began to think of a program as a "web" of various parts.
In a noweb literate program, besides the free order of their exposition, the chunks behind macros, once introduced with "<<name of the chunk>>=", can be grown later in any place in the file by simply writing "<<name of the chunk>>=" again and adding more content to it, as the following snippet illustrates ("plus" is added by the document formatter for readability, and is not in the code).
If we made these variables local to main, we would have to do this initialization
explicitly; however, C globals are automatically zeroed. (Or rather, "statically zeroed." Get it?)
The documentation for a literate program is produced as part of writing the program. Instead of comments provided as side notes to source code, a literate program contains the explanation of concepts on each level, with lower level concepts deferred to their appropriate place, which allows for better communication of thought. The snippets of the literate "wc" above show how an explanation of the program and its source code are interwoven. Such exposition of ideas creates the flow of thought that is like a literary work. Knuth wrote a "novel" which explains the code of the interactive fiction game Colossal Cave Adventure.
The first published literate programming environment was WEB, introduced by Knuth in 1981 for his TeX typesetting system; it uses Pascal as its underlying programming language and TeX for typesetting of the documentation. The complete commented TeX source code was published in Knuth's "TeX: The Program", volume B of his 5-volume "Computers and Typesetting". Knuth had privately used a literate programming system called DOC as early as 1979. He was inspired by the ideas of Pierre-Arnoul de Marneffe. The free CWEB, written by Knuth and Silvio Levy, is WEB adapted for C and C++, runs on most operating systems and can produce TeX and PDF documentation.
There are various other implementations of the literate programming concept, some of which have no macros and hence violate the order-of-human-logic principle.
Other useful tools exist as well. For example, in a literate Haskell source file, prose and code are interleaved directly, as in the following fragment:
% here text describing the function:
fact 0 = 1
fact (n+1) = (n+1) * fact n
here more text
comp :: (beta -> gamma) -> (alpha -> beta) -> (alpha -> gamma)
Logistic map
The logistic map is a polynomial mapping (equivalently, recurrence relation) of degree 2, often cited as an archetypal example of how complex, chaotic behaviour can arise from very simple non-linear dynamical equations. The map was popularized in a 1976 paper by the biologist Robert May, in part as a discrete-time demographic model analogous to the logistic equation first created by Pierre François Verhulst.
Mathematically, the logistic map is written

x_{n+1} = r x_n (1 − x_n)

where "x""n" is a number between zero and one that represents the ratio of existing population to the maximum possible population. The values of interest for the parameter "r" (sometimes also denoted "μ") are those in the interval [0, 4].
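As a quick sketch (starting values chosen arbitrarily), the recurrence can be iterated directly:

```python
def logistic_orbit(r, x0, n):
    """Return [x_0, x_1, ..., x_n] for x_{k+1} = r * x_k * (1 - x_k)."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# r = 2.0: the orbit settles rapidly on the fixed point 1 - 1/r = 0.5
print(logistic_orbit(2.0, 0.2, 10)[-1])

# r = 4.0: chaotic regime, the orbit wanders over (0, 1) without settling
print(logistic_orbit(4.0, 0.2, 10)[-1])
```

For "r" = 2 the fixed point 1 − 1/"r" = 0.5 is superattracting (the derivative "r"(1 − 2"x") vanishes there), so the orbit converges in just a few steps.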
This nonlinear difference equation is intended to capture two effects:
However, as a demographic model the logistic map has the pathological problem that some initial conditions and parameter values (for example, if "r" > 4) lead to negative population sizes. This problem does not appear in the older Ricker model, which also exhibits chaotic dynamics.
The "r" = 4 case of the logistic map is a nonlinear transformation of both the bit-shift map and the "μ" = 2 case of the tent map.
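The conjugacy with the bit-shift (dyadic) map can be checked numerically: substituting x = sin²(πθ) turns θ_{n+1} = 2θ_n mod 1 into x_{n+1} = 4x_n(1 − x_n), since sin²(2πθ) = 4 sin²(πθ)(1 − sin²(πθ)). A sketch with an arbitrary starting value:

```python
import math

def logistic4(x):
    """One step of the logistic map with r = 4."""
    return 4.0 * x * (1.0 - x)

def bit_shift(theta):
    """One step of the bit-shift (dyadic) map."""
    return (2.0 * theta) % 1.0

theta = 0.12345                       # arbitrary starting angle
x = math.sin(math.pi * theta) ** 2    # conjugate starting point

for _ in range(10):
    x = logistic4(x)
    theta = bit_shift(theta)
    # the two descriptions stay in lock-step under x = sin^2(pi * theta)
    assert abs(x - math.sin(math.pi * theta) ** 2) < 1e-6

print("conjugacy verified over 10 steps")
```

Because each step of the bit-shift map simply discards one binary digit of θ, this conjugacy makes the sensitivity of the "r" = 4 logistic map to its initial condition explicit.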
The image below shows the amplitude and frequency content of some logistic map iterates for parameter values ranging from 2 to 4.
By varying the parameter "r", the following behavior is observed: for "r" between 0 and 1 the population eventually dies out; for "r" between 1 and 3 it approaches the fixed value ("r" − 1)/"r"; just above 3 it oscillates between two values, then four, eight, and so on in a period-doubling cascade; and beyond "r" ≈ 3.56995 the behavior is chaotic for most (but not all) values of "r", with windows of periodic behavior such as the period-3 window beginning near "r" ≈ 3.8284.
For any value of "r" there is at most one stable cycle. If a stable cycle exists, it is globally stable, attracting almost all points. Some values of "r" with a stable cycle of some period have infinitely many unstable cycles of various periods.
The bifurcation diagram at right summarizes this. The horizontal axis shows the possible values of the parameter "r" while the vertical axis shows the set of values of "x" visited asymptotically from almost all initial conditions by the iterates of the logistic equation with that value.
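The diagram itself can be approximated by sampling, for each "r", the values the orbit visits after a long transient. A sketch (transient and sample lengths are arbitrary choices):

```python
def bifurcation_points(r_values, transient=500, keep=200):
    """For each r, return the set of values the orbit visits
    asymptotically (rounded, so a p-cycle yields p values)."""
    result = []
    for r in r_values:
        x = 0.5
        for _ in range(transient):      # let the transient die out
            x = r * x * (1 - x)
        attractor = set()
        for _ in range(keep):           # sample the attractor
            x = r * x * (1 - x)
            attractor.add(round(x, 6))
        result.append((r, sorted(attractor)))
    return result

for r, attractor in bifurcation_points([2.8, 3.2, 3.5]):
    print(r, len(attractor))
# 2.8 -> 1 (fixed point), 3.2 -> 2 (2-cycle), 3.5 -> 4 (4-cycle)
```

Plotting each pair ("r", value) over a fine grid of "r" reproduces the familiar fig-tree shape of the diagram, including the period-doubling cascade.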
The bifurcation diagram is self-similar: if we zoom in on the above-mentioned value and focus on one arm of the three, the situation nearby looks like a shrunk and slightly distorted version of the whole diagram. The same is true for all other non-chaotic points. This is an example of the deep and ubiquitous connection between chaos and fractals.
The relative simplicity of the logistic map makes it a widely used point of entry into a consideration of the concept of chaos. A rough description of chaos is that chaotic systems exhibit a great sensitivity to initial conditions—a property of the logistic map for most values of r between about 3.57 and 4 (as noted above). A common source of such sensitivity to initial conditions is that the map represents a repeated folding and stretching of the space on which it is defined. In the case of the logistic map, the quadratic difference equation describing it may be thought of as a stretching-and-folding operation on the interval [0, 1].
The following figure illustrates the stretching and folding over a sequence of iterates of the map. Figure (a), left, shows a two-dimensional Poincaré plot of the logistic map's state space for r = 4, and clearly shows the quadratic curve of the difference equation. However, we can embed the same sequence in a three-dimensional state space, in order to investigate the deeper structure of the map. Figure (b), right, demonstrates this, showing how initially nearby points begin to diverge, particularly in those regions of x corresponding to the steeper sections of the plot.
This stretching-and-folding does not just produce a gradual divergence of the sequences of iterates, but an exponential divergence (see Lyapunov exponents), evidenced also by the complexity and unpredictability of the chaotic logistic map. In fact, exponential divergence of sequences of iterates explains the connection between chaos and unpredictability: a small error in the supposed initial state of the system will tend to correspond to a large error later in its evolution. Hence, predictions about future states become progressively (indeed, exponentially) worse when there are even very small errors in our knowledge of the initial state. This quality of unpredictability and apparent randomness led the logistic map equation to be used as a pseudo-random number generator in early computers.
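The rate of this exponential divergence is the Lyapunov exponent, which can be estimated numerically as the orbit average of ln|f′(x)| = ln|r(1 − 2x)|. A minimal sketch (the sample values of r and x₀, and the re-seeding guard against floating-point rounding onto degenerate points, are illustrative choices):

```python
import math

def lyapunov_estimate(r, x0, n=100_000):
    """Estimate the Lyapunov exponent of x -> r*x*(1-x) as the orbit
    average of ln|f'(x)| = ln|r*(1 - 2x)|."""
    x, total = x0, 0.0
    for _ in range(n):
        if x in (0.0, 0.5, 1.0):  # re-seed off measure-zero degenerate points
            x = x0
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

lam = lyapunov_estimate(4.0, 0.3)
# For r = 4 the exact Lyapunov exponent is ln 2 ≈ 0.693 (chaotic),
# while a stable regime such as r = 2.8 yields a negative estimate.
```

A positive exponent quantifies the statement above: an initial error of size ε grows roughly like ε·e^(λn).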
Since the map is confined to an interval on the real number line, its dimension is less than or equal to unity. Numerical estimates yield a correlation dimension of 0.500 ± 0.005 (Grassberger, 1983), a Hausdorff dimension of about 0.538 (Grassberger 1981), and an information dimension of approximately 0.5170976 (Grassberger 1983) for r ≈ 3.5699 (the onset of chaos). Note: It can be shown that the correlation dimension is certainly between 0.4926 and 0.5024.
It is often possible, however, to make precise and accurate statements about the "likelihood" of a future state in a chaotic system. If a (possibly chaotic) dynamical system has an attractor, then there exists a probability measure that gives the long-run proportion of time spent by the system in the various regions of the attractor. In the case of the logistic map with parameter r = 4 and an initial state in [0, 1], the attractor is also the interval [0, 1] and the probability measure corresponds to the beta distribution with parameters a = 0.5 and b = 0.5. Specifically, the invariant measure has density 1 / (π √(x (1 − x))).
Unpredictability is not randomness, but in some circumstances looks very much like it. Hence, and fortunately, even if we know very little about the initial state of the logistic map (or some other chaotic system), we can still say something about the distribution of states arbitrarily far into the future, and use this knowledge to inform decisions based on the state of the system.
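This long-run distribution can be checked empirically by comparing a long orbit's histogram at r = 4 with the arcsine (beta(0.5, 0.5)) prediction. The sketch below is a minimal illustration; the sample sizes, initial condition, and the re-seeding guard against a floating-point rounding artifact are all illustrative choices:

```python
import math

def orbit_frequencies(r, x0, n=200_000, bins=10):
    """Fraction of a long logistic-map orbit landing in each of `bins`
    equal subintervals of [0, 1]."""
    x = x0
    counts = [0] * bins
    for _ in range(n):
        x = r * x * (1.0 - x)
        if x in (0.0, 1.0):  # guard against rounding onto the absorbing point
            x = x0
        counts[min(int(x * bins), bins - 1)] += 1
    return [c / n for c in counts]

def arcsine_mass(a, b):
    """Mass that the beta(0.5, 0.5) (arcsine) density 1/(pi*sqrt(x(1-x)))
    assigns to the interval [a, b], via its CDF (2/pi)*asin(sqrt(x))."""
    cdf = lambda t: (2.0 / math.pi) * math.asin(math.sqrt(t))
    return cdf(b) - cdf(a)

freqs = orbit_frequencies(4.0, 0.3)
predicted = [arcsine_mass(i / 10, (i + 1) / 10) for i in range(10)]
# freqs and predicted agree bin by bin to within sampling error
```

The characteristic U-shape of the arcsine density explains why chaotic r = 4 orbits spend most of their time near the endpoints 0 and 1.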
Although exact solutions to the recurrence relation are only available in a small number of cases, a closed-form upper bound on the logistic map is known when 0 ≤ r ≤ 1. There are two aspects of the behavior of the logistic map that should be captured by an upper bound in this regime: the asymptotic geometric decay with constant r, and the fast initial decay when x_0 is close to 1, driven by the (1 − x_n) term in the recurrence relation. The following bound captures both of these effects:
The special case of r = 4 can in fact be solved exactly, as can the case with r = 2; however, the general case can only be predicted statistically.
The solution when r = 4 is x_n = sin^2(2^n θ π),
where the initial condition parameter θ is given by θ = (1/π) sin^(−1)(√x_0).
For rational θ, after a finite number of iterations x_n maps into a periodic sequence. But almost all θ are irrational, and, for irrational θ, x_n never repeats itself – it is non-periodic. This solution equation clearly demonstrates the two key features of chaos – stretching and folding: the factor 2^n shows the exponential growth of stretching, which results in sensitive dependence on initial conditions, while the squared sine function keeps x_n folded within the range [0, 1].
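The closed form x_n = sin^2(2^n π θ) can be checked numerically against direct iteration. A sketch (moderate n only, since the 2^n factor amplifies floating-point error in θ exactly as the map amplifies initial-condition error):

```python
import math

def iterate_logistic(x0, n, r=4.0):
    """Iterate x -> r*x*(1-x) n times."""
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

def closed_form_r4(x0, n):
    """x_n = sin^2(2^n * pi * theta) with theta = (1/pi) * asin(sqrt(x0))."""
    theta = math.asin(math.sqrt(x0)) / math.pi
    return math.sin((2 ** n) * math.pi * theta) ** 2

# The two computations agree closely for moderate n:
a, b = iterate_logistic(0.3, 8), closed_form_r4(0.3, 8)
```

The agreement degrades for large n for the same reason the map is unpredictable: the phase 2^n π θ doubles its sensitivity to θ at every step.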
For r = 4 an equivalent solution in terms of complex numbers instead of trigonometric functions is x_n = 1/2 − (α^(2^n) + α^(−2^n))/4,
where α is either of the complex numbers 1 − 2x_0 ± 2i √(x_0(1 − x_0)),
with modulus equal to 1. Just as the squared sine function in the trigonometric solution leads to neither shrinkage nor expansion of the set of points visited, in the latter solution this effect is accomplished by the unit modulus of α.
By contrast, the solution when r = 2 is x_n = 1/2 − 1/2 (1 − 2x_0)^(2^n)
for x_0 ∈ [0, 1). Since (1 − 2x_0) ∈ (−1, 1) for any value of x_0 other than the unstable fixed point 0, the term (1 − 2x_0)^(2^n) goes to 0 as n goes to infinity, so x_n goes to the stable fixed point 1/2.
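A quick numerical check of this r = 2 closed form (the initial condition is chosen arbitrarily for illustration):

```python
def iterate_map(x0, n, r=2.0):
    """Iterate x -> r*x*(1-x) n times."""
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

def closed_form_r2(x0, n):
    """x_n = 1/2 - (1/2) * (1 - 2*x0)**(2**n) for the r = 2 logistic map."""
    return 0.5 - 0.5 * (1.0 - 2.0 * x0) ** (2 ** n)

# e.g. x0 = 0.1: both computations give the same orbit, which converges
# doubly exponentially to the stable fixed point 1/2.
```

Unlike the r = 4 case, this formula is numerically benign: the base (1 − 2x_0) has magnitude below one, so errors shrink rather than grow.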
For the r = 4 case, from almost all initial conditions the iterate sequence is chaotic. Nevertheless, there exist an infinite number of initial conditions that lead to cycles, and indeed there exist cycles of length k for "all" integers k > 0. We can exploit the relationship of the logistic map to the dyadic transformation (also known as the "bit-shift map") to find cycles of any length. If x follows the logistic map and y follows the "dyadic transformation" y_{n+1} = 2 y_n mod 1,
then the two are related by the homeomorphism x_n = sin^2(2π y_n).
The reason that the dyadic transformation is also called the bit-shift map is that when y is written in binary notation, the map moves the binary point one place to the right (and if the bit to the left of the binary point has become a "1", this "1" is changed to a "0"). A cycle of length 3, for example, occurs if an iterate has a 3-bit repeating sequence in its binary expansion (which is not also a one-bit repeating sequence): 001, 010, 100, 110, 101, or 011. The iterate 001001001… maps into 010010010..., which maps into 100100100..., which in turn maps into the original 001001001...; so this is a 3-cycle of the bit shift map. And the other three binary-expansion repeating sequences give the 3-cycle 110110110… → 101101101… → 011011011… → 110110110.… Either of these 3-cycles can be converted to fraction form: for example, the first-given 3-cycle can be written as 1/7 → 2/7 → 4/7 → 1/7. Using the above translation from the bit-shift map to the r = 4 logistic map gives the corresponding logistic cycle 0.611260467… → 0.950484434… → 0.188255099… → 0.611260467.… We could similarly translate the other bit-shift 3-cycle into its corresponding logistic cycle. Likewise, cycles of any length can be found in the bit-shift map and then translated into the corresponding logistic cycles.
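This translation can be verified in a few lines, using the conjugacy x = sin^2(2πy) between the bit-shift map and the r = 4 logistic map (a sketch; exact rational arithmetic keeps the bit-shift cycle exact):

```python
import math
from fractions import Fraction

def bit_shift(y):
    """Dyadic transformation: shift the binary point, y -> 2y mod 1."""
    return (2 * y) % 1

def to_logistic(y):
    """Conjugacy x = sin^2(2*pi*y) carrying bit-shift orbits to
    r = 4 logistic orbits."""
    return math.sin(2 * math.pi * float(y)) ** 2

y = Fraction(1, 7)  # binary expansion 0.001001001...
cycle = []
for _ in range(3):
    cycle.append(to_logistic(y))
    y = bit_shift(y)
# cycle ≈ [0.611260467, 0.950484434, 0.188255099], and y returns to 1/7;
# applying x -> 4x(1-x) to each element yields the next one.
```

The same loop with any fraction whose binary expansion has minimal period k produces a logistic k-cycle.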
However, since almost all numbers in [0, 1) are irrational, almost all initial conditions of the bit-shift map lead to the non-periodicity of chaos. This is one way to see that the logistic map is chaotic for almost all initial conditions.
The number of cycles of (minimal) length k for the logistic map with r = 4 (tent map with μ = 2) is a known integer sequence: 2, 1, 2, 3, 6, 9, 18, 30, 56, 99, 186, 335, 630, 1161…. This tells us that the logistic map with r = 4 has 2 fixed points, 1 cycle of length 2, 2 cycles of length 3 and so on. This sequence takes a particularly simple form for prime k: 2 ⋅ (2^(k−1) − 1)/k. For example: 2 ⋅ (2^12 − 1)/13 = 630 is the number of cycles of length 13. Since this case of the logistic map is chaotic for almost all initial conditions, all of these finite-length cycles are unstable.
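These cycle counts equal the number of aperiodic binary necklaces of length k, which can be computed for any k (not just prime) by Möbius inversion: (1/k) Σ_{d|k} μ(d) 2^(k/d). A sketch reproducing the sequence above:

```python
def mobius(n):
    """Moebius function mu(n) via trial factorization."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # squared prime factor
            result = -result
        d += 1
    if n > 1:
        result = -result  # one remaining prime factor
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def cycle_count(k):
    """Number of minimal-length-k cycles of the bit-shift map,
    i.e. aperiodic binary necklaces: (1/k) * sum_{d|k} mu(d) * 2^(k/d)."""
    return sum(mobius(d) * 2 ** (k // d) for d in divisors(k)) // k

counts = [cycle_count(k) for k in range(1, 15)]
# → [2, 1, 2, 3, 6, 9, 18, 30, 56, 99, 186, 335, 630, 1161]
```

For prime k the sum collapses to (2^k − 2)/k, which matches the closed form quoted above.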
Universality of one-dimensional maps with parabolic maxima and the Feigenbaum constants δ = 4.669201… and α = 2.502907… is clearly visible in a map proposed as a toy
model for discrete laser dynamics:
x_{n+1} = G x_n (1 − tanh(x_n)),
where x_n stands for the electric field amplitude and the laser gain G is the bifurcation parameter.
The gradual increase of the laser gain changes the dynamics from regular to chaotic, with qualitatively the same bifurcation diagram as that of the logistic map. | https://en.wikipedia.org/wiki?curid=18137
League of Nations mandate
A League of Nations mandate was a legal status for certain territories transferred from the control of one country to another following World War I, or the legal instruments that contained the internationally agreed-upon terms for administering the territory on behalf of the League of Nations. These were of the nature of both a treaty and a constitution, which contained minority rights clauses that provided for the rights of petition and adjudication by the International Court.
The mandate system was established under Article 22 of the Covenant of the League of Nations, signed on 28 June 1919. With the dissolution of the League of Nations after World War II, it was stipulated at the Yalta Conference that the remaining Mandates should be placed under the trusteeship of the United Nations, subject to future discussions and formal agreements. Most of the remaining mandates of the League of Nations (with the exception of South-West Africa) thus eventually became United Nations Trust Territories.
Two governing principles formed the core of the Mandate System, being non-annexation of the territory and its administration as a “sacred trust of civilization” to develop the territory for the benefit of its native people.
The mandate system was established by Article 22 of the Covenant of the League of Nations, drafted by the victors of World War I. The article referred to territories which after the war were no longer ruled by their previous sovereign, but their peoples were not considered "able to stand by themselves under the strenuous conditions of the modern world". The article called for such people's tutelage to be "entrusted to advanced nations who by reason of their resources, their experience or their geographical position can best undertake this responsibility".
All of the territories subject to League of Nations mandates were previously controlled by states defeated in World War I, principally Imperial Germany and the Ottoman Empire. The mandates were fundamentally different from the protectorates in that the Mandatory power undertook obligations to the inhabitants of the territory and to the League of Nations.
The process of establishing the mandates consisted of two phases:
The divestiture of Germany's overseas colonies, along with three territories disentangled from its European homeland area (the Free City of Danzig, Memel Territory, and Saar), was accomplished in the Treaty of Versailles (1919), with the territories being allotted among the Allies on 7 May of that year. Ottoman territorial claims were first addressed in the Treaty of Sèvres (1920) and finalized in the Treaty of Lausanne (1923). The Turkish territories were allotted among the Allied Powers at the San Remo conference in 1920.
The League of Nations decided the exact level of control by the Mandatory power over each mandate on an individual basis. However, in every case the Mandatory power was forbidden to construct fortifications or raise an army within the territory of the mandate, and was required to present an annual report on the territory to the Permanent Mandates Commission of the League of Nations.
The mandates were divided into three distinct groups based upon the level of development each population had achieved at that time.
The first group, or "Class A mandates", were territories formerly controlled by the Ottoman Empire that were deemed to "... have reached a stage of development where their existence as independent nations can be provisionally recognized subject to the rendering of administrative advice and assistance by a Mandatory until such time as they are able to stand alone. The wishes of these communities must be a principal consideration in the selection of the Mandatory."
The second group of mandates, or "Class B mandates", were all former German territories in West and Central Africa which were deemed to require a greater level of control by the mandatory power: "...the Mandatory must be responsible for the administration of the territory under conditions which will guarantee freedom of conscience and religion." The mandatory power was forbidden to construct military or naval bases within the mandates.
The "Class C mandates", including South West Africa and certain of the South Pacific Islands, were considered to be "best administered under the laws of the Mandatory as integral portions of its territory"
According to the Council of the League of Nations, meeting of August 1920: "draft mandates adopted by the Allied and Associated Powers would not be definitive until they had been considered and approved by the League ... the legal title held by the mandatory Power must be a double one: one conferred by the Principal Powers and the other conferred by the League of Nations,"
Three steps were required to establish a Mandate under international law:
(1) The Principal Allied and Associated Powers confer a mandate on one of their number or on a third power; (2) the principal powers officially notify the council of the League of Nations that a certain power has been appointed mandatory for such a certain defined territory; and (3) the council of the League of Nations takes official cognisance of the appointment of the mandatory power and informs the latter that it [the council] considers it as invested with the mandate, and at the same time notifies it of the terms of the mandate, after ascertaining whether they are in conformance with the provisions of the covenant."
The U.S. State Department "Digest of International Law" says that the terms of the Treaty of Lausanne provided for the application of the principles of state succession to the "A" Mandates. The Treaty of Versailles (1920) provisionally recognized the former Ottoman communities as independent nations. It also required Germany to recognize the disposition of the former Ottoman territories and to recognize the new states laid down within their boundaries. The terms of the Treaty of Lausanne (1923) required the newly created states that acquired the territory detached from the Ottoman Empire to pay annuities on the Ottoman public debt and to assume responsibility for the administration of concessions that had been granted by the Ottomans. The treaty also let the States acquire, without payment, all the property and possessions of the Ottoman Empire situated within their territory. The treaty provided that the League of Nations was responsible for establishing an arbitral court to resolve disputes that might arise and stipulated that its decisions were final.
A disagreement regarding the legal status and the portion of the annuities to be paid by the "A" mandates was settled when an Arbitrator ruled that some of the mandates contained more than one State: The difficulty arises here how one is to regard the Asiatic countries under the British and French mandates. Iraq is a Kingdom in regard to which Great Britain has undertaken responsibilities equivalent to those of a Mandatory Power. Under the British mandate, Palestine and Transjordan have each an entirely separate organization. We are, therefore, in the presence of three States sufficiently separate to be considered as distinct Parties. France has received a single mandate from the Council of the League of Nations, but in the countries subject to that mandate, one can distinguish two distinct States: Syria and the Lebanon, each State possessing its own constitution and a nationality clearly different from the other.
After the United Nations was founded in 1945 and the League of Nations was disbanded, all but one of the mandated territories that remained under the control of the mandatory power became United Nations trust territories, a roughly equivalent status. In each case, the colonial power that held the mandate on each territory became the administering power of the trusteeship, except that Japan, which had been defeated in World War II, lost its mandate over the South Pacific islands, which became a "strategic trust territory" known as the Trust Territory of the Pacific Islands under United States administration.
The sole exception to the transformation of the League of Nations mandates into UN trusteeships was that South Africa refused to place South-West Africa under trusteeship. Instead, South Africa proposed that it be allowed to annex South-West Africa, a proposal rejected by the United Nations General Assembly. The International Court of Justice held that South Africa continued to have international obligations under the mandate for South-West Africa. The territory finally attained independence in 1990 as Namibia, after a long guerrilla war of independence against the apartheid regime.
Nearly all the former League of Nations mandates had become sovereign states by 1990, including all of the former United Nations Trust Territories with the exception of a few successor entities of the gradually dismembered Trust Territory of the Pacific Islands (formerly Japan's South Pacific Trust Mandate). These exceptions include the Northern Mariana Islands, which is a commonwealth in political union with the United States with the status of an unincorporated organized territory. The Northern Mariana Islands elects its own governor to serve as territorial head of government, but it remains a U.S. territory: its head of state is the President of the United States, and federal funds to the Commonwealth are administered by the Office of Insular Affairs of the United States Department of the Interior.
Remnant Micronesia and the Marshall Islands, the heirs of the last territories of the Trust, attained final independence on 22 December 1990. (The UN Security Council ratified termination of trusteeship, effectively dissolving trusteeship status, on 10 July 1987.) The Republic of Palau, split off from the Federated States of Micronesia, became the last to gain independence, effective 1 October 1994. | https://en.wikipedia.org/wiki?curid=18139
Lincoln, New Hampshire
Lincoln is a town in Grafton County, New Hampshire, United States. It is the second-largest town by area in New Hampshire. The population was 1,662 at the 2010 census. The town is home to the New Hampshire Highland Games and to a portion of Franconia Notch State Park. Set in the White Mountains, large portions of the town are within the White Mountain National Forest. The Appalachian Trail crosses in the northeast. Lincoln is the location of the Loon Mountain ski resort and associated recreation-centered development.
The primary settlement in town, where 993 people resided at the 2010 census, is defined as the Lincoln census-designated place (CDP) and is located along New Hampshire Route 112 east of Interstate 93. The town also includes the former village sites of Stillwater and Zealand (sometimes known as Pullman) in the town's remote eastern and northern sections respectively, which are now within the White Mountain National Forest.
In 1764, Colonial Governor Benning Wentworth granted to a group of approximately 70 land investors from Connecticut. Lincoln was named after Henry Fiennes Pelham-Clinton, 2nd Duke of Newcastle, 9th Earl of Lincoln – a cousin of the Wentworth governors. He held the position of comptroller of customs for the port of London under George II and George III, which was important to trade between America and England.
The town was settled about 1782. The 1790 census indicates that it had 22 inhabitants. Rocky soil yielded poor farming, but the area's abundant timber, combined with water power to run sawmills on the Pemigewasset River and its East Branch, helped Lincoln develop into a center for logging. By 1853, the Merrimack River Lumber Company was operating. The railroad transported freight, and increasingly brought tourists to the beautiful mountain region. In 1892, James E. Henry bought approximately of virgin timber and established a logging enterprise at what is today the center of Lincoln. In 1902, he built a pulp and paper mill. He erected the Lincoln House hotel in 1903, although a 1907 fire would nearly raze the community. Until he died in 1912, Henry controlled his company town, installing relatives in positions of civic authority.
In 1917, Henry's heirs sold the business to the Parker Young Company, which in turn sold it to the Marcalus Manufacturing Company in 1946. Franconia Paper took over in 1950, producing 150 tons of paper a day until bankruptcy in 1971, at which time new river classification standards discouraged further papermaking in Lincoln.
Tourism is today the principal business. Nearby Loon Mountain has long drawn skiers, and in recent years has attempted to convert itself into a four-season attraction. The Flume is one of the most visited attractions in the state. Discovered in 1808, it is a natural canyon extending at the base of Mount Liberty. Walls of Conway granite rise to a height of and are only apart.
According to the United States Census Bureau, the town has a total area of , of which is land and is water, comprising 0.43% of the town. It is the second-largest town in area in New Hampshire, after Pittsburg.
Lincoln is drained by the Pemigewasset River and its East Branch. Lincoln lies almost fully within the Merrimack River watershed, with the western edge of town in the Connecticut River watershed. Kancamagus Pass, elevation , is on the Kancamagus Highway at the eastern boundary. The highest point in Lincoln is either the summit of Mount Carrigain, at above sea level, plus or minus , or the summit of Mount Bond at .
As of the census of 2010, there were 1,662 people, 794 households, and 439 families residing in the town. There were 2,988 housing units, of which 2,194, or 73.4%, were vacant. 2,083 of the vacant units were for seasonal or recreational use. The racial makeup of the town was 96.9% white, 0.3% African American, 0.1% Native American, 1.7% Asian, 0.0% Native Hawaiian or Pacific Islander, 0.3% some other race, and 0.6% from two or more races. 1.7% of the population were Hispanic or Latino of any race.
Of the 794 households, 21.5% had children under the age of 18 living with them, 43.1% were headed by married couples living together, 7.8% had a female householder with no husband present, and 44.7% were non-families. 37.0% of all households were made up of individuals, and 13.4% were someone living alone who was 65 years of age or older. The average household size was 2.09, and the average family size was 2.75.
In the town, 18.7% of the population were under the age of 18, 6.8% were from 18 to 24, 19.4% from 25 to 44, 34.8% from 45 to 64, and 20.4% were 65 years of age or older. The median age was 48.5 years. For every 100 females, there were 105.2 males. For every 100 females age 18 and over, there were 103.3 males.
For the period 2011-2015, the estimated median annual income for a household was $37,095, and the median income for a family was $55,326. Male full-time workers had a median income of $31,106 versus $27,381 for females. The per capita income for the town was $24,109. 21.0% of the population and 9.1% of families were below the poverty line. 20.2% of the population under the age of 18 and 8.6% of those 65 or older were living in poverty. | https://en.wikipedia.org/wiki?curid=18143 |
List of laser applications
Many scientific, military, medical and commercial laser applications have been developed since the invention of the laser in 1958. The coherence, high monochromaticity, and ability to reach extremely high powers are all properties which allow for these specialized applications.
In science, lasers are used in many ways, including:
Lasers may also be indirectly used in spectroscopy as a micro-sampling system, a technique termed Laser ablation (LA), which is typically applied to ICP-MS apparatus resulting in the powerful LA-ICP-MS.
The principles of laser spectroscopy are discussed by Demtröder.
Most types of laser are an inherently pure source of light; they emit near-monochromatic light with a very well defined range of wavelengths. By careful design of the laser components, the purity of the laser light (measured as the "linewidth") can be improved more than the purity of any other light source. This makes the laser a very useful source for spectroscopy. The high intensity of light that can be achieved in a small, well collimated beam can also be used to induce a nonlinear optical effect in a sample, which makes techniques such as Raman spectroscopy possible. Other spectroscopic techniques based on lasers can be used to make extremely sensitive detectors of various molecules, able to measure molecular concentrations at the parts per 10^12 (ppt) level. Due to the high power densities achievable by lasers, beam-induced atomic emission is possible: this technique is termed laser-induced breakdown spectroscopy (LIBS).
Heat treating with lasers allows selective surface hardening against wear with little or no distortion of the component. Because this eliminates much part reworking that is currently done, the laser system's capital cost is recovered in a short time. An inert, absorbent coating for laser heat treatment has also been developed that eliminates the fumes generated by conventional paint coatings during the heat-treating process with CO2 laser beams.
One consideration crucial to the success of a heat treatment operation is control of the laser beam irradiance on the part surface. The optimal irradiance distribution is driven by the thermodynamics of the laser-material interaction and by the part geometry.
Typically, irradiances between 500-5000 W/cm^2 satisfy the thermodynamic constraints and allow the rapid surface heating and minimal total heat input required. For general heat treatment, a uniform square or rectangular beam is one of the best options. For some special applications or applications where the heat treatment is done on an edge or corner of the part, it may be better to have the irradiance decrease near the edge to prevent melting.
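As a back-of-the-envelope illustration, the mean irradiance of a uniform rectangular spot is simply P/(w·h). The function name, beam power, and spot dimensions below are hypothetical examples, not figures from the source:

```python
def irradiance_w_per_cm2(power_w, width_cm, height_cm):
    """Mean irradiance of a uniform rectangular laser spot: P / (w * h)."""
    return power_w / (width_cm * height_cm)

# e.g. a 2 kW beam shaped into a 1.0 cm x 0.4 cm rectangle:
spot = irradiance_w_per_cm2(2000.0, 1.0, 0.4)
within_window = 500 <= spot <= 5000  # the 500-5000 W/cm^2 regime above
```

Such a calculation is only a first check; a real process model must also account for dwell time, absorptivity, and conduction into the part.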
Research shows that scientists may one day be able to induce rain and lightning storms (as well as micro-manipulating some other weather phenomena) using high energy lasers. Such a breakthrough could potentially eradicate droughts, help alleviate weather related catastrophes, and allocate weather resources to areas in need.
When the Apollo astronauts visited the moon, they planted retroreflector arrays to make possible the Lunar Laser Ranging Experiment. Laser beams are focused through large telescopes on Earth aimed toward the arrays, and the time taken for the beam to be reflected back to Earth measured to determine the distance between the Earth and Moon with high accuracy.
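The distance computation itself is elementary: half the measured round-trip time multiplied by the speed of light. A sketch (the 2.56 s figure is an approximate, illustrative round-trip time, not a quoted measurement):

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_distance_km(round_trip_s):
    """Distance from a laser pulse's measured round-trip time: d = c*t/2."""
    return C_KM_PER_S * round_trip_s / 2.0

# A round trip of about 2.56 s corresponds to roughly the mean
# Earth-Moon distance (~384,000 km):
d = one_way_distance_km(2.56)
```

The actual experiment achieves its precision not from this formula but from picosecond-level timing of the returned photons.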
Some laser systems, through the process of mode locking, can produce extremely brief pulses of light - as short as picoseconds or femtoseconds (10^−12 to 10^−15 seconds). Such pulses can be used to initiate and analyze chemical reactions, a technique known as "photochemistry". The short pulses can be used to probe the process of the reaction at a very high temporal resolution, allowing the detection of short-lived intermediate molecules. This method is particularly useful in biochemistry, where it is used to analyse details of protein folding and function.
Laser barcode scanners are ideal for applications that require high speed reading of linear codes or stacked symbols.
A technique that has recent success is "laser cooling". This involves atom trapping, a method where a number of atoms are confined in a specially shaped arrangement of electric and magnetic fields. Shining particular wavelengths of light at the ions or atoms slows them down, thus "cooling" them. As this process is continued, they all are slowed and have the same energy level, forming an unusual arrangement of matter known as a Bose–Einstein condensate.
Some of the world's most powerful and complex arrangements of multiple lasers and optical amplifiers are used to produce extremely high intensity pulses of light of extremely short duration, e.g. at the Laboratory for Laser Energetics, National Ignition Facility, GEKKO XII, Nike laser, Laser Mégajoule, and HiPER. These pulses are arranged such that they impact pellets of tritium–deuterium simultaneously from all directions, hoping that the squeezing effect of the impacts will induce atomic fusion in the pellets. This technique, known as "inertial confinement fusion", so far has not been able to achieve "breakeven", that is, so far the fusion reaction generates less power than is used to power the lasers, but research continues.
Confocal laser scanning microscopy and Two-photon excitation microscopy make use of lasers to obtain blur-free images of thick specimens at various depths. Laser capture microdissection use lasers to procure specific cell populations from a tissue section under microscopic visualization.
Additional laser microscopy techniques include harmonic microscopy, four-wave mixing microscopy and interferometric microscopy.
Military uses of lasers include applications such as target designation and ranging, defensive countermeasures, communications and directed energy weapons.
A laser weapon is a directed-energy weapon based on lasers.
Defensive countermeasure applications can range from compact, low power infrared countermeasures to high power, airborne laser systems. IR countermeasure systems use lasers to confuse the seeker heads on infrared homing missiles.
Some weapons simply use a laser to disorient a person. One such weapon is the Thales Green Laser Optical Warner.
Laser guidance is a technique of guiding a missile or other projectile or vehicle to a target by means of a laser beam.
Another military use of lasers is as a "laser target designator". This is a low-power laser pointer used to indicate a target for a precision-guided munition, typically launched from an aircraft. The guided munition adjusts its flight-path to home in to the laser light reflected by the target, enabling a great precision in aiming. The beam of the laser target designator is set to a pulse rate that matches that set on the guided munition to ensure munitions strike their designated targets and do not follow other laser beams which may be in use in the area. The laser designator can be shone onto the target by an aircraft or nearby infantry. Lasers used for this purpose are usually infrared lasers, so the enemy cannot easily detect the guiding laser light.
The laser has in most firearms applications been used as a tool to enhance the targeting of other weapon systems. For example, a "laser sight" is a small, usually visible-light laser placed on a handgun or a rifle and aligned to emit a beam parallel to the barrel. Since a laser beam has low divergence, the laser light appears as a small spot even at long distances; the user places the spot on the desired target and the barrel of the gun is aligned (though not necessarily compensating for bullet drop, windage, the offset between the beam and the axis of the barrel, or target movement while the bullet travels).
Most laser sights use a red laser diode. Others use an infrared diode to produce a dot invisible to the naked human eye but detectable with night vision devices. The firearms adaptive target acquisition module LLM01 laser light module combines visible and infrared laser diodes. In the late 1990s, green diode pumped solid state laser (DPSS) laser sights (532 nm) became available. Modern laser sights are small and light enough for attachment to firearms.
A non-lethal laser weapon was developed by the U.S. Air Force to temporarily impair an adversary's ability to fire a weapon or to otherwise threaten enemy forces. This unit illuminates an opponent with harmless low-power laser light and can have the effect of dazzling or disorienting the subject or causing them to flee. Several types of dazzlers are now available, and some have been used in combat.
There remains the possibility of using lasers to blind, since this requires much lower power levels and is easily achievable in a man-portable unit. However, most nations regard the deliberate permanent blinding of the enemy as forbidden by the rules of war (see Protocol on Blinding Laser Weapons). Although several nations have developed blinding laser weapons, such as China's ZM-87, none of these are believed to have made it past the prototype stage.
In addition to the applications that cross over with military applications, a widely known law enforcement use of lasers is for lidar to measure the speed of vehicles.
A holographic weapon sight uses a laser diode to illuminate a hologram of a reticle built into a flat glass optical window of the sight. The user looks through the optical window and sees a cross hair reticle image superimposed at a distance on the field of view.
Industrial laser applications can be divided into two categories depending on the power of the laser: material processing and micro-material processing.
In material processing, lasers with average optical power above 1 kilowatt are used mainly for industrial materials processing applications. Beyond this power threshold there are thermal issues related to the optics that separate these lasers from their lower-power counterparts. Laser systems in the 50–300 W range are used primarily for pumping, plastic welding and soldering applications. Lasers above 300 W are used in brazing, thin metal welding, and sheet metal cutting applications. The required brightness (as measured by the beam parameter product) is higher for cutting applications than for brazing and thin metal welding. High power applications, such as hardening, cladding, and deep penetrating welding, require multiple kilowatts of optical power, and are used in a broad range of industrial processes.
Micro-material processing is a category that includes all laser material processing applications under 1 kilowatt. The use of lasers in micro-material processing has found broad application in the development and manufacturing of screens for smartphones, tablet computers, and LED TVs.
A detailed list of industrial and commercial laser applications includes:
In surveying and construction, the laser level is affixed to a tripod, leveled and then spun to illuminate a horizontal plane. The laser beam projector employs a rotating head with a mirror for sweeping the laser beam about a vertical axis. If the mirror is not self-leveling, it is provided with visually readable level vials and manually adjustable screws for orienting the projector. A staff carried by the operator is equipped with a movable sensor, which can detect the laser beam and gives a signal when the sensor is in line with the beam (usually an audible beep). The position of the sensor on the graduated staff allows comparison of elevations between different points on the terrain.
A tower-mounted laser level is used in combination with a sensor on a wheel tractor-scraper in the process of land laser leveling to bring land (for example, an agricultural field) to near-flatness with a slight grade for drainage. The laser line level was invented in 1996 by Steve J. Orosz, Jr.[1] This type of level does not require a heavy motor to create the illusion of a line from a dot; rather, it uses a lens to transform the dot into a line.
Laser beams are used to disperse birds from agricultural land, industrial sites, rooftops and airport runways. Birds tend to perceive the laser beam as a physical stick, and moving the beam towards them scares them into flight. Manually operated laser torches and automated robots that sweep the beam are commercially available.
Laser construction
A laser is constructed from three principal parts:
The "pump source" is the part that provides energy to the laser system. Examples of pump sources include electrical discharges, flashlamps, arc lamps, light from another laser, chemical reactions and even explosive devices. The type of pump source used principally depends on the "gain medium", and this also determines how the energy is transmitted to the medium. A helium–neon (HeNe) laser uses an electrical discharge in the helium-neon gas mixture, a Nd:YAG laser uses either light focused from a xenon flash lamp or diode lasers, and excimer lasers use a chemical reaction.
The "gain medium" is the major determining factor of the wavelength of operation, and other properties, of the laser. "Gain media" in different materials have linear spectra or wide spectra. "Gain media" with wide spectra allow tuning of the laser frequency. There are hundreds if not thousands of different gain media in which laser operation has been achieved (see list of laser types for a list of the most important ones). The gain medium is excited by the pump source to produce a population inversion, and it is in the gain medium where spontaneous and stimulated emission of photons takes place, leading to the phenomenon of optical gain, or amplification.
Examples of different gain media include:
The "optical resonator", or "optical cavity", in its simplest form is two parallel mirrors placed around the gain medium, which provide feedback of the light. The mirrors are given optical coatings which determine their reflective properties. Typically, one will be a high reflector, and the other will be a partial reflector. The latter is called the output coupler, because it allows some of the light to leave the cavity to produce the laser's output beam.
Light from the medium, produced by spontaneous emission, is reflected by the mirrors back into the medium, where it may be amplified by stimulated emission. The light may reflect from the mirrors and thus pass through the gain medium many hundreds of times before exiting the cavity. In more complex lasers, configurations with four or more mirrors forming the cavity are used. The design and alignment of the mirrors with respect to the medium is crucial for determining the exact operating wavelength and other attributes of the laser system.
Other optical devices, such as spinning mirrors, modulators, filters, and absorbers, may be placed within the optical resonator to produce a variety of effects on the laser output, such as altering the wavelength of operation or the production of pulses of laser light.
Some lasers do not use an optical cavity, but instead rely on very high optical gain to produce significant amplified spontaneous emission (ASE) without needing feedback of the light back into the gain medium. Such lasers are said to be superluminescent, and emit light with low coherence but high bandwidth. Since they do not use optical feedback, these devices are often not categorized as lasers.
Logical conjunction
In logic, mathematics and linguistics, And (∧) is the truth-functional operator of logical conjunction; the "and" of a set of operands is true if and only if "all" of its operands are true. The logical connective that represents this operator is typically written as ∧ or ⋅.
A ∧ B is true if and only if A is true and B is true.
An operand of a conjunction is a conjunct.
The term "logical conjunction" is also used for the greatest lower bound in lattice theory.
Related concepts in other fields are:
And is usually denoted by an infix operator: in mathematics and logic, it is denoted by "∧", "·", or "&"; in electronics, "AND"; and in programming languages, "&&", "&", or "and". In Jan Łukasiewicz's prefix notation for logic, the operator is K, for Polish "koniunkcja".
Logical conjunction is an operation on two logical values, typically the values of two propositions, that produces a value of "true" if and only if both of its operands are true.
The conjunctive identity is true, which is to say that AND-ing an expression with true will never change the value of the expression. In keeping with the concept of vacuous truth, when conjunction is defined as an operator or function of arbitrary arity, the empty conjunction (AND-ing over an empty set of operands) is often defined as having the result true.
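Python's built-in `all` follows exactly this convention: it computes a conjunction of arbitrary arity, and the empty conjunction comes out true.

```python
# all() computes the conjunction of an arbitrary number of operands.
assert all([True, True, True]) is True
assert all([True, False, True]) is False

# The empty conjunction is vacuously true.
assert all([]) is True
```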
The truth table of A ∧ B:

  A | B | A ∧ B
  T | T |   T
  T | F |   F
  F | T |   F
  F | F |   F
In systems where logical conjunction is not a primitive, it may be defined as
or
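One classical reduction expresses conjunction through negation and disjunction via De Morgan's laws; a brute-force Python check over both truth values confirms the equivalence (a sketch of one common definition, not necessarily the exact formulas intended above):

```python
def and_via_de_morgan(a: bool, b: bool) -> bool:
    # A and B  ==  not ((not A) or (not B))   (De Morgan's law)
    return not ((not a) or (not b))

# Verify the definition agrees with the primitive conjunction on all inputs.
for a in (False, True):
    for b in (False, True):
        assert and_via_de_morgan(a, b) == (a and b)
```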
As a rule of inference, conjunction introduction is a classically valid, simple argument form. The argument form has two premises, "A" and "B". Intuitively, it permits the inference of their conjunction.
or in logical operator notation:
Here is an example of an argument that fits the form "conjunction introduction":
Conjunction elimination is another classically valid, simple argument form. Intuitively, it permits the inference from any conjunction of either element of that conjunction.
...or alternatively,
In logical operator notation:
...or alternatively,
A conjunction formula_14 can be proven false by establishing either formula_15 or formula_16.
In terms of the object language, this reads
This formula can be seen as a special case of
when formula_19 is a false proposition.
If formula_2 implies formula_16, then both formula_15 and formula_2 prove the conjunction false:
In other words, a conjunction can actually be proven false just by knowing about the relation of its conjuncts, and not necessarily about their truth values.
This formula can be seen as a special case of
when formula_19 is a false proposition.
Either of the above are constructively valid proofs by contradiction.
commutativity: yes
associativity: yes
distributivity: with various operations, especially with "or"
idempotency: yes
monotonicity: yes
truth-preserving: yes
When all inputs are true, the output is true.
falsehood-preserving: yes
When all inputs are false, the output is false.
Walsh spectrum: (1,-1,-1,1)
Nonlinearity: 1 (the function is bent)
If using binary values for true (1) and false (0), then "logical conjunction" works exactly like normal arithmetic multiplication.
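This identity is easy to verify exhaustively; a one-loop Python sketch:

```python
# With 1 for true and 0 for false, conjunction is ordinary multiplication
# (and, equivalently, the minimum of the two values).
for a in (0, 1):
    for b in (0, 1):
        assert a * b == (a and b) == min(a, b)
```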
In high-level computer programming and digital electronics, logical conjunction is commonly represented by an infix operator, usually as a keyword such as "AND", an algebraic multiplication, or the ampersand symbol & (sometimes doubled as in &&). Many languages also provide short-circuit control structures corresponding to logical conjunction.
Logical conjunction is often used for bitwise operations, where 0 corresponds to false and 1 to true:
The operation can also be applied to two binary words viewed as bitstrings of equal length, by taking the bitwise AND of each pair of bits at corresponding positions. For example:
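In Python, the bitwise AND operator `&` performs this pairwise combination directly (values chosen for illustration):

```python
a = 0b0110
b = 0b0011

# Bitwise AND combines each pair of bits at corresponding positions.
assert a & b == 0b0010

# The same result, viewed as equal-length bitstrings:
assert format(a & b, "04b") == "0010"
```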
This can be used to select part of a bitstring using a bit mask. For example, codice_19 = codice_20 extracts the fifth bit of an 8-bit bitstring.
In computer networking, bit masks are used to derive the network address of a subnet within an existing network from a given IP address, by ANDing the IP address and the subnet mask.
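For instance, using Python's standard `ipaddress` module (the addresses here are illustrative):

```python
import ipaddress

ip = ipaddress.IPv4Address("192.168.5.130")    # example host address
mask = ipaddress.IPv4Address("255.255.255.0")  # example /24 subnet mask

# The network address is the bitwise AND of the IP address and the mask.
network = ipaddress.IPv4Address(int(ip) & int(mask))
assert str(network) == "192.168.5.0"
```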
Logical conjunction "AND" is also used in SQL operations to form database queries.
The Curry–Howard correspondence relates logical conjunction to product types.
The membership of an element of an intersection set in set theory is defined in terms of a logical conjunction: "x" ∈ "A" ∩ "B" if and only if ("x" ∈ "A") ∧ ("x" ∈ "B"). Through this correspondence, set-theoretic intersection shares several properties with logical conjunction, such as associativity, commutativity, and idempotence.
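The correspondence is direct in code; a small Python sketch (the sets are chosen arbitrarily):

```python
A = {1, 2, 3}
B = {2, 3, 4}

# x ∈ A ∩ B  iff  (x ∈ A) ∧ (x ∈ B)
intersection = {x for x in A | B if (x in A) and (x in B)}
assert intersection == (A & B) == {2, 3}
```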
As with other notions formalized in mathematical logic, the logical conjunction "and" is related to, but not the same as, the grammatical conjunction "and" in natural languages.
English "and" has properties not captured by logical conjunction. For example, "and" sometimes implies order, having the sense of "then": in common discourse, "They got married and had a child" means that the marriage came before the child.
The word "and" can also imply a partition of a thing into parts, as "The American flag is red, white, and blue." Here it is not meant that the flag is "at once" red, white, and blue, but rather that it has a part of each color.
Logical connective
In logic, a logical connective (also called a logical operator, sentential connective, or sentential operator) is a symbol or word used to connect two or more sentences (of either a formal or a natural language) in a grammatically valid way, such that the value of the compound sentence produced depends only on that of the original sentences and on the meaning of the connective.
The most common logical connectives are binary connectives (also called dyadic connectives) which join two sentences which can be thought of as the function's operands. Also commonly, negation is considered to be a unary connective.
Logical connectives along with quantifiers are the two main types of logical constants used in formal systems such as propositional logic and predicate logic. Semantics of a logical connective is often, but not always, presented as a truth function.
A logical connective is similar to but not equivalent to a conditional operator.
In the grammar of natural languages two sentences may be joined by a grammatical conjunction to form a "grammatically" compound sentence. Some but not all such grammatical conjunctions are truth functional. For example, consider the following sentences:
The words "and" and "so" are "grammatical" conjunctions joining the sentences (A) and (B) to form the compound sentences (C) and (D). The "and" in (C) is a "logical" connective, since the truth of (C) is completely determined by (A) and (B): it would make no sense to affirm (A) and (B) but deny (C). However, "so" in (D) is not a logical connective, since it would be quite reasonable to affirm (A) and (B) but deny (D): perhaps, after all, Jill went up the hill to fetch a pail of water, not because Jack had gone up the hill at all.
Various English words and word pairs express logical connectives, and some of them are synonymous. Examples are:
In formal languages, truth functions are represented by unambiguous symbols. These symbols are called "logical connectives", "logical operators", "propositional operators", or, in classical logic, "truth-functional connectives". See well-formed formula for the rules which allow new well-formed formulas to be constructed by joining other well-formed formulas using truth-functional connectives.
Logical connectives can be used to link more than two statements, so one can speak about "-ary logical connective".
Commonly used logical connectives include
Alternative names for biconditional are "iff", "xnor", and "bi-implication".
For example, the meaning of the statements "it is raining" and "I am indoors" is transformed when the two are combined with logical connectives. For statement "P" = "It is raining" and "Q" = "I am indoors":
It is also common to consider the "always true" formula and the "always false" formula to be connectives:
Historically, some authors used letters for connectives: u. for conjunction (German "und", "and") and o. for disjunction (German "oder", "or") in earlier works by Hilbert (1904); Np for negation, Kpq for conjunction, Dpq for alternative denial, Apq for disjunction, Xpq for joint denial, Cpq for implication, Epq for biconditional in Łukasiewicz (1929); cf. Polish notation.
Such a logical connective as converse implication "←" is actually the same as material conditional with swapped arguments; thus, the symbol for converse implication is redundant. In some logical calculi (notably, in classical logic) certain essentially different compound statements are logically equivalent. A less trivial example of a redundancy is the classical equivalence between ¬"p" ∨ "q" and "p" → "q". Therefore, a classical-based logical system does not need the conditional operator "→" if "¬" (not) and "∨" (or) are already in use, or may use the "→" only as syntactic sugar for a compound having one negation and one disjunction.
There are sixteen Boolean functions associating the input truth values "p" and "q" with four-digit binary outputs. These correspond to possible choices of binary logical connectives for classical logic. Different implementations of classical logic can choose different functionally complete subsets of connectives.
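Since each binary connective is fixed by its output column over the four input rows, the sixteen functions can be enumerated mechanically; a Python sketch:

```python
from itertools import product

rows = list(product((False, True), repeat=2))  # the four (p, q) input pairs

# A binary connective is determined by its output column over those rows,
# so there are 2**4 = 16 of them.
all_connectives = list(product((False, True), repeat=4))
assert len(all_connectives) == 16

# Conjunction's column, with rows ordered (F,F), (F,T), (T,F), (T,T):
and_column = tuple(p and q for p, q in rows)
assert and_column == (False, False, False, True)
```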
One approach is to choose a "minimal" set, and define other connectives by some logical form, as in the example with the material conditional above.
The following are the minimal functionally complete sets of operators in classical logic whose arities do not exceed 2:
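As an illustration of functional completeness, NAND alone (one of the minimal one-element sets) suffices to define the other connectives. The particular definitions below are one standard choice, sketched and exhaustively verified in Python:

```python
def nand(p: bool, q: bool) -> bool:
    """Sheffer stroke: a single connective that is functionally complete."""
    return not (p and q)

def neg(p):
    return nand(p, p)

def conj(p, q):
    return neg(nand(p, q))

def disj(p, q):
    return nand(neg(p), neg(q))

def implies(p, q):
    return nand(p, neg(q))

# Exhaustive check over both truth values for each argument.
for p in (False, True):
    for q in (False, True):
        assert neg(p) == (not p)
        assert conj(p, q) == (p and q)
        assert disj(p, q) == (p or q)
        assert implies(p, q) == ((not p) or q)
```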
Another approach is to use, on equal footing, the connectives of a certain convenient and functionally complete, but "not minimal", set. This approach requires more propositional axioms, and each equivalence between logical forms must be either an axiom or provable as a theorem.
The situation, however, is more complicated in intuitionistic logic. Of its five connectives, {∧, ∨, →, ¬, ⊥}, only negation "¬" can be reduced to other connectives (see details). Neither conjunction, disjunction, nor material conditional has an equivalent form constructed of the other four logical connectives.
Some logical connectives possess properties which may be expressed in the theorems containing the connective. Some of those properties that a logical connective may have are:
For classical and intuitionistic logic, the "=" symbol means that corresponding implications "…→…" and "…←…" for logical compounds can be both proved as theorems, and the "≤" symbol means that "…→…" for logical compounds is a consequence of corresponding "…→…" connectives for propositional variables. Some many-valued logics may have incompatible definitions of equivalence and order (entailment).
Both conjunction and disjunction are associative, commutative and idempotent in classical logic, most varieties of many-valued logic and intuitionistic logic. The same is true about distributivity of conjunction over disjunction and disjunction over conjunction, as well as for the absorption law.
In classical logic and some varieties of many-valued logic, conjunction and disjunction are dual, and negation is self-dual; negation is also self-dual in intuitionistic logic.
As a way of reducing the number of necessary parentheses, one may introduce precedence rules: ¬ has higher precedence than ∧, ∧ higher than ∨, and ∨ higher than →. So for example, formula_36 is short for formula_37.
Here is a table that shows a commonly used precedence of logical operators.
However, not all compilers use the same order; for instance, an ordering in which disjunction is lower precedence than implication or bi-implication has also been used. Sometimes the precedence between conjunction and disjunction is left unspecified, and it must then be given explicitly in a formula with parentheses. The order of precedence determines which connective is the "main connective" when interpreting a non-atomic formula.
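Many programming languages hard-code one such ordering; Python, for instance, gives `not` higher precedence than `and`, and `and` higher than `or`, mirroring the ¬ > ∧ > ∨ convention (a small illustration):

```python
a, b, c = False, True, False

# `not` binds tightest, then `and`, then `or` — mirroring ¬ > ∧ > ∨.
assert (not a or b and c) == ((not a) or (b and c))

# Explicit parentheses override the default grouping.
assert (not (a or b)) != ((not a) or b)
```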
A truth-functional approach to logical operators is implemented as logic gates in digital circuits. Practically all digital circuits (the major exception is DRAM) are built up from NAND, NOR, NOT, and transmission gates; see more details in Truth function in computer science. Logical operators over bit vectors (corresponding to finite Boolean algebras) are bitwise operations.
But not every usage of a logical connective in computer programming has a Boolean semantic. For example, lazy evaluation is sometimes implemented for and , so these connectives are not commutative if either or both of the expressions , have side effects. Also, a conditional, which in some sense corresponds to the material conditional connective, is essentially non-Boolean because for codice_1 the consequent Q is not executed if the antecedent P is false (although a compound as a whole is successful ≈ "true" in such case). This is closer to intuitionist and constructivist views on the material conditional than to those of classical logic.
Propositional calculus
Propositional calculus is a branch of logic. It is also called propositional logic, statement logic, sentential calculus, sentential logic, or sometimes zeroth-order logic. It deals with propositions (which can be true or false) and argument flow. Compound propositions are formed by connecting propositions by logical connectives. The propositions without logical connectives are called atomic propositions.
Unlike first-order logic, propositional logic does not deal with non-logical objects, predicates about them, or quantifiers. However, all the machinery of propositional logic is included in first-order logic and higher-order logics. In this sense, propositional logic is the foundation of first-order logic and higher-order logic.
Logical connectives are found in natural languages. In English, for example, they include "and" (conjunction), "or" (disjunction), "not" (negation) and "if" (though only when used to denote material conditional).
The following is an example of a very simple inference within the scope of propositional logic:
Both premises and the conclusion are propositions. The premises are taken for granted and then with the application of modus ponens (an inference rule) the conclusion follows.
As propositional logic is not concerned with the structure of propositions beyond the point where they cannot be decomposed further by logical connectives, this inference can be restated by replacing those "atomic" statements with statement letters, which are interpreted as variables representing statements:
The same can be stated succinctly in the following way:
When is interpreted as "It's raining" and as "it's cloudy" the above symbolic expressions can be seen to exactly correspond with the original expression in natural language. Not only that, but they will also correspond with any other inference of this "form", which will be valid on the same basis that this inference is.
Propositional logic may be studied through a formal system in which formulas of a formal language may be interpreted to represent propositions. A system of axioms and inference rules allows certain formulas to be derived. These derived formulas are called theorems and may be interpreted to be true propositions. A constructed sequence of such formulas is known as a "derivation" or "proof" and the last formula of the sequence is the theorem. The derivation may be interpreted as proof of the proposition represented by the theorem.
When a formal system is used to represent formal logic, only statement letters are represented directly. The natural language propositions that arise when they're interpreted are outside the scope of the system, and the relation between the formal system and its interpretation is likewise outside the formal system itself.
In classical truth-functional propositional logic, formulas are interpreted as having precisely one of two possible truth values, the truth value of "true" or the truth value of "false". The principle of bivalence and the law of excluded middle are upheld. Truth-functional propositional logic defined as such and systems isomorphic to it are considered to be zeroth-order logic. However, alternative propositional logics are possible. See Other logical calculi below.
Although propositional logic (which is interchangeable with propositional calculus) had been hinted at by earlier philosophers, it was developed into a formal logic (Stoic logic) by Chrysippus in the 3rd century BC and expanded by his successor Stoics. The logic was focused on propositions. This advancement was different from the traditional syllogistic logic, which was focused on terms. However, later in antiquity, the propositional logic developed by the Stoics was no longer understood. Consequently, the system was essentially reinvented by Peter Abelard in the 12th century.
Propositional logic was eventually refined using symbolic logic. The 17th/18th-century mathematician Gottfried Leibniz has been credited with being the founder of symbolic logic for his work with the calculus ratiocinator. Although his work was the first of its kind, it was unknown to the larger logical community. Consequently, many of the advances achieved by Leibniz were recreated by logicians like George Boole and Augustus De Morgan completely independent of Leibniz.
Just as propositional logic can be considered an advancement from the earlier syllogistic logic, Gottlob Frege's predicate logic was an advancement from the earlier propositional logic. One author describes predicate logic as combining "the distinctive features of syllogistic logic and propositional logic." Consequently, predicate logic ushered in a new era in logic's history; however, advances in propositional logic were still made after Frege, including natural deduction, truth trees and truth tables. Natural deduction was invented by Gerhard Gentzen and Jan Łukasiewicz. Truth trees were invented by Evert Willem Beth. The invention of truth tables, however, is of uncertain attribution.
Within works by Frege and Bertrand Russell are ideas influential to the invention of truth tables. The actual tabular structure (being formatted as a table), itself, is generally credited to either Ludwig Wittgenstein or Emil Post (or both, independently). Besides Frege and Russell, others credited with having ideas preceding truth-tables include Philo, Boole, Charles Sanders Peirce, and Ernst Schröder. Others credited with the tabular structure include Jan Łukasiewicz, Ernst Schröder, Alfred North Whitehead, William Stanley Jevons, John Venn, and Clarence Irving Lewis. Ultimately, some have concluded, like John Shosky, that "It is far from clear that any one person should be given the title of 'inventor' of truth-tables."
In general terms, a calculus is a formal system that consists of a set of syntactic expressions ("well-formed formulas"), a distinguished subset of these expressions (axioms), plus a set of formal rules that define a specific binary relation, intended to be interpreted as logical equivalence, on the space of expressions.
When the formal system is intended to be a logical system, the expressions are meant to be interpreted as statements, and the rules, known as "inference rules", are typically intended to be truth-preserving. In this setting, the rules (which may include axioms) can then be used to derive ("infer") formulas representing true statements from given formulas representing true statements.
The set of axioms may be empty, a nonempty finite set, or a countably infinite set (see axiom schema). A formal grammar recursively defines the expressions and well-formed formulas of the language. In addition a semantics may be given which defines truth and valuations (or interpretations).
The language of a propositional calculus consists of
A "well-formed formula" is any atomic formula, or any formula that can be built up from atomic formulas by means of operator symbols according to the rules of the grammar.
Mathematicians sometimes distinguish between propositional constants, propositional variables, and schemata. Propositional constants represent some particular proposition, while propositional variables range over the set of all atomic propositions. Schemata, however, range over all propositions. It is common to represent propositional constants by A, B, and C, propositional variables by P, Q, and R, and schematic letters are often Greek letters, most often φ, ψ, and χ.
The following outlines a standard propositional calculus. Many different formulations exist which are all more or less equivalent but differ in the details of:
Any given proposition may be represented with a letter called a 'propositional constant', analogous to representing a number by a letter in mathematics, for instance, . All propositions require exactly one of two truth-values: true or false. For example, let be the proposition that it is raining outside. This will be true () if it is raining outside and false otherwise ().
It is extremely helpful to look at the truth tables for these different operators, as well as the method of analytic tableaux.
Propositional logic is closed under truth-functional connectives. That is to say, for any proposition , is also a proposition. Likewise, for any propositions and , is a proposition, and similarly for disjunction, conditional, and biconditional. This implies that, for instance, is a proposition, and so it can be conjoined with another proposition. In order to represent this, we need to use parentheses to indicate which proposition is conjoined with which. For instance, is not a well-formed formula, because we do not know if we are conjoining with or if we are conjoining with . Thus we must write either to represent the former, or to represent the latter. By evaluating the truth conditions, we see that both expressions have the same truth conditions (will be true in the same cases), and moreover that any proposition formed by arbitrary conjunctions will have the same truth conditions, regardless of the location of the parentheses. This means that conjunction is associative; however, one should not assume that parentheses never serve a purpose. For instance, the sentence does not have the same truth conditions as , so the two are different sentences distinguished only by the parentheses. One can verify this by the truth-table method referenced above.
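The associativity claim, and the fact that parentheses still matter once connectives are mixed, are both easy to machine-check by sweeping all truth-value assignments; a small Python sketch:

```python
from itertools import product

# Conjunction is associative: regrouping never changes the truth value.
for p, q, r in product((False, True), repeat=3):
    assert ((p and q) and r) == (p and (q and r))

# But parentheses matter once connectives are mixed: (P ∧ Q) ∨ R and
# P ∧ (Q ∨ R) disagree on at least one assignment.
assert any(((p and q) or r) != (p and (q or r))
           for p, q, r in product((False, True), repeat=3))
```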
Note: For any arbitrary number of propositional constants, we can form a finite number of cases which list their possible truth-values. A simple way to generate this is by truth-tables, in which one writes , , ..., , for any list of propositional constants—that is to say, any list of propositional constants with entries. Below this list, one writes rows, and below one fills in the first half of the rows with true (or T) and the second half with false (or F). Below one fills in one-quarter of the rows with T, then one-quarter with F, then one-quarter with T and the last quarter with F. The next column alternates between true and false for each eighth of the rows, then sixteenths, and so on, until the last propositional constant varies between T and F for each row. This will give a complete listing of cases or truth-value assignments possible for those propositional constants.
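The half/quarter/eighth filling procedure above amounts to counting in binary; a sketch using Python's `itertools.product` (the function name is illustrative):

```python
from itertools import product

def truth_table_rows(n: int) -> list[tuple[bool, ...]]:
    """All 2**n truth-value assignments for n propositional constants,
    with the first constant varying slowest (true half listed first)."""
    return [tuple(not bit for bit in row)  # map 0 -> True, 1 -> False
            for row in product((0, 1), repeat=n)]

rows = truth_table_rows(2)
assert len(rows) == 4
assert rows == [(True, True), (True, False), (False, True), (False, False)]
```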
The propositional calculus then defines an "argument" to be a list of propositions. A valid argument is a list of propositions, the last of which follows from—or is implied by—the rest. All other arguments are invalid. The simplest valid argument is modus ponens, one instance of which is the following list of propositions:
This is a list of three propositions, each line is a proposition, and the last follows from the rest. The first two lines are called premises, and the last line the conclusion. We say that any proposition follows from any set of propositions formula_6, if must be true whenever every member of the set formula_6 is true. In the argument above, for any and , whenever and are true, necessarily is true. Notice that, when is true, we cannot consider cases 3 and 4 (from the truth table). When is true, we cannot consider case 2. This leaves only case 1, in which is also true. Thus is implied by the premises.
This generalizes schematically. Thus, where and may be any propositions at all,
Other argument forms are convenient, but not necessary. Given a complete set of axioms (see below for one such set), modus ponens is sufficient to prove all other argument forms in propositional logic, so they may be considered derivative. Note that this is not true of the extension of propositional logic to other logics like first-order logic. First-order logic requires at least one additional rule of inference in order to obtain completeness.
The significance of argument in formal logic is that one may obtain new truths from established truths. In the first example above, given the two premises, the truth of is not yet known or stated. After the argument is made, is deduced. In this way, we define a deduction system to be a set of all propositions that may be deduced from another set of propositions. For instance, given the set of propositions formula_9, we can define a deduction system, , which is the set of all propositions which follow from . Reiteration is always assumed, so formula_10. Also, from the first element of , last element, as well as modus ponens, is a consequence, and so formula_11. Because we have not included sufficiently complete axioms, though, nothing else may be deduced. Thus, even though most deduction systems studied in propositional logic are able to deduce formula_12, this one is too weak to prove such a proposition.
A propositional calculus is a formal system formula_13, where:
The "language" of formula_15, also known as its set of "formulas", "well-formed formulas", is inductively defined by the following rules:
Repeated applications of these rules permits the construction of complex formulas. For example:
Let formula_37, where formula_14, formula_39, formula_25, formula_26 are defined as follows:
Let formula_58, where formula_14, formula_39, formula_25, formula_26 are defined as follows:
In the following example of a propositional calculus, the transformation rules are intended to be interpreted as the inference rules of a so-called "natural deduction system". The particular system presented here has no initial points, which means that its interpretation for logical applications derives its theorems from an empty axiom set.
Our propositional calculus has eleven inference rules. These rules allow us to derive other true formulas given a set of formulas that are assumed to be true. The first ten simply state that we can infer certain well-formed formulas from other well-formed formulas. The last rule, however, uses hypothetical reasoning, in the sense that in the premise of the rule we temporarily assume an (unproven) hypothesis to be part of the set of inferred formulas to see if we can infer a certain other formula. Since the first ten rules don't do this, they are usually described as "non-hypothetical" rules, and the last one as a "hypothetical" rule.
In describing the transformation rules, we may introduce a metalanguage symbol formula_70. It is basically a convenient shorthand for saying "infer that". The format is formula_71, in which Γ is a (possibly empty) set of formulas called premises, and ψ is a formula called the conclusion. The transformation rule formula_71 means that if every proposition in Γ is a theorem (or has the same truth value as the axioms), then ψ is also a theorem. Note that, by the rule Conjunction introduction below, whenever Γ has more than one formula, we can always safely reduce it to one formula using conjunction. So, for short, from that point on we may represent Γ as one formula instead of a set. Another omission for convenience is when Γ is an empty set, in which case Γ may not appear.
One of the main uses of a propositional calculus, when interpreted for logical applications, is to determine relations of logical equivalence between propositional formulas. These relationships are determined by means of the available transformation rules, sequences of which are called "derivations" or "proofs".
In the discussion to follow, a proof is presented as a sequence of numbered lines, with each line consisting of a single formula followed by a "reason" or "justification" for introducing that formula. Each premise of the argument, that is, an assumption introduced as a hypothesis of the argument, is listed at the beginning of the sequence and is marked as a "premise" in lieu of other justification. The conclusion is listed on the last line. A proof is complete if every line follows from the previous ones by the correct application of a transformation rule. (For a contrasting approach, see proof-trees).
Interpret formula_110 as "Assuming φ, infer ψ". Read formula_111 as "Assuming nothing, infer that φ implies ψ", or "It is a tautology that φ implies ψ", or "It is always true that φ implies ψ".
The crucial properties of this set of rules are that they are "sound" and "complete". Informally this means that the rules are correct and that no other rules are required. These claims can be made more formal as follows.
We define a "truth assignment" as a function that maps propositional variables to true or false. Informally such a truth assignment can be understood as the description of a possible state of affairs (or possible world) where certain statements are true and others are not. The semantics of formulas can then be formalized by defining for which "state of affairs" they are considered to be true, which is what is done by the following definition.
We define when such a truth assignment satisfies a certain well-formed formula with the following rules:
With this definition we can now formalize what it means for a formula to be implied by a certain set of formulas. Informally this is true if in all worlds that are possible given the set of formulas the formula also holds. This leads to the following formal definition: We say that a set of well-formed formulas G "semantically entails" (or "implies") a certain well-formed formula φ if all truth assignments that satisfy all the formulas in G also satisfy φ.
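Semantic entailment as just defined can be checked by brute force over all truth assignments; the following Python sketch is illustrative (the formula encoding and helper names are assumptions, not notation from the text):

```python
from itertools import product

# A formula is represented as a Python function from a truth assignment
# (a dict mapping variable names to booleans) to a boolean.
def var(name):
    return lambda v: v[name]

def impl(a, b):
    return lambda v: (not a(v)) or b(v)

def entails(premises, conclusion, variables):
    """True iff every truth assignment satisfying all premises
    also satisfies the conclusion."""
    for values in product([True, False], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

p, q = var("p"), var("q")
print(entails([p, impl(p, q)], q, ["p", "q"]))  # True  (modus ponens is valid)
print(entails([impl(p, q)], p, ["p", "q"]))     # False (affirming the consequent is not)
```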
Finally we define "syntactical entailment" such that φ is syntactically entailed by G if and only if we can derive it with the inference rules that were presented above in a finite number of steps. This allows us to formulate exactly what it means for the set of inference rules to be sound and complete:
Soundness: If the set of well-formed formulas G "syntactically" entails the well-formed formula φ, then G "semantically" entails φ.
Completeness: If the set of well-formed formulas G "semantically" entails the well-formed formula φ, then G "syntactically" entails φ.
For the above set of rules this is indeed the case.
Notational conventions: Let G be a variable ranging over sets of sentences. Let A, B, and C range over sentences. For "G syntactically entails A" we write "G proves A". For "G semantically entails A" we write "G implies A".
We want to show: for all A, if G proves A, then G implies A.
We note that "G proves A" has an inductive definition, and that gives us the immediate resources for demonstrating claims of the form "If G proves A, then ...". So our proof proceeds by induction.
Notice that Basis Step II can be omitted for natural deduction systems because they have no axioms. When used, Step II involves showing that each of the axioms is a (semantic) logical truth.
The Basis steps demonstrate that the simplest provable sentences from G are also implied by G, for any G. (The proof is simple, since the semantic fact that a set implies any of its members is also trivial.) The Inductive step will systematically cover all the further sentences that might be provable—by considering each case where we might reach a logical conclusion using an inference rule—and shows that if a new sentence is provable, it is also logically implied. (For example, we might have a rule telling us that from "A" we can derive "A or B". In III.a we assume that if A is provable it is implied. We also know that if A is provable then "A or B" is provable. We have to show that then "A or B" too is implied. We do so by appeal to the semantic definition and the assumption we just made. A is provable from G, we assume. So it is also implied by G. So any semantic valuation making all of G true makes A true. But any valuation making A true makes "A or B" true, by the defined semantics for "or". So any valuation which makes all of G true makes "A or B" true. So "A or B" is implied.) Generally, the Inductive step will consist of a lengthy but simple case-by-case analysis of all the rules of inference, showing that each "preserves" semantic implication.
By the definition of provability, there are no sentences provable other than by being a member of G, an axiom, or following by a rule; so if all of those are semantically implied, the deduction calculus is sound.
We adopt the same notational conventions as above.
We want to show: If G implies A, then G proves A. We proceed by contraposition: we show instead that if G does not prove A, then G does not imply A. If we show that there is a model where A does not hold despite G being true, then obviously G does not imply A. The idea is to build such a model out of our very assumption that G does not prove A.
QED
If a formula is a tautology, then there is a truth table for it which shows that each valuation yields the value true for the formula. Consider such a valuation. By mathematical induction on the length of the subformulas, show that the truth or falsity of the subformula follows from the truth or falsity (as appropriate for the valuation) of each propositional variable in the subformula. Then combine the lines of the truth table together two at a time by using "(P is true implies S) implies ((P is false implies S) implies S)". Keep repeating this until all dependencies on propositional variables have been eliminated. The result is that we have proved the given tautology. Since every tautology is provable, the logic is complete.
An interpretation of a truth-functional propositional calculus formula_112 is an assignment to each propositional symbol of formula_112 of one or the other (but not both) of the truth values truth (T) and falsity (F), and an assignment to the connective symbols of formula_112 of their usual truth-functional meanings. An interpretation of a truth-functional propositional calculus may also be expressed in terms of truth tables.
For formula_115 distinct propositional symbols there are formula_116 distinct possible interpretations. For any particular symbol formula_117, for example, there are formula_118 possible interpretations:
For the pair formula_117, formula_122 there are formula_123 possible interpretations:
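The count of formula_116 interpretations for a given stock of propositional symbols can be generated directly; the following Python sketch is illustrative:

```python
from itertools import product

def interpretations(symbols):
    """Enumerate every truth-value assignment for the given propositional symbols."""
    return [dict(zip(symbols, values))
            for values in product([True, False], repeat=len(symbols))]

# One symbol has 2^1 = 2 interpretations; a pair has 2^2 = 4; and so on.
print(len(interpretations(["P"])))            # 2
print(len(interpretations(["P", "Q"])))       # 4
print(len(interpretations(["P", "Q", "R"])))  # 8
```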
Since formula_112 has formula_129, that is, denumerably many propositional symbols, there are formula_130, and therefore uncountably many distinct possible interpretations of formula_112.
If A and B are formulas of formula_112 and formula_133 is an interpretation of formula_112 then the following definitions apply:
Some consequences of these definitions:
It is possible to define another version of propositional calculus, which defines most of the syntax of the logical operators by means of axioms, and which uses only one inference rule.
Let , , and stand for well-formed formulas. (The well-formed formulas themselves would not contain any Greek letters, but only capital Roman letters, connective operators, and parentheses.) Then the axioms are as follows:
The inference rule is modus ponens:
Let a demonstration be represented by a sequence, with hypotheses to the left of the turnstile and the conclusion to the right of the turnstile. Then the deduction theorem can be stated as follows:
This deduction theorem (DT) is not itself formulated with propositional calculus: it is not a theorem of propositional calculus, but a theorem about propositional calculus. In this sense, it is a meta-theorem, comparable to theorems about the soundness or completeness of propositional calculus.
On the other hand, DT is so useful for simplifying the syntactical proof process that it can be considered and used as another inference rule, accompanying modus ponens. In this sense, DT corresponds to the natural conditional proof inference rule which is part of the first version of propositional calculus introduced in this article.
The converse of DT is also valid:
in fact, the validity of the converse of DT is almost trivial compared to that of DT:
The converse of DT has powerful implications: it can be used to convert an axiom into an inference rule. For example, the axiom AND-1,
can be transformed by means of the converse of the deduction theorem into the inference rule
which is conjunction elimination, one of the ten inference rules used in the first version (in this article) of the propositional calculus.
The following is an example of a (syntactical) demonstration, involving only axioms and :
Prove: formula_167 (Reflexivity of implication).
Proof:
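A standard five-step derivation uses two instances of the axiom schema φ → (ψ → φ), one instance of (φ → (ψ → χ)) → ((φ → ψ) → (φ → χ)), and two applications of modus ponens. The Python sketch below replays it mechanically; the tuple encoding of formulas is an illustrative assumption, not notation from the text:

```python
# Formulas are nested tuples: ("->", X, Y) stands for X -> Y; strings are atoms.
def imp(x, y):
    return ("->", x, y)

def modus_ponens(premise, conditional):
    """From X and X -> Y, return Y (fails if the shapes don't match)."""
    op, antecedent, consequent = conditional
    assert op == "->" and antecedent == premise
    return consequent

A = "A"
# Axiom instances of the two schemas described above:
step1 = imp(imp(A, imp(imp(A, A), A)),
            imp(imp(A, imp(A, A)), imp(A, A)))   # schema 2 instance
step2 = imp(A, imp(imp(A, A), A))                # schema 1 instance
step3 = modus_ponens(step2, step1)               # (A -> (A -> A)) -> (A -> A)
step4 = imp(A, imp(A, A))                        # schema 1 instance
step5 = modus_ponens(step4, step3)               # A -> A
print(step5)  # ('->', 'A', 'A')
```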
The preceding alternative calculus is an example of a Hilbert-style deduction system. In the case of propositional systems the axioms are terms built with logical connectives and the only inference rule is modus ponens. Equational logic as standardly used informally in high school algebra is a different kind of calculus from Hilbert systems. Its theorems are equations and its inference rules express the properties of equality, namely that it is a congruence on terms that admits substitution.
Classical propositional calculus as described above is equivalent to Boolean algebra, while intuitionistic propositional calculus is equivalent to Heyting algebra. The equivalence is shown by translation in each direction of the theorems of the respective systems. Theorems formula_176 of classical or intuitionistic propositional calculus are translated as equations formula_177 of Boolean or Heyting algebra respectively. Conversely theorems formula_178 of Boolean or Heyting algebra are translated as theorems formula_179 of classical or intuitionistic calculus respectively, for which formula_180 is a standard abbreviation. In the case of Boolean algebra formula_178 can also be translated as formula_182, but this translation is incorrect intuitionistically.
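The translation can be spot-checked over the two-element Boolean algebra {0, 1}: a classical theorem should translate to a term that equals 1 under every assignment. The following Python sketch is illustrative; Peirce's law is included as a classic example that is valid in Boolean algebra but fails in Heyting algebra:

```python
from itertools import product

def imp(x, y):
    # Boolean-algebra reading of implication: x -> y is (NOT x) OR y over {0, 1}.
    return max(1 - x, y)

def is_identity_one(term, n_vars):
    """Check that a Boolean term evaluates to 1 under every 0/1 assignment,
    i.e. that the equation 'term = 1' holds in the two-element Boolean algebra."""
    return all(term(*vals) == 1 for vals in product([0, 1], repeat=n_vars))

# The theorem P -> (Q -> P) translates to a Boolean identity:
print(is_identity_one(lambda p, q: imp(p, imp(q, p)), 2))          # True
# Peirce's law ((P -> Q) -> P) -> P also holds classically:
print(is_identity_one(lambda p, q: imp(imp(imp(p, q), p), p), 2))  # True
```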
In both Boolean and Heyting algebra, inequality formula_183 can be used in place of equality. The equality formula_178 is expressible as a pair of inequalities formula_183 and formula_186. Conversely the inequality formula_183 is expressible as the equality formula_188, or as formula_189. The significance of inequality for Hilbert-style systems is that it corresponds to the latter's deduction or entailment symbol formula_70. An entailment
is translated in the inequality version of the algebraic framework as
Conversely the algebraic inequality formula_183 is translated as the entailment
The difference between implication formula_195 and inequality or entailment formula_183 or formula_194 is that the former is internal to the logic while the latter is external. Internal implication between two terms is another term of the same kind. Entailment as external implication between two terms expresses a metatruth outside the language of the logic, and is considered part of the metalanguage. Even when the logic under study is intuitionistic, entailment is ordinarily understood classically as two-valued: either the left side entails, or is less-or-equal to, the right side, or it is not.
Similar but more complex translations to and from algebraic logics are possible for natural deduction systems as described above and for the sequent calculus. The entailments of the latter can be interpreted as two-valued, but a more insightful interpretation is as a set, the elements of which can be understood as abstract proofs organized as the morphisms of a category. In this interpretation the cut rule of the sequent calculus corresponds to composition in the category. Boolean and Heyting algebras enter this picture as special categories having at most one morphism per homset, i.e., one proof per entailment, corresponding to the idea that existence of proofs is all that matters: any proof will do and there is no point in distinguishing them.
It is possible to generalize the definition of a formal language from a set of finite sequences over a finite basis to include many other sets of mathematical structures, so long as they are built up by finitary means from finite materials. What's more, many of these families of formal structures are especially well-suited for use in logic.
For example, there are many families of graphs that are close enough analogues of formal languages that the concept of a calculus is quite easily and naturally extended to them. Indeed, many species of graphs arise as "parse graphs" in the syntactic analysis of the corresponding families of text structures. The exigencies of practical computation on formal languages frequently demand that text strings be converted into pointer structure renditions of parse graphs, simply as a matter of checking whether strings are well-formed formulas or not. Once this is done, there are many advantages to be gained from developing the graphical analogue of the calculus on strings. The mapping from strings to parse graphs is called "parsing" and the inverse mapping from parse graphs to strings is achieved by an operation that is called "traversing" the graph.
Propositional calculus is about the simplest kind of logical calculus in current use. It can be extended in several ways. (Aristotelian "syllogistic" calculus, which is largely supplanted in modern logic, is in "some" ways simpler – but in other ways more complex – than propositional calculus.) The most immediate way to develop a more complex logical calculus is to introduce rules that are sensitive to more fine-grained details of the sentences being used.
First-order logic (a.k.a. first-order predicate logic) results when the "atomic sentences" of propositional logic are broken up into terms, variables, predicates, and quantifiers, all keeping the rules of propositional logic with some new ones introduced. (For example, from "All dogs are mammals" we may infer "If Rover is a dog then Rover is a mammal".) With the tools of first-order logic it is possible to formulate a number of theories, either with explicit axioms or by rules of inference, that can themselves be treated as logical calculi. Arithmetic is the best known of these; others include set theory and mereology. Second-order logic and other higher-order logics are formal extensions of first-order logic. Thus, it makes sense to refer to propositional logic as ""zeroth-order logic"", when comparing it with these logics.
Modal logic also offers a variety of inferences that cannot be captured in propositional calculus. For example, from "Necessarily p" we may infer that p. From p we may infer "It is possible that p". The translation between modal logics and algebraic logics concerns classical and intuitionistic logics but with the introduction of a unary operator on Boolean or Heyting algebras, different from the Boolean operations, interpreting the possibility modality, and in the case of Heyting algebra a second operator interpreting necessity (for Boolean algebra this is redundant since necessity is the De Morgan dual of possibility). The first operator preserves 0 and disjunction while the second preserves 1 and conjunction.
Many-valued logics are those allowing sentences to have values other than "true" and "false". (For example, "neither" and "both" are standard "extra values"; "continuum logic" allows each sentence to have any of an infinite number of "degrees of truth" between "true" and "false".) These logics often require calculational devices quite distinct from propositional calculus. When the values form a Boolean algebra (which may have more than two or even infinitely many values), many-valued logic reduces to classical logic; many-valued logics are therefore only of independent interest when the values form an algebra that is not Boolean.
Finding solutions to propositional logic formulas is an NP-complete problem. However, practical methods exist (e.g., DPLL algorithm, 1962; Chaff algorithm, 2001) that are very fast for many useful cases. Recent work has extended the SAT solver algorithms to work with propositions containing arithmetic expressions; these are the SMT solvers.
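A brute-force satisfiability check makes the exponential search space concrete; real solvers such as DPLL add unit propagation and backtracking to prune it. The clause encoding below (DIMACS-style signed integers) is an illustrative sketch:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Satisfiability by exhaustive search over 2^n assignments.
    Each clause is a list of nonzero ints: k means variable k, -k its negation."""
    for values in product([False, True], repeat=n_vars):
        def holds(lit):
            return values[abs(lit) - 1] if lit > 0 else not values[abs(lit) - 1]
        if all(any(holds(lit) for lit in clause) for clause in clauses):
            return values   # a satisfying assignment
    return None             # unsatisfiable

# (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x3)
print(brute_force_sat([[1, 2], [-1, 2], [-2, 3]], 3))  # (False, True, True)
print(brute_force_sat([[1], [-1]], 1))                 # None
```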
Lazy evaluation
In programming language theory, lazy evaluation, or call-by-need, is an evaluation strategy which delays the evaluation of an expression until its value is needed (non-strict evaluation) and which also avoids repeated evaluations (sharing). The sharing can reduce the running time of certain functions by an exponential factor over other non-strict evaluation strategies, such as call-by-name.
However, for lengthy operations, it would be more appropriate to perform them before any time-sensitive operations, such as handling user inputs in a video game.
The benefits of lazy evaluation include:
Lazy evaluation is often combined with memoization, as described in Jon Bentley's "Writing Efficient Programs". After a function's value is computed for that parameter or set of parameters, the result is stored in a lookup table that is indexed by the values of those parameters; the next time the function is called, the table is consulted to determine whether the result for that combination of parameter values is already available. If so, the stored result is simply returned. If not, the function is evaluated and another entry is added to the lookup table for reuse.
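The lookup-table scheme just described can be sketched in a few lines of Python; the decorator and names are illustrative, not taken from Bentley's text:

```python
def memoize(fn):
    table = {}                         # results indexed by parameter values
    def wrapper(*args):
        if args not in table:          # consult the table first
            table[args] = fn(*args)    # evaluate once, store for reuse
        return table[args]
    return wrapper

calls = []

@memoize
def slow_square(n):
    calls.append(n)   # record real evaluations to demonstrate sharing
    return n * n

print(slow_square(4), slow_square(4))  # 16 16
print(calls)                           # [4] -- the second call hit the table
```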
Lazy evaluation can lead to reduction in memory footprint, since values are created when needed. However, lazy evaluation is difficult to combine with imperative features such as exception handling and input/output, because the order of operations becomes indeterminate. Lazy evaluation can introduce memory leaks.
The opposite of lazy evaluation is eager evaluation, sometimes known as strict evaluation. Eager evaluation is the evaluation strategy employed in most programming languages.
Lazy evaluation was introduced for lambda calculus by Christopher Wadsworth and employed by the Plessey System 250 as a critical part of a Lambda-Calculus Meta-Machine, reducing the resolution overhead for access to objects in a capability-limited address space. For programming languages, it was independently introduced by Peter Henderson and James H. Morris and by Daniel P. Friedman and David S. Wise.
Delayed evaluation is used particularly in functional programming languages. When using delayed evaluation, an expression is not evaluated as soon as it gets bound to a variable, but only when the evaluator is forced to produce the expression's value. That is, a statement such as codice_1 (i.e. the assignment of the result of an expression to a variable) clearly calls for the expression to be evaluated and the result placed in codice_2, but what is actually in codice_2 is irrelevant until there is a need for its value via a reference to codice_2 in some later expression, whose evaluation could itself be deferred, though eventually the rapidly growing tree of dependencies would be pruned to produce some symbol rather than another for the outside world to see.
Delayed evaluation has the advantage of being able to create calculable infinite lists without infinite loops or size limitations interfering with computation. For example, one could create a function that creates an infinite list (often called a "stream") of Fibonacci numbers. The calculation of the "n"-th Fibonacci number would be merely the extraction of that element from the infinite list, forcing the evaluation of only the first n members of the list.
For example, in the Haskell programming language, the list of all Fibonacci numbers can be written as:
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
In Haskell syntax, "codice_5" prepends an element to a list, codice_6 returns a list without its first element, and codice_7 uses a specified function (in this case addition) to combine corresponding elements of two lists to produce a third.
Provided the programmer is careful, only the values that are required to produce a particular result are evaluated. However, certain calculations may result in the program attempting to evaluate an infinite number of elements; for example, requesting the length of the list or trying to sum the elements of the list with a fold operation would result in the program either failing to terminate or running out of memory.
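The same infinite-stream idea can be approximated in a strict language with generators, which compute elements only on demand; this Python sketch is illustrative:

```python
from itertools import islice

def fibs():
    """An 'infinite list' of Fibonacci numbers, produced one element at a
    time; nothing past the demanded prefix is ever computed."""
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

# Force only the first eight elements of the infinite stream:
print(list(islice(fibs(), 8)))  # [0, 1, 1, 2, 3, 5, 8, 13]
```

As in the Haskell case, asking for the whole stream (e.g. `sum(fibs())`) would fail to terminate; only finite prefixes are safe to force.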
In almost all common "eager" languages, "if" statements evaluate in a lazy fashion:
if a then b else c
evaluates (a), then if and only if (a) evaluates to true does it evaluate (b); otherwise it evaluates (c). That is, either (b) or (c) will not be evaluated. Conversely, in an eager language the expected behavior is that
will still evaluate (e) when computing the value of f(d, e) even though (e) is unused in function f. However, user-defined control structures depend on exact syntax, so for example
(i) and (j) would both be evaluated in an eager language. While in a lazy language,
(i) or (j) would be evaluated, but never both.
Lazy evaluation allows control structures to be defined normally, and not as primitives or compile-time techniques. If (i) or (j) have side effects or introduce run time errors, the subtle differences between (l) and (l') can be complex. It is usually possible to introduce user-defined lazy control structures in eager languages as functions, though they may depart from the language's syntax for eager evaluation: Often the involved code bodies (like (i) and (j)) need to be wrapped in a function value, so that they are executed only when called.
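Wrapping code bodies in function values, as described above, can be sketched in Python; all names here are illustrative:

```python
# User-defined "if" in an eager language: wrap each branch in a zero-argument
# function (a thunk) so that only the chosen branch ever runs.
def lazy_if(condition, then_thunk, else_thunk):
    return then_thunk() if condition else else_thunk()

log = []

def branch(label, value):
    def thunk():
        log.append(label)   # side effect reveals which branch was evaluated
        return value
    return thunk

result = lazy_if(2 > 1, branch("then", "yes"), branch("else", "no"))
print(result)  # yes
print(log)     # ['then'] -- the else branch was never evaluated
```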
Short-circuit evaluation of Boolean control structures is sometimes called "lazy".
Many languages offer the notion of "infinite data-structures". These allow definitions of data to be given in terms of infinite ranges, or unending recursion, but the actual values are only computed when needed. Take for example this trivial program in Haskell:
numberFromInfiniteList :: Int -> Int
numberFromInfiniteList n = infinity !! n - 1
main = print $ numberFromInfiniteList 4
In the function numberFromInfiniteList, the value of infinity is an infinite range, but until an actual value (or more specifically, a specific value at a certain index) is needed, the list is not evaluated, and even then it is only evaluated as needed (that is, up to the desired index).
A compound expression might be in the form "EasilyComputed or LotsOfWork" so that if the easy part gives true a lot of work could be avoided. For instance, suppose a large number N is to be checked to determine if it is a prime number and a function IsPrime(N) is available, but alas, it can require a lot of computation to evaluate. Perhaps "N=2 or [Mod(N,2)≠0 and IsPrime(N)]" will help if there are to be many evaluations with arbitrary values for N.
A compound expression might be in the form "SafeToTry and Expression" whereby if "SafeToTry" is false there should be no attempt at evaluating the "Expression" lest a run-time error be signalled, such as divide-by-zero or index-out-of-bounds, etc. For instance, the following pseudocode locates the last non-zero element of an array:
Should all elements of the array be zero, the loop will work down to L = 0, and in this case the loop must be terminated without attempting to reference element zero of the array, which does not exist.
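The guard-then-try pattern described above might look like the following in Python; the loop is an illustrative reconstruction, not the original pseudocode:

```python
def last_nonzero_index(a):
    """Return the index of the last non-zero element of a, or -1 if none.
    The short-circuit 'and' guarantees a[l] is never read once l < 0."""
    l = len(a) - 1
    while l >= 0 and a[l] == 0:   # "SafeToTry and Expression"
        l -= 1
    return l

print(last_nonzero_index([5, 0, 3, 0, 0]))  # 2
print(last_nonzero_index([0, 0, 0]))        # -1 (loop stops before indexing a[-1])
```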
In computer windowing systems, the painting of information to the screen is driven by "expose events" which drive the display code at the last possible moment. By doing this, windowing systems avoid computing unnecessary display content updates.
Another example of laziness in modern computer systems is copy-on-write page allocation or demand paging, where memory is allocated only when a value stored in that memory is changed.
Laziness can be useful for high performance scenarios. An example is the Unix mmap function, which provides "demand driven" loading of pages from disk, so that only those pages actually touched are loaded into memory, and unneeded memory is not allocated.
MATLAB implements "copy on edit", where arrays which are copied have their actual memory storage replicated only when their content is changed, possibly leading to an "out of memory" error when updating an element afterwards instead of during the copy operation.
Some programming languages delay evaluation of expressions by default, and some others provide functions or special syntax to delay evaluation. In Miranda and Haskell, evaluation of function arguments is delayed by default. In many other languages, evaluation can be delayed by explicitly suspending the computation using special syntax (as with Scheme's "codice_8" and "codice_9" and OCaml's "codice_10" and "codice_11") or, more generally, by wrapping the expression in a thunk. The object representing such an explicitly delayed evaluation is called a "lazy future." Raku uses lazy evaluation of lists, so one can assign infinite lists to variables and use them as arguments to functions, but unlike Haskell and Miranda, Raku does not use lazy evaluation of arithmetic operators and functions by default.
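An explicitly delayed, memoized computation in the style of Scheme's delay/force can be sketched in Python; the class name and API are illustrative assumptions:

```python
class Lazy:
    """A call-by-need thunk: the wrapped expression runs at most once,
    on first force, and the result is shared thereafter."""
    _UNSET = object()

    def __init__(self, thunk):
        self._thunk = thunk
        self._value = Lazy._UNSET

    def force(self):
        if self._value is Lazy._UNSET:
            self._value = self._thunk()   # evaluate once, then share
        return self._value

evaluations = []
delayed = Lazy(lambda: evaluations.append("ran") or 21 * 2)
print(evaluations)                       # [] -- nothing evaluated yet
print(delayed.force(), delayed.force())  # 42 42
print(evaluations)                       # ['ran'] -- evaluated exactly once
```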
In lazy programming languages such as Haskell, although the default is to evaluate expressions only when they are demanded, it is possible in some cases to make code more eager—or conversely, to make it more lazy again after it has been made more eager. This can be done by explicitly coding something which forces evaluation (which may make the code more eager) or avoiding such code (which may make the code more lazy). "Strict" evaluation usually implies eagerness, but they are technically different concepts.
However, there is an optimisation implemented in some compilers called strictness analysis, which, in some cases, allows the compiler to infer that a value will always be used. In such cases, this may render the programmer's choice of whether or not to force that particular value irrelevant, because strictness analysis will force strict evaluation.
In Haskell, marking constructor fields strict means that their values will always be demanded immediately. The codice_12 function can also be used to demand a value immediately and then pass it on, which is useful if a constructor field should generally be lazy. However, neither of these techniques implements "recursive" strictness—for that, a function called codice_13 was invented.
Also, pattern matching in Haskell 98 is strict by default, so the codice_14 qualifier has to be used to make it lazy.
In Python 2.x the codice_15 function computes a list of integers. The entire list is stored in memory when the first assignment statement is evaluated, so this is an example of eager or immediate evaluation:
>>> r = range(10)
>>> print r
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> print r[3]
3
In Python 3.x the codice_15 function returns a special range object which computes elements of the list on demand. Elements of the range object are only generated when they are needed (e.g., when codice_17 is evaluated in the following example), so this is an example of lazy or deferred evaluation:
>>> r = range(10)
>>> print(r)
range(0, 10)
>>> print(r[3])
3
In Python 2.x it is possible to use a function called codice_18, which returns an object that generates the numbers in the range on demand. The advantage of codice_19 is that the generated object will always take the same amount of memory.
>>> r = xrange(10)
>>> print(r)
xrange(10)
>>> lst = [x for x in r]
>>> print(lst)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
From version 2.2 forward, Python manifests lazy evaluation by implementing iterators (lazy sequences) unlike tuple or list sequences. For instance (Python 2):
>>> numbers = range(10)
>>> iterator = iter(numbers)
>>> print numbers
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> print iterator
>>> print iterator.next()
0
In the .NET Framework it is possible to do lazy evaluation using the class System.Lazy. The class can be easily exploited in F# using the lazy keyword, while the force method will force the evaluation. There are also specialized collections like Microsoft.FSharp.Collections.Seq that provide built-in support for lazy evaluation.
let fibonacci = Seq.unfold (fun (x, y) -> Some(x, (y, x + y))) (0I,1I)
fibonacci |> Seq.nth 1000
In C# and VB.NET, the class System.Lazy is directly used.
public int Sum()
Or with a more practical example:
// recursive calculation of the n'th fibonacci number
public int Fib(int n)
public void Main()
Another way is to use the yield keyword:
// eager evaluation
public IEnumerable Fibonacci(int x)
// lazy evaluation
public IEnumerable LazyFibonacci(int x)
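The eager/lazy contrast the two C# methods are meant to show can also be sketched with a Python list-builder versus a generator; the function names are illustrative:

```python
def eager_fibonacci(x):
    """Builds the entire list before returning anything (eager)."""
    result, a, b = [], 0, 1
    for _ in range(x):
        result.append(a)
        a, b = b, a + b
    return result

def lazy_fibonacci(x):
    """Yields one number at a time; later elements are never computed
    unless a consumer actually asks for them (lazy)."""
    a, b = 0, 1
    for _ in range(x):
        yield a
        a, b = b, a + b

print(eager_fibonacci(6))      # [0, 1, 1, 2, 3, 5]
gen = lazy_fibonacci(10**9)    # returns instantly; nothing computed yet
print(next(gen), next(gen), next(gen))  # 0 1 1
```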
Lemuridae
Lemuridae is a family of strepsirrhine primates native to Madagascar and the Comoros Islands, part of the lemuriform radiation that gives Madagascar one of the highest concentrations of lemurs anywhere. It is one of five families commonly known as lemurs. These animals were once thought to be the evolutionary predecessors of monkeys and apes, but this is no longer considered correct.
Lemurids are medium-sized arboreal primates, ranging from 32 to 56 cm in length, excluding the tail, and weighing from 0.7 to 5 kg. They have long, bushy tails and soft, woolly fur of varying coloration. The hindlegs are slightly longer than the forelegs, although not enough to hamper fully quadrupedal movement (unlike the sportive lemurs). Most species are highly agile, and regularly leap several metres between trees. They have a good sense of smell and binocular vision. Unlike most other lemurs, all but one species of lemurid (the ring-tailed lemur) lack a tapetum lucidum, a reflective layer in the eye that improves night vision. Historically, mammalian activity cycles were classified as either strictly diurnal or strictly nocturnal; however, activity patterns vary widely across species, and lemur activity has in general evolved from nocturnal to diurnal. Some lemurs are also cathemeral, an activity pattern in which an animal is neither strictly diurnal nor nocturnal.
Lemurids are herbivorous, eating fruit, leaves, and, in some cases, nectar. For the most part, they have the dental formula 2.1.3.3/2.1.3.3. Their diets are broad rather than restricted, variously including fruit (frugivory), seeds (granivory), leaves (folivory), insects (insectivory), gums (gummivory), and mixed foods (omnivory). Subfossil records from the Holocene have contributed to knowledge of the currently extant lemurs by showing changes in dental records in habitats near human activity, demonstrating that species such as the ring-tailed lemur ("Lemur catta") and the common brown lemur were forced to switch their primary diets to secondary food sources.
With most lemurids, the mother gives birth to one or two young after a gestation period of between 120 and 140 days, depending on species. The ruffed lemur species are the only lemurids that have true litters, consisting of anywhere from two to six offspring. They are generally sociable animals, living in groups of up to thirty individuals in some species. In some cases, such as the ring-tailed lemur, the groups are long-lasting, with distinct dominance hierarchies, while in others, such as the common brown lemur, the membership of the groups varies from day to day, and seems to have no clear social structure.
Characteristic lemur traits include a low basal metabolic rate, highly seasonal breeding, adaptations to an unpredictable climate, and female dominance. In female-dominant lemur species, the sexes are monomorphic and females have priority access to food. Lemurs live in groups of 11 to 17 animals, in which females tend to stay within their natal groups while males migrate. Males compete to win mates, which can destabilize group membership. Lemurs mark their territories with scent.
A number of lemur species are considered threatened; two species are critically endangered, one species is endangered, and five species are rated as vulnerable.
The highly seasonal dry deciduous forest of Madagascar alternates between dry and wet seasons, making it uniquely suitable for lemurs. Lemur species diversity increases with the number of tree species in an area, and is also higher in disturbed forests than in undisturbed ones. Subfossil evidence shows that many of the now-extinct lemurs lived in much drier climates than the currently extant species.
The family Lemuridae contains 21 extant species in five genera.
This family was once broken into two subfamilies, Hapalemurinae (bamboo lemurs and the greater bamboo lemur) and Lemurinae (the rest of the family), but molecular evidence and the similarity of the scent glands have since placed the ring-tailed lemur with the bamboo lemurs and the greater bamboo lemur.
Lemur species in the genus "Eulemur" are known to interbreed, despite having dramatically different chromosome numbers. Red-fronted (2N=60) and collared (2N=50–52) brown lemurs were found to hybridize at Berenty Reserve, Madagascar. | https://en.wikipedia.org/wiki?curid=18156 |
History of Egypt
The history of Egypt has been long and rich, owing to the flow of the Nile River, with its fertile banks and delta, as well as to the accomplishments of Egypt's native inhabitants and outside influence. Much of Egypt's ancient history was a mystery until Egyptian hieroglyphs were deciphered with the discovery and help of the Rosetta Stone. The Great Pyramid of Giza is among the Seven Wonders of the Ancient World. The Library of Alexandria was the only one of its kind for centuries.
One of the earliest human structures in the world was found in Egypt, dating to about 100,000 BC. Ancient Egyptian civilization coalesced around 3150 BC with the political unification of Upper and Lower Egypt under the first pharaoh of the First Dynasty, Narmer. Predominantly native Egyptian rule lasted until the conquest by the Achaemenid Empire in the sixth century BC.
In 332 BC, Macedonian ruler Alexander the Great conquered Egypt as he toppled the Achaemenids and established the Hellenistic Ptolemaic Kingdom, whose first ruler was one of Alexander's former generals, Ptolemy I Soter. The Ptolemies had to fight native rebellions and were involved in foreign and civil wars that led to the decline of the kingdom and its final annexation by Rome. The death of Cleopatra ended the nominal independence of Egypt resulting in Egypt's becoming one of the provinces of the Roman Empire.
Roman rule in Egypt (including Byzantine) lasted from 30 BC to 641 AD, with a brief interlude of control by the Sasanian Empire between 619–629, known as Sasanian Egypt. After the Muslim conquest of Egypt, parts of Egypt became provinces of successive Caliphates and other Muslim dynasties: Rashidun Caliphate (632-661), Umayyad Caliphate (661–750), Abbasid Caliphate (750–935), Fatimid Caliphate (909–1171), Ayyubid Sultanate (1171–1260), and the Mamluk Sultanate (1250–1517). In 1517, Ottoman sultan Selim I captured Cairo, absorbing Egypt into the Ottoman Empire.
Egypt remained entirely Ottoman until 1867, except during the French occupation from 1798 to 1801. Starting in 1867, Egypt became a nominally autonomous tributary state called the Khedivate of Egypt. However, Khedivate Egypt fell under British control in 1882 following the Anglo-Egyptian War. After the end of World War I and following the Egyptian revolution of 1919, the Kingdom of Egypt was established. While a "de jure" independent state, the United Kingdom retained control over foreign affairs, defense, and other matters. The British occupation lasted until the Anglo-Egyptian agreement of 1954.
The modern Republic of Egypt was founded in 1953, and with the complete withdrawal of British forces from the Suez Canal in 1956, it marked the first time in 2500 years that Egypt was both fully independent and ruled by native Egyptians. President Gamal Abdel Nasser (president from 1956 to 1970) introduced many reforms and created the short-lived United Arab Republic (with Syria). His terms also saw the Six-Day War and the creation of the international Non-Aligned Movement. His successor, Anwar Sadat (president from 1970 to 1981) changed Egypt's trajectory, departing from many of the political, and economic tenets of Nasserism, re-instituting a multi-party system and launching the Infitah economic policy. He led Egypt in the Yom Kippur War of 1973 to regain Egypt's Sinai Peninsula, which Israel had occupied since the Six-Day War of 1967. This later led to the Egypt–Israel Peace Treaty.
Recent Egyptian history has been dominated by events following nearly thirty years of rule by the former president Hosni Mubarak. The Egyptian revolution of 2011 deposed Mubarak and resulted in the first democratically elected president in Egyptian history, Mohamed Morsi. Unrest after the 2011 revolution and related disputes led to the 2013 Egyptian coup d'état.
There is evidence of petroglyphs along the Nile terraces and in desert oases. In the 10th millennium BC, a culture of hunter-gatherers and fishermen was replaced by a grain-grinding culture. Climate changes and/or overgrazing around 6000 BC began to desiccate the pastoral lands of Egypt, forming the Sahara. Early tribal peoples migrated to the Nile River, where they developed a settled agricultural economy and more centralized society.
By about 6000 BC, a Neolithic culture had taken root in the Nile Valley. During the Neolithic era, several predynastic cultures developed independently in Upper and Lower Egypt. The Badari culture and the successor Naqada series are generally regarded as precursors to dynastic Egypt. The earliest known Lower Egyptian site, Merimda, predates the Badarian by about seven hundred years. Contemporaneous Lower Egyptian communities coexisted with their southern counterparts for more than two thousand years, remaining culturally distinct, but maintaining frequent contact through trade. The earliest known evidence of Egyptian hieroglyphic inscriptions appeared during the predynastic period on Naqada III pottery vessels, dated to about 3200 BC.
A unified kingdom was formed in 3150 BC by King Menes, leading to a series of dynasties that ruled Egypt for the next three millennia. Egyptian culture flourished during this long period and remained distinctively Egyptian in its religion, arts, language and customs. The first two ruling dynasties of a unified Egypt set the stage for the Old Kingdom period ("c". 2700–2200 BC), which constructed many pyramids, most notably the Third Dynasty pyramid of Djoser and the Fourth Dynasty Giza Pyramids.
The First Intermediate Period ushered in a time of political upheaval for about 150 years. Stronger Nile floods and the stabilization of government, however, brought renewed prosperity to the country in the Middle Kingdom "c". 2040 BC, reaching a peak during the reign of Pharaoh Amenemhat III. A second period of disunity heralded the arrival of the first foreign ruling dynasty in Egypt, that of the Semitic-speaking Hyksos. The Hyksos invaders took over much of Lower Egypt around 1650 BC and founded a new capital at Avaris. They were driven out by an Upper Egyptian force led by Ahmose I, who founded the Eighteenth Dynasty and relocated the capital from Memphis to Thebes.
The New Kingdom ("c". 1550–1070 BC) began with the Eighteenth Dynasty, marking the rise of Egypt as an international power that, at its greatest extent, expanded to an empire as far south as Tombos in Nubia and included parts of the Levant in the east. This period is noted for some of the most well known Pharaohs, including Hatshepsut, Thutmose III, Akhenaten and his wife Nefertiti, Tutankhamun and Ramesses II. The first historically attested expression of monotheism came during this period as Atenism, although some consider Atenism to be a form of monolatry rather than of monotheism. Frequent contacts with other nations brought new ideas to the New Kingdom. The country was later invaded and conquered by Libyans, Nubians and Assyrians, but native Egyptians eventually drove them out and regained control of their country.
In the sixth century BC, the Achaemenid Empire conquered Egypt. The entire Twenty-seventh Dynasty of Egypt, from 525 BC to 402 BC, save for Petubastis III, was an entirely Persian-ruled period, with the Achaemenid kings being granted the title of pharaoh. The Thirtieth Dynasty was the last native ruling dynasty during the Pharaonic epoch. It fell to the Persians again in 343 BC after the last native Pharaoh, King Nectanebo II, was defeated in battle.
The Thirty-first Dynasty of Egypt, also known as the Second Egyptian Satrapy, was effectively a short-lived province of the Achaemenid Empire between 343 and 332 BC. After an interval of independence, during which three indigenous dynasties reigned (the 28th, 29th and 30th dynasties), Artaxerxes III (358–338 BC) reconquered the Nile valley for a brief second period (343–332 BC), known as the Thirty-first Dynasty of Egypt, thus starting another period of pharaohs of Persian origin.
A team led by Johannes Krause managed the first reliable sequencing of the genomes of 90 mummified individuals in 2017. Whilst not conclusive, because of the non-exhaustive time frame and restricted location that the mummies represent, their study nevertheless showed that these Ancient Egyptians "closely resembled ancient and modern Near Eastern populations, especially those in the Levant, and had almost no DNA from sub-Saharan Africa. What's more, the genetics of the mummies remained remarkably consistent even as different powers—including Nubians, Greeks, and Romans—conquered the empire".
The Ptolemaic Kingdom was a powerful Hellenistic state extending from southern Syria in the east, to Cyrene to the west, and south to the frontier with Nubia. Alexandria became the capital city and a center of Greek culture and trade. To gain recognition by the native Egyptian populace, they named themselves as the successors to the Pharaohs. The later Ptolemies took on Egyptian traditions, had themselves portrayed on public monuments in Egyptian style and dress, and participated in Egyptian religious life.
The last ruler from the Ptolemaic dynasty was Cleopatra, who committed suicide following the burial of her lover Mark Antony, who had died in her arms (from a self-inflicted stab wound) after Augustus had captured Alexandria and her mercenary forces had fled.
The Ptolemies faced rebellions of native Egyptians, often caused by an unwanted regime, and were involved in foreign and civil wars that led to the decline of the kingdom and its annexation by Rome. Nevertheless, Hellenistic culture continued to thrive in Egypt well after the Muslim conquest. The native Egyptian/Coptic culture continued to exist as well (the Coptic language itself was Egypt's most widely spoken language until at least the 10th century).
Egypt quickly became the Empire's breadbasket supplying the greater portion of the Empire's grain in addition to flax, papyrus, glass, and many other finished goods. The city of Alexandria became a key trading outpost for the Roman Empire (by some accounts, the most important for a time). Shipping from Egypt regularly reached India and Ethiopia among other international destinations. It was also a leading (perhaps "the" leading) scientific and technological center of the Empire. Scholars such as Ptolemy, Hypatia, and Heron broke new ground in astronomy, mathematics, and other disciplines. Culturally, the city of Alexandria at times rivaled Rome in its importance.
Christianity reached Egypt relatively early in the evangelist period of the first century (traditionally credited to Mark the Evangelist). Alexandria, Egypt and Antioch, Syria quickly became the leading centers of Christianity. Diocletian's reign marked the transition from the classical Roman to the Late antique/Byzantine era in Egypt, when a great number of Egyptian Christians were persecuted. The New Testament had by then been translated into Egyptian. After the Council of Chalcedon in AD 451, a distinct Egyptian Coptic Church was firmly established.
Sasanian Egypt (known in Middle Persian sources as "Agiptus") refers to the brief rule of Egypt and parts of Libya by the Sasanian Empire, which lasted from 619 to 629, until the Sasanian rebel Shahrbaraz made an alliance with the Byzantine emperor Heraclius and had control over Egypt returned to him.
The Byzantines were able to regain control of the country after a brief Persian invasion early in the 7th century, until 639–42, when Egypt was invaded and conquered by the Arab Islamic Empire. The final loss of Egypt was of incalculable significance to the Byzantine Empire, which had relied on Egypt for many agricultural and manufactured goods.
When they defeated the Byzantine Armies in Egypt, the Arabs brought Sunni Islam to the country. Early in this period, Egyptians began to blend their new faith with their Christian traditions as well as other indigenous beliefs and practices, leading to various Sufi orders that have flourished to this day. These earlier rites had survived the period of Coptic Christianity.
Muslim rulers nominated by the Islamic Caliphate remained in control of Egypt for the next six centuries, with Cairo as the seat of the Caliphate under the Fatimids. With the end of the Kurdish Ayyubid dynasty, the Mamluks, a Turco-Circassian military caste, took control about AD 1250. By the late 13th century, Egypt linked trade between the Red Sea, India, Malaya, and the East Indies. The Greek and Coptic languages and cultures went into a steep decline in favor of Arabic culture (though Coptic managed to last as a spoken language until the 17th century and remains a liturgical language today).
The Mamluks continued to govern the country until the conquest of Egypt by the Ottoman Turks in 1517, after which it became a province of the Ottoman Empire. The mid-14th-century Black Death killed about 40% of Egypt's population.
After the 15th century, the Ottoman invasion pushed the Egyptian system into decline. The defensive militarization damaged its civil society and economic institutions. The weakening of the economic system, combined with the effects of plague, left Egypt vulnerable to foreign invasion, and Portuguese traders took over much of its trade. Egypt suffered six famines between 1687 and 1731; the famine of 1784 cost it roughly one-sixth of its population.
The brief French invasion of Egypt led by Napoleon Bonaparte began in 1798. The campaign eventually led to the discovery of the Rosetta Stone, creating the field of Egyptology. Despite early victories and an initially successful expedition into Syria, Napoleon and his Armée d'Orient were eventually defeated and forced to withdraw, especially after suffering the defeat of the supporting French fleet at the Battle of the Nile.
The expulsion of the French in 1801 by Ottoman, Mamluk, and British forces was followed by four years of anarchy in which Ottomans, Mamluks, and Albanians (who were nominally in the service of the Ottomans) wrestled for power. Out of this chaos, the commander of the Albanian regiment, Muhammad Ali (Kavalali Mehmed Ali Pasha), emerged as a dominant figure, and in 1805 he was acknowledged by the Sultan in Istanbul as his viceroy in Egypt. The title implied subordination to the Sultan, but this was in fact a polite fiction: Ottoman power in Egypt was finished, and Muhammad Ali, an ambitious and able leader, established a dynasty that was to rule Egypt until the revolution of 1952. In later years, the dynasty became a British puppet.
His primary focus was military: he annexed Northern Sudan (1820–1824), Syria (1833), and parts of Arabia and Anatolia; but in 1841 the European powers, fearful lest he topple the Ottoman Empire itself, forced him to return most of his conquests to the Ottomans. He kept the Sudan, however, and his title to Egypt was made hereditary. A more lasting result of his military ambition is that it required him to modernize the country. Eager to adopt the military (and therefore industrial) techniques of the great powers, he sent students to the West and invited training missions to Egypt. He built industries, a system of canals for irrigation and transport, and reformed the civil service.
The introduction in 1820 of long-staple cotton, the Egyptian variety of which became notable, transformed its agriculture into a cash-crop monoculture before the end of the century. The social effects of this were enormous: land ownership became concentrated and many foreigners arrived, shifting production towards international markets.
British indirect rule lasted from 1882, when the British defeated the Egyptian Army at Tel el-Kebir in September and took control of the country, to the 1952 Egyptian revolution, which made Egypt a republic and led to the expulsion of British advisers.
Muhammad Ali was succeeded briefly by his son Ibrahim (in September 1848), then by a grandson Abbas I (in November 1848), then by Said (in 1854), and Isma'il (in 1863).
Abbas I was cautious. Said and Ismail were ambitious developers, but they spent beyond their means. The Suez Canal, built in partnership with the French, was completed in 1869. The cost of this and other projects had two effects: it led to enormous debt to European banks, and caused popular discontent because of the onerous taxation it required. In 1875 Ismail sold Egypt's 44% share in the canal to the British Government. Within three years this led to the imposition of British and French controllers who sat in the Egyptian cabinet, and, "with the financial power of the bondholders behind them, were the real power in the Government."
Local dissatisfaction with Ismail and with European intrusion led to the formation of the first nationalist groupings in 1879, with Ahmad Urabi a prominent figure. In 1882 he became head of a nationalist-dominated ministry committed to democratic reforms including parliamentary control of the budget. Fearing a reduction of their control, Britain and France intervened militarily, bombarding Alexandria and crushing the Egyptian army at the battle of Tel el-Kebir. They reinstalled Ismail's son Tewfik as figurehead of a "de facto" British protectorate.
In 1914, the Protectorate was made official, and the Ottoman Empire no longer had a role. The title for the head of state, which in 1867 had changed from "pasha" to "khedive", was changed again to "sultan". Abbas II was deposed as khedive and replaced by his uncle, Hussein Kamel, as sultan.
In 1906, the Dinshaway Incident prompted many neutral Egyptians to join the nationalist movement. After the First World War, Saad Zaghlul and the Wafd Party led the Egyptian nationalist movement to a majority at the local Legislative Assembly. When the British exiled Zaghlul and his associates to Malta on 8 March 1919, the country arose in its first modern revolution. The revolt led the UK government to issue a unilateral declaration of Egypt's independence on 22 February 1922.
The new government drafted and implemented a constitution in 1923 based on a parliamentary system. Saad Zaghlul was popularly elected as Prime Minister of Egypt in 1924. In 1936, the Anglo-Egyptian Treaty was concluded. Continued instability due to remaining British influence and increasing political involvement by the king led to the dissolution of the parliament in a military "coup d'état" known as the 1952 Revolution. The Free Officers Movement forced King Farouk to abdicate in support of his son Fuad.
British military presence in Egypt lasted until 1954.
On 18 June 1953, the Egyptian Republic was declared, with General Muhammad Naguib as the first President of the Republic. Naguib was forced to resign in 1954 by Gamal Abdel Nasser, the real architect of the 1952 movement, and was later put under house arrest.
Nasser assumed power as President in June 1956. British forces completed their withdrawal from the occupied Suez Canal Zone on 13 June 1956. He nationalized the Suez Canal on 26 July 1956, prompting the 1956 Suez Crisis.
In 1958, Egypt and Syria formed a sovereign union known as the United Arab Republic. The union was short-lived, ending in 1961 when Syria seceded, thus ending the union. During most of its existence, the United Arab Republic was also in a loose confederation with North Yemen (formerly the Mutawakkilite Kingdom of Yemen) known as the United Arab States.
In the 1967 Six-Day War, Israel invaded and occupied Egypt's Sinai Peninsula and the Gaza Strip, which Egypt had occupied since the 1948 Arab–Israeli War. Three years later (1970), President Nasser died and was succeeded by Anwar Sadat.
Sadat switched Egypt's Cold War allegiance from the Soviet Union to the United States, expelling Soviet advisors in 1972. He launched the Infitah economic reform policy, while clamping down on religious and secular opposition.
In 1973, Egypt, along with Syria, launched the October War, a surprise attack against the Israeli forces occupying the Sinai Peninsula and the Golan Heights. It was an attempt to regain part of the Sinai territory that Israel had captured six years earlier. Sadat hoped to seize some territory through military force, and then regain the rest of the peninsula by diplomacy. The conflict sparked an international crisis between the US and the USSR, both of whom intervened. The second UN-mandated ceasefire halted military action. While the war ended with a military stalemate, it presented Sadat with a political victory that later allowed him to regain the Sinai in return for peace with Israel.
Sadat made a historic visit to Israel in 1977, which led to the 1979 peace treaty in exchange for Israeli withdrawal from Sinai. Sadat's initiative sparked enormous controversy in the Arab world and led to Egypt's expulsion from the Arab League, but it was supported by most Egyptians. On 6 October 1981, Sadat and six diplomats were assassinated while observing a military parade commemorating the eighth anniversary of the October 1973 War. He was succeeded by Hosni Mubarak.
In the 1980s, 1990s, and 2000s, terrorist attacks in Egypt became numerous and severe, and began to target Copts and foreign tourists as well as government officials. Some scholars and authors have credited Islamist writer Sayyid Qutb, who was executed in 1966, as the inspiration for the new wave of attacks.
The 1990s saw an Islamist group, al-Gama'a al-Islamiyya, engage in an extended campaign of violence, from the murders and attempted murders of prominent writers and intellectuals, to the repeated targeting of tourists and foreigners. Serious damage was done to the largest sector of Egypt's economy—tourism—and in turn to the government, but it also devastated the livelihoods of many of the people on whom the group depended for support.
Victims of the campaign against the Egyptian state from 1992 to 1997 exceeded 1,200 and included the head of the counter-terrorism police (Major General Raouf Khayrat), a speaker of parliament (Rifaat el-Mahgoub), dozens of European tourists and Egyptian bystanders, and over 100 Egyptian police officers. At times, travel by foreigners in parts of Upper Egypt was severely restricted and dangerous. On 17 November 1997, 62 people, mostly tourists, were killed near Luxor after the assailants trapped them in the Mortuary Temple of Hatshepsut. During this period, Al-Gama'a al-Islamiyya received support from the governments of Iran and Sudan, as well as from al-Qaeda, while the Egyptian government received support from the United States.
In 2003, the "Kefaya" ("Egyptian Movement for Change"), was launched to oppose the Mubarak regime and to establish democratic reforms and greater civil liberties.
On 25 January 2011, widespread protests began against Mubarak's government. The objective of the protest was the removal of Mubarak from power. These took the form of an intensive campaign of civil resistance supported by a very large number of people and mainly consisting of continuous mass demonstrations. By 29 January, it was becoming clear that Mubarak's government had lost control when a curfew order was ignored, and the army took a semi-neutral stance on enforcing the curfew decree.
On 11 February 2011, Mubarak resigned and fled Cairo. Vice President Omar Suleiman announced that Mubarak had stepped down and that the Egyptian military would assume control of the nation's affairs in the short term. Jubilant celebrations broke out in Tahrir Square at the news. Mubarak may have left Cairo for Sharm el-Sheikh the previous night, before or shortly after the airing of a taped speech in which Mubarak vowed he would not step down or leave.
On 13 February 2011, the high level military command of Egypt announced that both the constitution and the parliament of Egypt had been dissolved. The parliamentary election was to be held in September.
A constitutional referendum was held on 19 March 2011. On 28 November 2011, Egypt held its first parliamentary election since the Mubarak regime fell. Turnout was high and there were no reports of violence, although members of some parties broke the ban on campaigning at polling places by handing out pamphlets and banners. There were, however, complaints of irregularities.
The first round of a presidential election was held in Egypt on 23 and 24 May 2012. Mohamed Morsi won 25% of the vote and Ahmed Shafik, the last prime minister under deposed leader Hosni Mubarak, 24%. A second round was held on 16 and 17 June. On 24 June 2012, the election commission announced that Mohamed Morsi had won the election, making him the first democratically elected president of Egypt. According to official results, Morsi took 51.7 percent of the vote while Shafik received 48.3 percent.
On 8 July 2012, Egypt's new president Mohamed Morsi announced he was overriding the military edict that dissolved the country's elected parliament and called lawmakers back into session.
On 10 July 2012, the Supreme Constitutional Court of Egypt negated Morsi's decision to call the nation's parliament back into session. On 2 August 2012, Egypt's Prime Minister Hisham Qandil announced his 35-member cabinet, including 28 newcomers; four ministers came from the influential Muslim Brotherhood, while six others, among them the former interim military ruler Mohamed Hussein Tantawi as Defence Minister, were carried over from the previous government.
On 22 November 2012, Morsi issued a declaration immunizing his decrees from challenge and seeking to protect the work of the constituent assembly drafting the new constitution. The declaration also requires a retrial of those accused in the Mubarak-era killings of protesters, who had been acquitted, and extends the mandate of the constituent assembly by two months. Additionally, the declaration authorizes Morsi to take any measures necessary to protect the revolution. Liberal and secular groups previously walked out of the constitutional constituent assembly because they believed that it would impose strict Islamic practices, while Muslim Brotherhood backers threw their support behind Morsi.
The move was criticized by Mohamed ElBaradei, the leader of Egypt's Constitution Party, who stated "Morsi today usurped all state powers & appointed himself Egypt's new pharaoh" on his Twitter feed. The move led to massive protests and violent action throughout Egypt. On 5 December 2012, tens of thousands of supporters and opponents of Egypt's president clashed, hurling rocks and Molotov cocktails and brawling in Cairo's streets, in what was described as the largest violent battle between Islamists and their foes since the country's revolution. Six senior advisors and three other officials resigned from the government, and the country's leading Islamic institution called on Morsi to curb his powers. Protests also spread from coastal cities to desert towns.
Morsi offered a "national dialogue" with opposition leaders but refused to cancel a 15 December vote on a draft constitution written by an Islamist-dominated assembly, a vote which had ignited two weeks of political unrest.
A constitutional referendum was held in two rounds on 15 and 22 December 2012, and the constitution was approved with 64% support, on a turnout of about 33%. It was signed into law by a presidential decree issued by Morsi on 26 December 2012. On 3 July 2013, the constitution was suspended by order of the Egyptian army.
On 30 June 2013, on the first anniversary of the election of Morsi, millions of protesters across Egypt took to the streets and demanded the immediate resignation of the president. On 1 July, the Egyptian Armed Forces issued a 48-hour ultimatum that gave the country's political parties until 3 July to meet the demands of the Egyptian people. The presidency rejected the Egyptian Army's 48-hour ultimatum, vowing that the president would pursue his own plans for national reconciliation to resolve the political crisis. On 3 July, General Abdel Fattah el-Sisi, head of the Egyptian Armed Forces, announced that he had removed Morsi from power, suspended the constitution and would be calling new presidential and Shura Council elections and named Supreme Constitutional Court's leader, Adly Mansour as acting president. Mansour was sworn in on 4 July 2013.
In the months after the coup d'état, a new constitution was prepared and took effect on 18 January 2014, with presidential and parliamentary elections to be held by June 2014. On 24 March 2014, 529 of Morsi's supporters were sentenced to death, while the trial of Morsi himself was still ongoing. In the final judgement, 492 of the sentences were commuted to life imprisonment, with 37 death sentences upheld. On 28 April, another mass trial took place, with 683 Morsi supporters sentenced to death for the killing of a police officer. In 2015, Egypt participated in the Saudi Arabian-led intervention in Yemen.
In the presidential election of June 2014, el-Sisi won with 96.1% of the vote. Under President el-Sisi, Egypt has implemented a rigorous policy of controlling the border with the Gaza Strip, including the dismantling of tunnels between the Gaza Strip and Sinai. | https://en.wikipedia.org/wiki?curid=13588
House
A house is a single-unit residential building, which may range in complexity from a rudimentary hut to a complex structure of wood, masonry, concrete or other material, outfitted with plumbing, electrical, and heating, ventilation, and air conditioning (HVAC) systems. Houses use a range of different roofing systems to keep precipitation such as rain from getting into the dwelling space. Houses may have doors or locks to secure the dwelling space and protect its inhabitants and contents from burglars or other trespassers. Most conventional modern houses in Western cultures will contain one or more bedrooms and bathrooms, a kitchen or cooking area, and a living room. A house may have a separate dining room, or the eating area may be integrated into another room. Some large houses in North America have a recreation room. In traditional agriculture-oriented societies, domestic animals such as chickens or larger livestock (like cattle) may share part of the house with humans.
The social unit that lives in a house is known as a household. Most commonly, a household is a family unit of some kind, although households may also be other social groups, such as roommates or, in a rooming house, unconnected individuals. Some houses only have a dwelling space for one family or similar-sized group; larger houses called townhouses or row houses may contain numerous family dwellings in the same structure. A house may be accompanied by outbuildings, such as a garage for vehicles or a shed for gardening equipment and tools. A house may have a backyard or front yard, which serve as additional areas where inhabitants can relax or eat.
The English word "house" derives directly from the Old English "hus" meaning "dwelling, shelter, home, house," which in turn derives from Proto-Germanic "husan" (reconstructed by etymological analysis) which is of unknown origin. The house itself gave rise to the letter 'B' through an early Proto-Semitic hieroglyphic symbol depicting a house. The symbol was called "bayt", "bet" or "beth" in various related languages, and became "beta", the Greek letter, before it was used by the Romans. "Beit" in Arabic means house, while in Maltese "bejt" refers to the roof of the house.
Ideally, architects of houses design rooms to meet the needs of the people who will live in the house. Feng shui, originally a Chinese method of moving houses according to such factors as rain and micro-climates, has recently expanded its scope to address the design of interior spaces, with a view to promoting harmonious effects on the people living inside the house, although no actual effect has ever been demonstrated. Feng shui can also mean the "aura" in or around a dwelling, making it comparable to the real estate sales concept of "indoor-outdoor flow".
The square footage of a house in the United States reports the area of "living space", excluding the garage and other non-living spaces. The "square metres" figure of a house in Europe reports the area of the walls enclosing the home, and thus includes any attached garage and non-living spaces. The number of floors or levels making up the house can affect the square footage of a home.
Humans often build houses for domestic or wild animals, often resembling smaller versions of human domiciles. Familiar animal houses built by humans include birdhouses, henhouses and doghouses, while housed agricultural animals more often live in barns and stables.
Many houses have several large rooms with specialized functions and several smaller rooms for various other purposes. These may include a living/eating area, a sleeping area, and (if suitable facilities and services exist) separate or combined washing and lavatory areas. Some larger properties may also feature rooms such as a spa room, indoor pool, indoor basketball court, and other 'non-essential' facilities. In traditional agriculture-oriented societies, domestic animals such as chickens or larger livestock often share part of the house with humans. Most conventional modern houses will at least contain a bedroom, bathroom, kitchen or cooking area, and a living room.
The names of parts of a house often echo the names of parts of other buildings, but could typically include:
Little is known about the earliest origin of the house and its interior; however, it can be traced back to the simplest forms of shelter. The Roman architect Vitruvius claimed that the first form of architecture was a frame of timber branches finished in mud, also known as the primitive hut.
Philip Tabor later credited 17th-century Dutch houses as the foundation of houses today.
In the Middle Ages, the Manor Houses facilitated different activities and events. Furthermore, these houses accommodated numerous people, including family, relatives, employees, servants and their guests. Their lifestyles were largely communal: areas such as the Great Hall enforced the custom of communal dining and meetings, while the Solar was intended for shared sleeping.
During the 15th and 16th centuries, the Italian Renaissance Palazzo consisted of plentiful interconnected rooms. Unlike the Manor Houses, most rooms of the palazzo had no fixed purpose, yet each was given several doors. These doors adjoined rooms that Robin Evans describes as a "matrix of discrete but thoroughly interconnected chambers." The layout allowed occupants to walk freely from room to room through one door after another, thus breaking the boundaries of privacy.
An early example of the segregation of rooms, and the consequent enhancement of privacy, may be found in 1597 at the Beaufort House built in Chelsea, London. It was designed by English architect John Thorpe, who wrote on his plans, "A Long Entry through all". The separation of the passageway from the room developed the function of the corridor. This new extension was revolutionary at the time, allowing one door per room, with every room connected to the same corridor. English architect Sir Roger Pratt states "the common way in the middle through the whole length of the house, [avoids] the offices from one molesting the other by continual passing through them." Social hierarchies within the 17th century were highly regarded, and architecture came to embody the separation between servants and the upper class. More privacy is offered to the occupant as Pratt further claims, "the ordinary servants may never publicly appear in passing to and fro for their occasions there." This social divide between rich and poor favored the physical incorporation of the corridor into housing by the 19th century.
Sociologist Witold Rybczynski wrote, "the subdivision of the house into day and night uses, and into formal and informal areas, had begun." Rooms were changed from public to private as single entryways forced notions of entering a room with a specific purpose.
Compared to the large-scale houses of England and the Renaissance, the 17th-century Dutch house was smaller and was inhabited by only four to five members. This was because the Dutch embraced "self-reliance", in contrast to the dependence on servants, and a design for a lifestyle centered on the family. It was important for the Dutch to separate work from domesticity, as the home became an escape and a place of comfort. This way of living and the home have been noted as highly similar to the contemporary family and their dwellings.
By the end of the 17th century, the house layout had been transformed to become employment-free, reinforcing these ideas for the future. This suited the industrial revolution, with its large-scale factory production and workers. The Dutch house layout and its functions remain relevant today.
In the American context, some professions, such as doctors, in the 19th and early 20th century typically operated out of the front room or parlor or had a two-room office on their property, which was detached from the house. By the mid-20th-century, the increase in high-tech equipment created a marked shift whereby the contemporary doctor typically worked from an office or hospital.
The introduction of technology and electronic systems within the house has called into question notions of privacy as well as the segregation of work from home. Technological advances in surveillance and communications allow insight into personal habits and private lives. As a result, the "private becomes ever more public, [and] the desire for a protective home life increases, fuelled by the very media that undermine it," writes Jonathan Hill. Work has been altered by the increase of communications; the "deluge of information" has conveniently brought the efforts of work inside the house. Although commuting is reduced, the desire to separate working and living remains apparent. On the other hand, some architects have designed homes in which eating, working and living are brought together.
In many parts of the world, houses are constructed using scavenged materials. In Manila's Payatas neighborhood, slum houses are often made of material sourced from a nearby garbage dump. In Dakar, it is common to see houses made of recycled materials standing atop a mixture of garbage and sand which serves as a foundation. The garbage-sand mixture is also used to protect the house from flooding.
In the United States, modern house construction techniques include light-frame construction (in areas with access to supplies of wood) and adobe or sometimes rammed-earth construction (in arid regions with scarce wood resources). Some areas use brick almost exclusively, and quarried stone has long provided foundations and walls. To some extent, aluminum and steel have displaced some traditional building materials. Increasingly popular alternative construction materials include insulating concrete forms (foam forms filled with concrete), structural insulated panels (foam panels faced with oriented strand board or fiber cement), light-gauge steel, and steel framing. More generally, people often build houses out of the nearest available material, and often tradition or culture governs the choice of construction materials, so whole towns, areas, counties or even states/countries may be built out of one main type of material. For example, a large portion of American houses use wood, while most British and many European houses use stone, brick, or mud.
In the early 20th century, some house designers started using prefabrication. Sears, Roebuck & Co. first marketed their Sears Catalog Homes to the general public in 1908. Prefab techniques became popular after World War II. At first, framing for small interior rooms was prefabricated; later, whole walls were prefabricated and carried to the construction site. The original impetus was to use the labor force inside a shelter during inclement weather. More recently, builders have begun to collaborate with structural engineers who use finite element analysis to design prefabricated steel-framed homes with known resistance to high wind loads and seismic forces. These newer products provide labor savings, more consistent quality, and possibly accelerated construction processes.
Lesser-used construction methods have gained (or regained) popularity in recent years. Though not in wide use, these methods frequently appeal to homeowners who may become actively involved in the construction process. They include:
In the developed world, energy conservation has grown in importance in house design. Housing produces a major proportion of carbon emissions (studies have shown that it accounts for 30% of the total in the United Kingdom).
Development of a number of types and techniques continues. They include the zero-energy house, the passive solar house, autonomous buildings, superinsulated houses, and houses built to the "Passivhaus" standard.
Buildings of historical importance are subject to legal restrictions. New houses in the UK are not covered by the Sale of Goods Act; when purchasing a new house, the buyer has different legal protection than when buying other products. New houses in the UK are instead covered by a National House Building Council guarantee.
With the growth of dense settlement, humans designed ways of identifying houses and parcels of land. Individual houses sometimes acquire proper names, and those names may acquire in their turn considerable emotional connotations. For example, the house of "Howards End" or the castle of "Brideshead Revisited". A more systematic and general approach to identifying houses may use various methods of house numbering.
Houses may express the circumstances or opinions of their builders or their inhabitants. Thus, a vast and elaborate house may serve as a sign of conspicuous wealth whereas a low-profile house built of recycled materials may indicate support of energy conservation. Houses of particular historical significance (former residences of the famous, for example, or even just very old houses) may gain a protected status in town planning as examples of built heritage or of streetscape. Commemorative plaques may mark such structures. Home ownership provides a common measure of prosperity in economics. Contrast the importance of house-destruction, tent dwelling and house rebuilding in the wake of many natural disasters.
Java applet
Java applets were small applications written in the Java programming language, or another programming language that compiles to Java bytecode, and delivered to users in the form of Java bytecode. The user launched the Java applet from a web page, and the applet was then executed within a Java virtual machine (JVM) in a process separate from the web browser itself. A Java applet could appear in a frame of the web page, in a new application window, or in Sun's AppletViewer, a stand-alone tool for testing applets.
Java applets were introduced in the first version of the Java language, released in 1995. Beginning in 2013, major web browsers began to phase out support for the underlying technology applets used to run, and applets became completely unable to run by 2015–2017. Java applets were deprecated as of Java 9 in 2017 and removed from Java SE 11 (18.9), released in September 2018.
Java applets were usually written in Java, but other languages such as Jython, JRuby, Pascal, Scala, or Eiffel (via SmartEiffel) may be used as well.
Java applets run at very fast speeds and until 2011, they were many times faster than JavaScript. Unlike JavaScript, Java applets had access to 3D hardware acceleration, making them well-suited for non-trivial, computation-intensive visualizations. As browsers have gained support for hardware-accelerated graphics thanks to the canvas technology (or specifically WebGL in the case of 3D graphics), as well as just-in-time compiled JavaScript, the speed difference has become less noticeable.
Since Java bytecode is cross-platform (or platform independent), Java applets can be executed by browsers (or other clients) for many platforms, including Microsoft Windows, FreeBSD, Unix, macOS and Linux. They cannot be run on modern mobile devices, which do not support Java.
Applets are used to provide interactive features to web applications that cannot be provided by HTML alone. They can capture mouse input and also have controls like buttons or check boxes. In response to user actions, an applet can change the provided graphic content. This makes applets well-suited for demonstration, visualization, and teaching. There are online applet collections for studying various subjects, from physics to heart physiology.
An applet can also be a text area only; providing, for instance, a cross-platform command-line interface to some remote system. If needed, an applet can leave the dedicated area and run as a separate window. However, applets have very little control over web page content outside the applet's dedicated area, so they are less useful for improving the site appearance in general, unlike other types of browser extensions (while applets like news tickers or WYSIWYG editors are also known). Applets can also play media in formats that are not natively supported by the browser.
Pages coded in HTML may embed parameters within them that are passed to the applet. Because of this, the same applet may have a different appearance depending on the parameters that were passed.
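As a sketch of this mechanism (the class name "ChartApplet.class" and both parameter names are hypothetical, used only for illustration), parameters are written as "param" elements nested inside the embedding tag:

```html
<!-- Hypothetical applet: "ChartApplet.class" and the parameter
     names are illustrative, not taken from a real applet. -->
<applet code="ChartApplet.class" width="400" height="300">
  <param name="title" value="Quarterly sales">
  <param name="color" value="blue">
  Your browser does not support Java applets.
</applet>
```

Inside the applet, each value would be read with the Applet method getParameter("title") and so on, which is how the same bytecode can present a different appearance on different pages.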
As applets were available before CSS and DHTML were standard, they were also widely used for trivial effects such as rollover navigation buttons. This approach, which posed major problems for accessibility and misused system resources, is no longer in use and was strongly discouraged even at the time.
Java applets are executed in a "sandbox" by most web browsers, preventing them from accessing local data like the clipboard or file system. The code of the applet is downloaded from a web server, after which the browser either embeds the applet into a web page or opens a new window showing the applet's user interface.
A Java applet extends the class java.applet.Applet, or in the case of a Swing applet, javax.swing.JApplet. The class that must override methods from the applet class to set up a user interface inside itself (Applet) is a descendant of Panel, which is a descendant of Container. As Applet inherits from Container, it has largely the same user interface possibilities as an ordinary Java application, including regions with user-specific visualization.
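This inheritance chain can be confirmed with a short reflection sketch; note that this is a standalone program rather than an applet, and it runs only on JDKs that still ship the deprecated java.applet package:

```java
// Walks the superclass chain of java.applet.Applet and prints
// "Applet -> Panel -> Container -> Component -> Object" on JDKs
// that still include the Applet API.
public class AppletHierarchy {
    public static void main(String[] args) {
        Class<?> c = java.applet.Applet.class;
        StringBuilder chain = new StringBuilder(c.getSimpleName());
        // getSuperclass() returns null once Object is reached.
        while ((c = c.getSuperclass()) != null) {
            chain.append(" -> ").append(c.getSimpleName());
        }
        System.out.println(chain);
    }
}
```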
The first implementations involved downloading an applet class by class. While classes are small files, there are often many of them, so applets got a reputation as slow-loading components. However, since .jars were introduced, an applet is usually delivered as a single file that has a size similar to an image file (hundreds of kilobytes to several megabytes).
The domain from where the applet executable has been downloaded is the only domain to which the usual (unsigned) applet is allowed to communicate. This domain can be different from the domain where the surrounding HTML document is hosted.
Java system libraries and runtimes are backwards-compatible, allowing one to write code that runs both on current and on future versions of the Java virtual machine.
Many Java developers, blogs and magazines are recommending that the Java Web Start technology be used in place of applets. Java Web Start allows the launching of unmodified applet code, which then runs in a separate window (not inside the invoking browser).
A Java servlet is sometimes informally described as being "like" a server-side applet, but it differs in its language, its functions, and each of the characteristics described here about applets.
The applet can be displayed on the web page by making use of the deprecated "applet" HTML element, or the recommended "object" element. The "embed" element can be used with Mozilla family browsers ("embed" was deprecated in HTML 4 but is included in HTML 5). This specifies the applet's source and location. Both "object" and "embed" tags can also download and install the Java virtual machine (if required) or at least lead to the plugin page. "applet" and "object" tags also support loading of serialized applets that start in some particular (rather than initial) state. Tags also specify the message that shows up in place of the applet if the browser cannot run it for any reason.
However, despite "object" being officially a recommended tag, as of 2010, support for the "object" tag was not yet consistent among browsers, and Sun kept recommending the older "applet" tag for deploying in multibrowser environments, as it remained the only tag consistently supported by the most popular browsers. To support multiple browsers, the "object" tag currently requires JavaScript (that recognizes the browser and adjusts the tag), usage of additional browser-specific tags, or delivering adapted output from the server side. Deprecating the "applet" tag has been criticized. Oracle now provides maintained JavaScript code to launch applets with cross-platform workarounds.
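As an illustration, the two embedding forms for the "HelloWorld" applet discussed below might look as follows; the markup is a sketch with conventional attribute values, not a tested deployment:

```html
<!-- Older "applet" form, deprecated in HTML 4 but long the most
     consistently supported across browsers: -->
<applet code="HelloWorld.class" width="200" height="40"></applet>

<!-- Recommended "object" form; the MIME type shown is the
     conventional identifier for the Java plug-in: -->
<object type="application/x-java-applet" width="200" height="40">
  <param name="code" value="HelloWorld.class">
  Java support is required to view this applet.
</object>
```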
The Java browser plug-in relies on NPAPI, which many web browser vendors are deprecating due to its age and security issues. In January 2016, Oracle announced that Java runtime environments based on JDK 9 will discontinue the browser plug-in.
The following example illustrates the use of Java applets through the java.applet package. The example also uses classes from the Java Abstract Window Toolkit (AWT) to produce the message "Hello, world!" as output.
import java.applet.*;
import java.awt.*;

// Applet code for the "Hello, world!" example.
// This should be saved in a file named "HelloWorld.java".
public class HelloWorld extends Applet {
    // paint() is called by the JVM to draw the applet's area;
    // this prints the message at position (65, 95) inside it.
    public void paint(Graphics g) {
        g.drawString("Hello, world!", 65, 95);
    }
}
Simple applets are shared freely on the Internet for customizing applications that support plugins.
After compilation, the resulting .class file can be placed on a web server and invoked within an HTML page by using an "applet" or an "object" tag. For example:
When the page is accessed it will read as follows:
To minimize download time, applets can be delivered in the form of a jar file. In the case of this example, if all necessary classes are placed in the compressed archive "example.jar", the following embedding code could be used instead:
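Such an embedding tag might look like the following sketch, which uses the "archive" attribute of the "applet" element with the example names above:

```html
<!-- Fetches the single compressed archive "example.jar" instead
     of downloading the applet class by class. -->
<applet code="HelloWorld.class" archive="example.jar"
        width="200" height="40"></applet>
```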
Applet inclusion is described in detail in Sun's official page about the APPLET tag.
A Java applet can have any or all of the following advantages:
A Java applet may have any of the following disadvantages compared to other client-side web technologies:
Sun made considerable efforts to ensure compatibility is maintained between Java versions as they evolve, enforcing Java portability by law if required. Oracle seems to be continuing the same strategy.
The 1997 lawsuit was filed after Microsoft created a modified Java virtual machine of its own, which shipped with Internet Explorer. Microsoft added about 50 methods and 50 fields into the classes within the "java.awt", "java.lang", and "java.io" packages. Other modifications included removal of RMI capability and replacement of the Java native interface from JNI to RNI, a different standard. RMI was removed because it only easily supports Java-to-Java communications and competes with Microsoft DCOM technology. Applets that relied on these changes or just inadvertently used them worked only within Microsoft's Java system. Sun sued for breach of trademark, as the point of Java was that there should be no proprietary extensions and that code should work everywhere. Microsoft agreed to pay Sun $20 million, and Sun agreed to grant Microsoft a limited license to use Java without modifications only and for a limited time.
Microsoft continued to ship its own unmodified Java virtual machine. Over the years it became extremely outdated, yet remained the default for Internet Explorer. A later study revealed that applets of this time often contain their own classes that mirror Swing and other newer features in a limited way. In 2002, Sun filed an antitrust lawsuit, claiming that Microsoft's attempts at illegal monopolization had harmed the Java platform. Sun demanded Microsoft distribute Sun's current, binary implementation of Java technology as part of Windows, distribute it as a recommended update for older Microsoft desktop operating systems and stop the distribution of Microsoft's Virtual Machine (as its licensing time, agreed in the prior lawsuit, had expired). Microsoft paid $700 million for pending antitrust issues, another $900 million for patent issues and a $350 million royalty fee to use Sun's software in the future.
There are two applet types with very different security models: signed applets and unsigned applets. As of Java SE 7 Update 21 (April 2013), applets and Web Start apps are encouraged to be signed with a trusted certificate, and warning messages appear when running unsigned applets. Further, starting with Java 7 Update 51, unsigned applets are blocked by default; they can be run by creating an exception in the Java Control Panel.
Limits on unsigned applets are understood as "draconian": they have no access to the local filesystem and web access limited to the applet download site; there are also many other important restrictions. For instance, they cannot access all system properties, use their own class loader, call native code, execute external commands on a local system or redefine classes belonging to core packages included as part of a Java release. While they can run in a standalone frame, such frame contains a header, indicating that this is an untrusted applet. Successful initial call of the forbidden method does not automatically create a security hole as an access controller checks the entire stack of the calling code to be sure the call is not coming from an improper location.
As with any complex system, many security problems have been discovered and fixed since Java was first released. Some of these (like the Calendar serialization security bug) persisted for many years with nobody being aware. Others have been discovered in use by malware in the wild.
Some studies mention applets crashing the browser or overusing CPU resources but these are classified as nuisances and not as true security flaws. However, unsigned applets may be involved in combined attacks that exploit a combination of multiple severe configuration errors in other parts of the system. An unsigned applet can also be more dangerous to run directly on the server where it is hosted because while code base allows it to talk with the server, running inside it can bypass the firewall. An applet may also try DoS attacks on the server where it is hosted, but usually people who manage the web site also manage the applet, making this unreasonable. Communities may solve this problem via source code review or running applets on a dedicated domain.
The unsigned applet can also try to download malware hosted on the originating server. However, it could only store such a file in a temporary folder (as it is transient data) and has no means to complete the attack by executing it. There were attempts to use applets for spreading Phoenix and Siberia exploits this way, but these exploits do not use Java internally and were also distributed in several other ways.
A signed applet contains a signature that the browser should verify through a remotely running, independent certificate authority server. Producing this signature involves specialized tools and interaction with the authority server maintainers. Once the signature is verified, and the user of the current machine also approves, a signed applet can get more rights, becoming equivalent to an ordinary standalone program. The rationale is that the author of the applet is now known and will be responsible for any deliberate damage. This approach allows applets to be used for many tasks that are otherwise not possible by client-side scripting. However, this approach requires more responsibility from the user, deciding whom he or she trusts. The related concerns include a non-responsive authority server, wrong evaluation of the signer identity when issuing certificates, and known applet publishers still doing something that the user would not approve of. Hence signed applets that appeared from Java 1.1 may actually have more security concerns.
Self-signed applets, which are applets signed by the developer themselves, may potentially pose a security risk; java plugins provide a warning when requesting authorization for a self-signed applet, as the function and safety of the applet is guaranteed only by the developer itself, and has not been independently confirmed. Such self-signed certificates are usually only used during development prior to release where third-party confirmation of security is unimportant, but most applet developers will seek third-party signing to ensure that users trust the applet's safety.
Java security problems are not fundamentally different from similar problems of any client-side scripting platform. In particular, all issues related to signed applets also apply to Microsoft ActiveX components.
As of 2014, self-signed and unsigned applets are no longer accepted by the commonly available Java plugins or Java Web Start. Consequently, developers who wish to deploy Java applets have no alternative but to acquire trusted certificates from commercial sources.
Alternative technologies exist (for example, JavaScript) that satisfy all or more of the scope of what is possible with an applet. JavaScript can coexist with applets in the same page, assist in launching applets (for instance, in a separate frame or providing platform workarounds) and later be called from the applet code. JavaFX is an extension of the Java platform and may also be viewed as an alternative.
Heathrow Airport
Heathrow Airport, originally called London Airport (until 1966) and now known as London Heathrow, is a major international airport in London, United Kingdom. Heathrow is the second busiest airport in the world by international passenger traffic, as well as the busiest airport in Europe by passenger traffic, and the seventh busiest airport in the world by total passenger traffic. It is one of six international airports serving the London region. In 2019, it handled a record 80.8 million passengers, a 0.9% increase from 2018, as well as 475,861 aircraft movements, a decrease of 1,743 from 2018. The airport facility is owned and operated by Heathrow Airport Holdings.
Heathrow lies 14 miles (23 km) west of Central London, and has two parallel east–west runways along with four operational terminals on a site that covers . The airport is the primary hub for British Airways and the primary operating base for Virgin Atlantic.
In September 2012, the Government of the United Kingdom established the Airports Commission, an independent commission chaired by Sir Howard Davies to examine various options for increasing capacity at UK airports. In July 2015, the commission backed a third runway at Heathrow, which the government approved in October 2016. However, the England and Wales Court of Appeal rejected this plan for a third runway at Heathrow, due to concerns about climate change and the environmental impact of aviation.
Heathrow is west of central London, on a parcel of land that is designated part of the Metropolitan Green Belt. It is located west of the town of Hounslow, 3 miles south of Hayes, and 3 miles north-east of Staines-upon-Thames.
The airport is surrounded by the villages of Harlington, Harmondsworth, and Longford to the north and the neighbourhoods of Cranford and Hatton to the east. To the south lie Feltham, Bedfont and Stanwell while to the west Heathrow is separated from Wraysbury, Horton and Windsor in Berkshire by the M25 motorway. Heathrow falls entirely within the boundaries of the London Borough of Hillingdon, and under the Twickenham postcode area, with the postcode TW6. The airport is located within the Hayes and Harlington parliamentary constituency.
As the airport is located west of London and as its runways run east–west, an airliner's landing approach is usually directly over the conurbation of London when the wind is from the west, which is most of the time.
Along with Gatwick, Stansted, Luton, Southend and London City, Heathrow is one of six airports with scheduled services serving the London area.
Heathrow Airport originated in 1929 as a small airfield (Great West Aerodrome) on land south-east of the hamlet of Heathrow from which the airport takes its name. At that time the land consisted of farms, market gardens and orchards; there was a "Heathrow Farm" approximately where the modern Terminal 2 is situated, a "Heathrow Hall" and a "Heathrow House." This hamlet was largely along a country lane (Heathrow Road), which ran roughly along the east and south edges of the present central terminals area.
Development of the whole Heathrow area as a much larger airport began in 1944. It was stated to be for long-distance military aircraft bound for the Far East; by the time the airfield was nearing completion, World War II had ended, and the UK Government continued to develop the airport as a civil airport. The airport was opened on 25 March 1946 as London Airport and was renamed Heathrow Airport in 1966. The layout for the airport was designed by Sir Frederick Gibberd, who designed the original terminals and central area buildings, including the original control tower and the multi-faith Chapel of St George's.
Heathrow Airport is used by over 80 airlines flying to 185 destinations in 84 countries. The airport is the primary hub of British Airways and is a base for Virgin Atlantic. It has four passenger terminals (numbered 2 to 5) and a cargo terminal. Of Heathrow's 78 million passengers in 2017, 94% were international travellers; the remaining 6% were bound for (or arriving from) places in the UK. The busiest single destination in passenger numbers is New York, with over 3 million passengers flying between Heathrow and JFK Airport in 2013.
In the 1950s, Heathrow had six runways, arranged in three pairs at different angles in the shape of a hexagram, with the permanent passenger terminal in the middle and the older terminal along the north edge of the field; two of its runways would always be within 30° of the wind direction. As the required length for runways has grown, Heathrow now has only two parallel runways running east–west, extended versions of the two east–west runways from the original hexagram. From the air, almost all of the original runways can still be seen, incorporated into the present system of taxiways. North of the northern runway and the former taxiway and aprons, now the site of extensive car parks, is the entrance to the access tunnel and the site of Heathrow's unofficial "gate guardian". For many years this was a 40% scale model of a British Airways Concorde, G-CONC; since 2008 the site has been occupied by a model of an Emirates Airbus A380.
Heathrow Airport has Anglican, Catholic, Free Church, Hindu, Jewish, Muslim and Sikh chaplains. There is a multi-faith prayer room and counselling room in each terminal, in addition to St. George's Interdenominational Chapel in an underground vault adjacent to the old control tower, where Christian services take place. The chaplains organise and lead prayers at certain times in the prayer room.
The airport has its own resident press corps, consisting of six photographers and one TV crew, serving all the major newspapers and television stations around the world.
Most of Heathrow's internal roads are initial letter coded by area: N in the north (e.g. Newall Road), E in the east (e.g. Elmdon Road), S in the south (e.g. Stratford Road), W in the west (e.g. Walrus Road), C in the centre (e.g. Camborne Road).
Aircraft destined for Heathrow are usually routed to one of four holding points.
Air traffic controllers at Heathrow Approach Control (based in Swanwick, Hampshire) then guide the aircraft to their final approach, merging aircraft from the four holds into a single stream of traffic, sometimes as close as apart. Considerable use is made of continuous descent approach techniques to minimize the environmental effects of incoming aircraft, particularly at night. Once an aircraft is established on its final approach, control is handed over to Heathrow Tower.
When runway alternation was introduced, aircraft generated significantly more noise on departure than when landing, so a preference for westerly operations during daylight was introduced, which continues to this day. In this mode, aircraft take off towards the west and land from the east over London, thereby minimizing the impact of noise on the most densely populated areas. Heathrow's two runways generally operate in segregated mode, whereby landings are allocated to one runway and takeoffs to the other. To further reduce noise nuisance to people beneath the approach and departure routes, the use of runways 27R and 27L is swapped at 15:00 each day if the wind is from the west. When landings are easterly there is no alternation; 09L remains the landing runway and 09R the takeoff runway due to the legacy of the now rescinded Cranford Agreement, pending taxiway works to allow the roles to be reversed. Occasionally, landings are allowed on the nominated departure runway, to help reduce airborne delays and to position landing aircraft closer to their terminal, reducing taxi times.
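The alternation rule described above is simple enough to express as a short decision procedure. The following Python sketch is purely illustrative (the function name and interface are assumptions, not any official model), but it captures the logic: easterly winds fix the runway roles, while westerly operations swap the nominated runways at 15:00.

```python
from datetime import time

def assign_runways(wind_westerly: bool, now: time, nominal_landing_is_27R: bool) -> dict:
    """Sketch of Heathrow's segregated-mode runway allocation as described
    in the text. Names and interface are illustrative assumptions."""
    if not wind_westerly:
        # Easterly operations: no alternation (legacy of the rescinded
        # Cranford Agreement) — 09L lands, 09R departs.
        return {"landing": "09L", "takeoff": "09R"}
    # Westerly operations: the nominated roles swap at 15:00 each day.
    swapped = now >= time(15, 0)
    landing_is_27R = nominal_landing_is_27R ^ swapped
    if landing_is_27R:
        return {"landing": "27R", "takeoff": "27L"}
    return {"landing": "27L", "takeoff": "27R"}
```

For example, with a westerly wind and 27R nominated for landings, the same call made after 15:00 returns 27L as the landing runway.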
Night-time flights at Heathrow are subject to restrictions. Between 23:00 and 04:00, the noisiest aircraft (rated QC/8 and QC/16) cannot be scheduled for operation. Also, during the night quota period (23:30–06:00) there are four limits:
A trial of "noise relief zones" ran from December 2012 to March 2013, which concentrated approach flight paths into defined areas compared with the existing paths which were spread out. The zones used alternated weekly, meaning residents in the "no-fly" areas received respite from aircraft noise for set periods. However, it was concluded that some residents in other areas experienced a significant disbenefit as a result of the trial and that it should therefore not be taken forward in its current form. Heathrow received more than 25,000 noise complaints in just three months over the summer of 2016, but around half were made by the same ten people.
Until it was required to sell Gatwick and Stansted Airports, Heathrow Airport Holdings held a dominant position in the London aviation market, and it has long been heavily regulated by the Civil Aviation Authority (CAA) as to how much it can charge airlines to land. The annual increase in landing charge per passenger was capped at inflation minus 3% until 1 April 2003. From 2003 to 2007 charges increased by inflation plus 6.5% per year, taking the fee to £9.28 per passenger in 2007. In March 2008, the CAA announced that the charge would be allowed to increase by 23.5% to £12.80 from 1 April 2008 and by inflation plus 7.5% for each of the following four years. In April 2013, the CAA announced a proposal for Heathrow to charge fees calculated by inflation minus 1.3%, continuing until 2019. Whilst the cost of landing at Heathrow is determined by the CAA and Heathrow Airport Holdings, the allocation of landing slots to airlines is carried out by Airport Co-ordination Limited (ACL).
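The charge caps described above are compounding growth rules of the form "inflation plus (or minus) x per cent per year". A minimal sketch of the arithmetic, with all figures illustrative and inflation assumed constant over the period:

```python
def capped_charge(charge: float, inflation: float, x: float, years: int) -> float:
    """Project a per-passenger landing charge under an 'inflation plus x' cap,
    compounding annually. Parameters are illustrative assumptions; a negative
    x models an 'inflation minus x' cap."""
    for _ in range(years):
        charge *= 1 + inflation + x
    return charge

# e.g. a £9.28 charge growing at assumed 3% inflation plus 7.5% for 4 years
projected = capped_charge(9.28, 0.03, 0.075, 4)
```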
Until 2008, air traffic between Heathrow and the United States was strictly governed by the countries' bilateral Bermuda II treaty. The treaty originally allowed only British Airways, Pan Am and TWA to fly from Heathrow to the US. In 1991, Pan Am and TWA sold their rights to United Airlines and American Airlines respectively, while Virgin Atlantic was added to the list of airlines allowed to operate on these routes. The Bermuda bilateral agreement conflicted with the Right of Establishment of the United Kingdom concerning its EU membership, and as a consequence, the UK was ordered to drop the agreement in 2004. A new "open skies" agreement was signed by the United States and the European Union on 30 April 2007 and came into effect on 30 March 2008. Shortly afterward, additional US airlines, including Northwest Airlines, Continental Airlines, US Airways and Delta Air Lines started services to Heathrow.
The airport has been criticised in recent years for overcrowding and delays; according to Heathrow Airport Holdings, Heathrow's facilities were originally designed to accommodate 55 million passengers annually. The number of passengers using the airport reached a record 70 million in 2012. In 2007 the airport was voted the world's least favourite, alongside Chicago O'Hare, in a TripAdvisor survey. However, the opening of Terminal 5 in 2008 has relieved some pressure on terminal facilities, increasing the airport's terminal capacity to 90 million passengers per year. A tie-up is also in place with McLaren Applied Technologies to optimize airport operations, reducing delays and pollution.
With only two runways, operating at over 98% of their capacity, Heathrow has little room for more flights, although the increasing use of larger aircraft such as the Airbus A380 will allow some increase in passenger numbers. It is difficult for existing airlines to obtain landing slots to enable them to increase their services from the airport, or for new airlines to start operations. To increase the number of flights, Heathrow Airport Holdings has proposed using the existing two runways in 'mixed mode' whereby aircraft would be allowed to take off and land on the same runway. This would increase the airport's capacity from its current 480,000 movements per year to as many as 550,000 according to British Airways CEO Willie Walsh. Heathrow Airport Holdings has also proposed building a third runway to the north of the airport, which would significantly increase traffic capacity (see Future expansion below).
Policing of the airport is the responsibility of the aviation security unit of the Metropolitan Police, although the army, including armoured vehicles of the Household Cavalry, has occasionally been deployed at the airport during periods of heightened security.
Full body scanners are now used at the airport, and passengers who object to their use after being selected are required to submit to a hand search in a private room. The scanners display passengers' bodies as a cartoon-style figure, with indicators showing where concealed items may be. The new imagery was introduced initially as a trial in September 2011 following complaints over privacy.
Following widespread disruption caused by reports of drone sightings at Gatwick Airport, and a subsequent incident at Heathrow, a drone detection system was installed airport-wide to combat possible future disruption caused by the illegal use of drones.
During the COVID-19 pandemic in 2020, Heathrow Airport saw a vast reduction in services. It announced that from Monday 6 April 2020 it would transition to single-runway operations, alternating the runway in use on a weekly basis, and would close Terminals 3 and 4, moving all remaining flights into Terminals 2 and 5.
The airport's newest terminal, officially known as the Queen's Terminal, was opened on 4 June 2014. Designed by Spanish architect Luis Vidal, it was built on the site that had been occupied by the original Terminal 2 and the Queens Building. The main complex was completed in November 2013 and underwent six months of testing before opening to passengers. It includes a satellite pier (T2B), a 1,340-space car park, an energy centre and a cooling station to generate chilled water. There are 52 shops and 17 bars and restaurants.
Terminal 2 is used by all Star Alliance members which fly from Heathrow (consolidating the airlines under Star Alliance's co-location policy "Move Under One Roof"). Aer Lingus, Eurowings and Icelandair also operate from the terminal. The airlines moved from their original locations over six months, with only 10% of flights operating from there in the first six weeks (United Airlines' transatlantic flights) to avoid the opening problems seen at Terminal 5. On 4 June 2014, United Airlines became the first airline to move into Terminal 2 from Terminals 1 and 4 followed by All Nippon Airways, Air Canada and Air China from Terminal 3. Air New Zealand, Asiana Airlines, Croatia Airlines, LOT Polish Airlines, South African Airways, and TAP Air Portugal were the last airlines to move in on 22 October 2014.
The original Terminal 2 opened as the Europa Building in 1955 and was the airport's oldest terminal. It had an area of and was designed to handle around 1.2 million passengers annually. In its final years, it accommodated up to 8 million. A total of 316 million passengers passed through the terminal in its lifetime. The building was demolished in 2010, along with the Queens Building which had housed airline company offices.
Terminal 3 opened as the Oceanic Terminal on 13 November 1961 to handle flight departures for long-haul routes for foreign carriers to the United States, Asia and other Far Eastern destinations. At this time the airport had a direct helicopter service to Central London from the gardens on the roof of the terminal building. Renamed Terminal 3 in 1968, it was expanded in 1970 with the addition of an arrivals building. Other facilities added included the UK's first moving walkways. In 2006, the new £105 million Pier 6 was completed to accommodate the Airbus A380 superjumbo; Emirates and Qantas operate regular flights from Terminal 3 using the Airbus A380.
Redevelopment of Terminal 3's forecourt by the addition of a new four-lane drop-off area and a large pedestrianised plaza, complete with canopy to the front of the terminal building, was completed in 2007. These improvements were intended to improve passengers' experience, reduce traffic congestion and improve security. As part of this project, Virgin Atlantic was assigned its own dedicated check-in area, known as 'Zone A', which features a large sculpture and atrium.
Terminal 3 has an area of and in 2011 handled 19.8 million passengers on 104,100 flights. It is home to Oneworld members (with the exception of Iberia, which uses Terminal 5, and Malaysia Airlines, Royal Air Maroc and Qatar Airways, all of which use Terminal 4), SkyTeam members Delta Air Lines and Middle East Airlines, all new airlines, and a few unaffiliated carriers.
Opened in 1986, Terminal 4 is situated to the south of the southern runway next to the cargo terminal and is connected to Terminals 2 and 3 by the Heathrow Cargo Tunnel. The terminal has an area of and is now home to the SkyTeam alliance, with the exception of Delta Air Lines and Middle East Airlines, which use Terminal 3, Oneworld carriers Malaysia Airlines and Qatar Airways, and to most unaffiliated carriers. It has undergone a £200m upgrade to enable it to accommodate 45 airlines with an upgraded forecourt to reduce traffic congestion and improve security. Most flights that go to Terminal 4 are flights coming from Central Asia, North Africa and the Middle East as well as a few flights to Europe. An extended check-in area with renovated piers and departure lounges and a new baggage system were installed, and two new stands were built to accommodate the Airbus A380; Etihad Airways, Korean Air, Malaysia Airlines and Qatar Airways operate regular A380 flights. El Al operates regular Boeing 787 flights.
Terminal 5 lies between the northern and southern runways at the western end of the Heathrow site and was opened by Queen Elizabeth II on 14 March 2008, some 19 years after its inception. It opened to the public on 27 March 2008, and British Airways and its partner company Iberia have exclusive use of this terminal. The first passenger to enter Terminal 5 was a UK ex-pat from Kenya who passed through security at 04:30 on the day. He was presented with a boarding pass by the British Airways CEO Willie Walsh for the first departing flight, BA302 to Paris. During the two weeks after its opening, operations were disrupted by problems with the terminal's IT systems, coupled with insufficient testing and staff training, which caused over 500 flights to be cancelled. Until March 2012, Terminal 5 was exclusively used by British Airways as its global hub; however, because of the merger, on 25 March Iberia's operations at Heathrow were moved to the terminal, making it the home of International Airlines Group.
Built at a cost of £4.3 billion, the terminal consists of a four-storey main terminal building (Concourse A) and two satellite buildings linked to the main terminal by an underground people-mover transit system. The second satellite (Concourse C) includes dedicated aircraft stands for the Airbus A380 and became fully operational on 1 June 2011. Terminal 5 was voted the Skytrax World's Best Airport Terminal in the 2014 Annual World Airport Awards.
The main terminal building (Concourse A) has an area of while Concourse B covers . It has 60 aircraft stands and capacity for 30 million passengers annually as well as more than 100 shops and restaurants. It is also home to British Airways' Flagship lounge, the Concorde Room, alongside four further British Airways branded lounges.
A further building, designated Concourse D and of similar size to Concourse C, may yet be built to the east of the existing site, providing up to another 16 stands. Following British Airways' merger with Iberia, this may become a priority since the combined business will require accommodation at Heathrow under one roof to maximise the cost savings envisaged under the deal. A proposal for Concourse D featured in Heathrow's most recent capital investment plan.
The transport network around the airport has been extended to cope with the increase in passenger numbers. New branches of both the Heathrow Express and the Underground's Piccadilly line serve a new shared Heathrow Terminal 5 station. A dedicated motorway spur links the terminal to the M25 (between junctions 14 and 15). The terminal has a 3,800-space multi-storey car park. A more distant long-stay car park for business passengers is connected to the terminal by a personal rapid transit system, the Heathrow Pod, which became operational in the spring of 2011. Within the terminal complex, an automated people mover (APM) system, known as the Transit, is used to transport passengers between the satellite buildings.
As of July 2019, Heathrow's four passenger terminals are assigned as follows:
Following the opening of Terminal 5 in March 2008, a complex programme of terminal moves was implemented. This saw many airlines move to be grouped in terminals by airline alliance as far as possible.
Following the opening of Phase 1 of the new Terminal 2 in June 2014, all Star Alliance member airlines (with the exception of new member Air India which moved in early 2017) along with Aer Lingus and Germanwings relocated to Terminal 2 in a phased process completed on 22 October 2014. Additionally, by 30 June 2015 all airlines left Terminal 1 in preparation for its demolition to make room for the construction of Phase 2 of Terminal 2. Some other airlines made further minor moves at a later point, e.g. Delta Air Lines merging all departures in Terminal 3 instead of a split between Terminals 3 and 4.
Terminal 1 opened in 1968 and was inaugurated by Queen Elizabeth II in April 1969. Terminal 1 was the Heathrow base for British Airways' (BA) domestic and European network and a few of its long haul routes before Terminal 5 opened. The acquisition of British Midland International (BMI) in 2012 by BA's owner International Airlines Group meant British Airways took over BMI's short-haul and medium-haul destinations from the terminal. Terminal 1 was also the main base for most Star Alliance members though some were also based at Terminal 3.
Terminal 1 closed at the end of June 2015; the site is now being used to extend Terminal 2, which opened in June 2014. A number of the newer gates used by Terminal 1 were built as part of the Terminal 2 development and are being retained. The last tenants alongside British Airways were El Al, Icelandair (moved to Terminal 2 on 25 March 2015) and LATAM Brasil (moved to Terminal 3 on 27 May 2015). British Airways was the last operator in Terminal 1: two of its flights, one departing to Hanover and one arriving from Baku, marked the terminal's closure on 29 June 2015. British Airways operations have been relocated to Terminals 3 and 5.
The following airlines operate regular scheduled passenger flights at London Heathrow Airport:
When ranked by total passenger traffic for the 12 months ending December 2015, Heathrow was the sixth-busiest airport in the world, behind Hartsfield–Jackson Atlanta International Airport, Beijing Capital International Airport, Dubai International Airport, Chicago's O'Hare International Airport, and Tokyo Haneda Airport.
In 2015, Heathrow was the busiest airport in Europe in total passenger traffic, with 14% more passengers than Paris–Charles de Gaulle Airport and 22% more than Istanbul Atatürk Airport. Heathrow was the fourth busiest European airport by cargo traffic in 2013, after Frankfurt Airport, Paris Charles de Gaulle and Amsterdam Airport Schiphol.
Heathrow Airport processed 80,884,310 passengers in 2019. New York's John F. Kennedy International Airport was the most popular route with 3,192,195 passengers. The table below shows the 40 busiest international routes at the airport in 2019.
The head office of Heathrow Airport Holdings (formerly BAA Limited) is located in the Compass Centre by Heathrow's northern runway, a building that previously served as a British Airways flight crew centre. The World Business Centre Heathrow consists of three buildings. 1 World Business Centre houses offices of Heathrow Airport Holdings, Heathrow Airport itself, and Scandinavian Airlines. Previously International Airlines Group had its head office in 2 World Business Centre.
At one time the British Airways head office was located within Heathrow Airport at Speedbird House before the completion of Waterside, the current BA head office in Harmondsworth, in June 1998.
To the north of the airfield lies the Northern Perimeter Road, along which most of Heathrow's car rental agencies are based, and Bath Road, which runs parallel to it, but outside the airport campus. This is nicknamed "The Strip" by locals, because of its continuous line of airport hotels.
Many buses and coaches operate from the large Heathrow Central bus station serving Terminals 2 and 3, and also from bus stations at Terminals 4 and 5.
All terminals lie within the Heathrow Free Travel Zone with free travel between the terminals. Terminals 2 and 3 are within walking distance of each other. Transfers from Terminals 2 and 3 to Terminal 4 and 5 are provided by Heathrow Express trains and the London Underground Piccadilly line. Direct transfer between Terminals 4 and 5 is provided by London Buses routes 482 and 490.
Transit passengers remaining airside are provided with free dedicated transfer buses between terminals.
The Heathrow Pod personal rapid transit system shuttles passengers between Terminal 5 and the business car park using 21 small, driverless transportation pods. The pods are battery-powered and run on-demand on a four-kilometre track, each able to carry up to four adults, two children, and their luggage. Plans exist to extend the Pod system to connect Terminals 2 and 3 to remote car parks.
An underground automated people mover system known as the "Transit" operates within Terminal 5, linking the main terminal with the satellite Terminals 5B and 5C. The Transit operates entirely airside using Bombardier Innovia APM 200 people mover vehicles.
The Hotel Hoppa bus network connects all terminals to major hotels in the area.
Taxis are available at all terminals.
Heathrow is accessible via the nearby M4 motorway or A4 road (Terminals 2–3), the M25 motorway (Terminals 4 and 5) and the A30 road (Terminal 4). There are drop-off and pick-up areas at all terminals and short- and long-stay multi-storey car parks. All the Heathrow forecourts are drop-off only. There are further car parks, not run by Heathrow Airport Holdings, just outside the airport: the most recognisable is the National Car Parks facility, although there are many other options; these car parks are connected to the terminals by shuttle buses.
Four parallel tunnels under the northern runway connect the M4 Heathrow spur and the A4 road to Terminals 2–3. The two larger tunnels are each two lanes wide and are used for motorised traffic. The two smaller tunnels were originally reserved for pedestrians and bicycles; to increase traffic capacity the cycle lanes have been modified to each take a single lane of cars, although bicycles still have priority over cars. Pedestrian access to the smaller tunnels has been discontinued, with the free bus services being used instead.
There are (mainly off-road) bicycle routes to some of the terminals. Free bicycle parking places are available in car parks 1 and 1A, at Terminal 4, and to the North and South of Terminal 5's Interchange Plaza. Cycling is not currently allowed through the main tunnel to access Terminals 2 and 3 (Terminal 1 closed in 2015).
There is a long history of expansion proposals for Heathrow since it was first designated as a civil airport. Following the cancellation of the Maplin project in 1974, a fourth terminal was proposed but expansion beyond this ruled out. However, the Airports Inquiries of 1981–83 and the 1985 Airports Policy White Paper considered further expansion and, following a four-year-long public inquiry in 1995–99, Terminal 5 was approved. In 2003, after many studies and consultations, the Future of Air Transport White Paper was published, proposing a third runway at Heathrow as well as a second runway at Stansted Airport. In January 2009, the Transport Secretary at the time, Geoff Hoon, announced that the British government supported the expansion of Heathrow by building a third runway and a sixth terminal building. This decision followed the 2003 white paper on the future of air transport in the UK and a public consultation in November 2007. It was a controversial decision that met with widespread opposition because of expected greenhouse-gas emissions, the impact on local communities, and noise and air pollution concerns.
Before the 2010 general election, the Conservative and Liberal Democrat parties announced that they would prevent the construction of any third runway or further material expansion of the airport's operating capacity. The Mayor of London, then Boris Johnson, took the position that London needs more airport capacity, favouring the construction of an entirely new airport in the Thames Estuary rather than expanding Heathrow. After the Conservative-Liberal Democrat coalition took power, it was announced that the third runway expansion was cancelled. Two years later, leading Conservatives were reported to have changed their minds on the subject.
Another proposal for expanding Heathrow's capacity was the Heathrow Hub, which would extend both runways to a total length of about 7,000 metres and divide each into two, providing four full-length runways and allowing simultaneous take-offs and landings while decreasing noise levels.
In July 2013, the airport submitted three new proposals for expansion to the Airports Commission, which was established to review airport capacity in the southeast of England. The Airports Commission was chaired by Sir Howard Davies who, at the time of his appointment, was in the employ of GIC Private Limited (formerly known as Government Investment Corporation of Singapore) and a member of its International Advisory Board. GIC Private Limited was then (2012), as it remains today, one of Heathrow's principal owners. Sir Howard Davies resigned these positions upon confirmation of his appointment to lead the Airports Commission, although it has been observed that he failed to identify these interests when invited to complete the Airports Commission's register of interests. Each of the three proposals that were to be considered by Sir Howard Davies's commission involved the construction of a third runway, either to the north, northwest or southwest of the airport.
The commission released its interim report in December 2013, shortlisting three options: the north-west third runway option at Heathrow, extending an existing runway at Heathrow, and a second runway at Gatwick Airport. After this report was published, the government confirmed that no options had been ruled out for airport expansion in the South-east and that a new runway would not be built at Heathrow before 2015. The full report was published on 1 July 2015, and backed a third, north-west, runway at Heathrow. Reaction to the report was generally negative, particularly from London Mayor Boris Johnson. One senior Conservative told Channel 4: "Howard Davies has dumped an utter steaming pile of poo on the Prime Minister's desk." On 25 October 2016, the government confirmed that Heathrow would be allowed to build a third runway; however, a final decision would not be taken until winter of 2017/18, after consultations and government votes. The earliest opening year would be 2025. On 5 June 2018, the UK Cabinet approved the third runway, with a full vote planned for Parliament. On 25 June 2018, the House of Commons voted, 415–119, in favour of the third runway. The bill received support from most MPs in the Conservative and Labour parties. A judicial review against the decision is being launched by four London local authorities affected by the expansion—Wandsworth, Richmond, Hillingdon and Hammersmith and Fulham—in partnership with Greenpeace and London mayor Sadiq Khan. Khan previously stated he would take legal action if it were passed by Parliament.
Currently, all rail connections with Heathrow airport run along an east–west alignment to and from central London, and a number of schemes have been proposed over the years to develop new rail transport links with other parts of London and with stations outside the city. Mainline rail service from the airport is due to be extended to central London and Essex when the Elizabeth line, currently under construction, opens.
A 2009 proposal to create a southern link via the Waterloo–Reading line was abandoned in 2011 owing to a lack of funding and difficulties with the high number of level crossings on the route into London, and a plan to link Heathrow to the planned High Speed 2 (HS2) railway line with a new station was also dropped from the HS2 plans in March 2015.
Among other schemes that have been considered is a rapid transport link between Heathrow and Gatwick Airports, known as "Heathwick", which would allow the two airports to operate jointly as a single airline hub. In 2018, the Department for Transport began to invite proposals for privately funded rail links to Heathrow Airport. Projects being considered under this initiative include:
The Mayor of London's office and Transport for London commissioned plans in the event of Heathrow's closure to replace it with a large built-up area. Some of the plans appear to show Terminal 5, or part of it, retained as a shopping centre.
Hipparchus
Hipparchus of Nicaea (Greek: Ἵππαρχος, "Hipparkhos") was a Greek astronomer, geographer, and mathematician. He is considered the founder of trigonometry, but is most famous for his incidental discovery of the precession of the equinoxes.
Hipparchus was born in Nicaea, Bithynia (now İznik, Turkey), and probably died on the island of Rhodes, Greece. He is known to have been a working astronomer at least from 162 to 127 BC. Hipparchus is considered the greatest ancient astronomical observer and, by some, the greatest overall astronomer of antiquity. He was the first whose quantitative and accurate models for the motion of the Sun and Moon survive. For this he certainly made use of the observations and perhaps the mathematical techniques accumulated over centuries by the Babylonians and by Meton of Athens (5th century BC), Timocharis, Aristyllus, Aristarchus of Samos and Eratosthenes, among others. He developed trigonometry and constructed trigonometric tables, and he solved several problems of spherical trigonometry. With his solar and lunar theories and his trigonometry, he may have been the first to develop a reliable method to predict solar eclipses. His other reputed achievements include the discovery and measurement of Earth's precession, the compilation of the first comprehensive star catalogue of the western world, and possibly the invention of the astrolabe, as well as of the armillary sphere, which he used in creating much of the star catalogue.
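Hipparchus's trigonometry was based not on the modern sine but on the chord of an arc; in modern notation, crd(θ) = 2R·sin(θ/2) for a circle of radius R. The sketch below uses R = 3438 (in arc-minutes), a value that follows one common scholarly reconstruction and is an assumption here, not an attested figure from Hipparchus's own lost text:

```python
import math

R = 3438  # circle radius in arc-minutes; a scholarly reconstruction (assumption)

def chord(theta_deg: float) -> float:
    """Length of the chord subtending theta degrees in a circle of radius R.
    Relation to the modern sine: crd(theta) = 2 * R * sin(theta / 2)."""
    return 2 * R * math.sin(math.radians(theta_deg) / 2)
```

A convenient check on the definition: the chord of 60° equals the radius, since 2·sin(30°) = 1.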
Hipparchus was born in Nicaea (Greek "Νίκαια"), in the ancient district of Bithynia (modern-day İznik in Bursa Province), in what is today Turkey. The exact dates of his life are not known, but Ptolemy attributes astronomical observations to him in the period 147–127 BC, some of which are stated as made in Rhodes; earlier observations from 162 BC onwards might also have been made by him. His birth date was calculated by Delambre based on clues in his work. Hipparchus must have lived some time after 127 BC because he analyzed and published his observations from that year. Hipparchus obtained information from Alexandria as well as Babylon, but it is not known when or whether he visited these places. He is believed to have died on the island of Rhodes, where he seems to have spent most of his later life.
It is not known what Hipparchus's economic means were nor how he supported his scientific activities. His appearance is likewise unknown: there are no contemporary portraits. In the 2nd and 3rd centuries coins were made in his honour in Bithynia that bear his name and show him with a globe; this supports the tradition that he was born there.
Relatively little of Hipparchus's direct work survives into modern times. Although he wrote at least fourteen books, only his commentary on the popular astronomical poem by Aratus was preserved by later copyists. Most of what is known about Hipparchus comes from Strabo's "Geography" and Pliny's "Natural History" in the 1st century; Ptolemy's 2nd-century "Almagest"; and additional references to him in the 4th century by Pappus and Theon of Alexandria in their commentaries on the "Almagest".
Hipparchus was amongst the first to calculate a heliocentric system, but he abandoned his work because the calculations showed the orbits were not perfectly circular, as the science of the time held to be mandatory. Although a contemporary of Hipparchus, Seleucus of Seleucia, remained a proponent of the heliocentric model, Hipparchus's rejection of heliocentrism, supported by ideas from Aristotle, remained dominant for nearly 2,000 years, until Copernican heliocentrism turned the tide of the debate.
Hipparchus's only preserved work is "Τῶν Ἀράτου καὶ Εὐδόξου φαινομένων ἐξήγησις" ("Commentary on the Phaenomena of Eudoxus and Aratus"). This is a highly critical commentary in the form of two books on a popular poem by Aratus based on the work by Eudoxus. Hipparchus also made a list of his major works, which apparently mentioned about fourteen books, but which is only known from references by later authors. His famous star catalog was incorporated into the one by Ptolemy, and may be almost perfectly reconstructed by subtraction of two and two-thirds degrees from the longitudes of Ptolemy's stars. The first trigonometric table was apparently compiled by Hipparchus, who is consequently now known as "the father of trigonometry".
Hipparchus was in the international news in 2005, when it was again proposed (as in 1898) that the data on the celestial globe of Hipparchus or in his star catalog may have been preserved in the only surviving large ancient celestial globe that depicts the constellations with moderate accuracy, the globe carried by the Farnese Atlas. There are a variety of missteps in the more ambitious 2005 paper, and no specialists in the area accept its widely publicized speculation.
Lucio Russo has said that Plutarch, in his work "On the Face in the Moon", was reporting some physical theories that we consider to be Newtonian and that these may have come originally from Hipparchus; he goes on to say that Newton may have been influenced by them. According to one book review, both of these claims have been rejected by other scholars.
A line in Plutarch's "Table Talk" states that Hipparchus counted 103,049 compound propositions that can be formed from ten simple propositions. 103,049 is the tenth Schröder–Hipparchus number, which counts the number of ways of adding one or more pairs of parentheses around consecutive subsequences of two or more items in any sequence of ten symbols. This has led to speculation that Hipparchus knew about enumerative combinatorics, a field of mathematics that developed independently in modern mathematics.
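The count Plutarch reports can be reproduced with the standard recurrence for the little Schröder (Schröder–Hipparchus) numbers; the function name below is my own, but the recurrence and the value 103,049 are well established:

```python
def schroder_hipparchus(n: int) -> int:
    """n-th Schröder-Hipparchus (little Schröder) number, computed via the
    standard recurrence (k+1)*s(k+1) = 3*(2k-1)*s(k) - (k-2)*s(k-1)."""
    s = [0, 1, 1]  # s[1] = s[2] = 1; index 0 unused
    for k in range(2, n):
        s.append((3 * (2 * k - 1) * s[k] - (k - 2) * s[k - 1]) // (k + 1))
    return s[n]

# The tenth number in the sequence is the 103,049 mentioned by Plutarch.
print(schroder_hipparchus(10))  # → 103049
```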
Earlier Greek astronomers and mathematicians were influenced by Babylonian astronomy to some extent; for instance, the period relations of the Metonic cycle and Saros cycle may have come from Babylonian sources (see "Babylonian astronomical diaries"). Hipparchus seems to have been the first to exploit Babylonian astronomical knowledge and techniques systematically. Except for Timocharis and Aristyllus, he was the first Greek known to divide the circle into 360 degrees of 60 arc minutes (Eratosthenes before him used a simpler sexagesimal system dividing a circle into 60 parts); he also adopted the Babylonian astronomical "cubit" unit (Akkadian "ammatu", Greek πῆχυς "pēchys") which was equivalent to 2° or 2.5° ('large cubit').
Hipparchus probably compiled a list of Babylonian astronomical observations; G. J. Toomer, a historian of astronomy, has suggested that Ptolemy's knowledge of eclipse records and other Babylonian observations in the "Almagest" came from a list made by Hipparchus. Hipparchus's use of Babylonian sources has always been known in a general way, because of Ptolemy's statements. However, Franz Xaver Kugler demonstrated that the synodic and anomalistic periods that Ptolemy attributes to Hipparchus had already been used in Babylonian ephemerides, specifically the collection of texts nowadays called "System B" (sometimes attributed to Kidinnu).
Hipparchus's long draconitic lunar period (5,458 months = 5,923 lunar nodal periods) also appears a few times in Babylonian records. But the only such tablet that is explicitly dated is post-Hipparchus, so the direction of transmission is not settled by the tablets.
Hipparchus's draconitic lunar motion cannot be solved by the lunar-four arguments that are sometimes proposed to explain his anomalistic motion. A solution that has produced the exact ratio is rejected by most historians, though it uses the only anciently attested method of determining such ratios, and it automatically delivers the ratio's four-digit numerator and denominator. Hipparchus initially used ("Almagest" 6.9) his 141 BC eclipse with a Babylonian eclipse of 720 BC to find the less accurate ratio 7,160 synodic months = 7,770 draconitic months, simplified by him to 716 = 777 through division by 10. (He similarly found from the 345-year cycle the ratio 4,267 synodic months = 4,573 anomalistic months and divided by 17 to obtain the standard ratio 251 synodic months = 269 anomalistic months.) If he sought a longer time base for this draconitic investigation he could use his same 141 BC eclipse with a moonrise 1245 BC eclipse from Babylon, an interval of 13,645 synodic months = 14,807½ draconitic months ≈ 14,623½ anomalistic months. Dividing by 2½ produces 5,458 synodic months = 5,923 draconitic months precisely. The obvious main objection is that the early eclipse is unattested, though that is not surprising in itself, and there is no consensus on whether Babylonian observations were recorded this remotely. Though Hipparchus's tables formally went back only to 747 BC, 600 years before his era, the tables were actually good back to before the eclipse in question because, as only recently noted, their use in reverse is no more difficult than forwards.
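The period-ratio reductions described above are simple exact divisions, and can be checked directly; the longer draconitic time base is taken here to be consistent with the 5,458 : 5,923 relation:

```python
from fractions import Fraction

# Ratio reductions attributed to Hipparchus (Almagest 4.2, 6.9).
assert Fraction(7160, 7770) == Fraction(716, 777)            # division by 10
assert Fraction(4267, 4573) == Fraction(251 * 17, 269 * 17)  # division by 17

# Longer draconitic time base: 13,645 synodic months scale to 5,458 : 5,923
# when both terms of the relation are divided by 2.5.
assert 13645 / 2.5 == 5458
assert 5923 * 2.5 == 14807.5     # the corresponding draconitic month count
```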
Hipparchus was recognized as the first mathematician known to have possessed a trigonometric table, which he needed when computing the eccentricity of the orbits of the Moon and Sun. He tabulated values for the chord function, which for a central angle in a circle gives the length of the straight line segment between the points where the angle intersects the circle. He computed this for a circle with a circumference of 21,600 units and a radius (rounded) of 3,438 units; this circle has a unit length of 1 arc minute along its perimeter. He tabulated the chords for angles with increments of 7.5°. In modern terms, the chord subtended by a central angle in a circle of given radius equals the radius times twice the sine of half of the angle, i.e.: chord(θ) = 2r · sin(θ/2).
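The chord relation described above can be tabulated directly; a minimal sketch, using the 3,438-unit radius and 7.5° increments from the text (the function name is my own):

```python
import math

R = 3438  # radius in arc minutes, so the circumference is about 21,600 units

def chord(angle_deg: float, radius: float = R) -> float:
    """Length of the chord subtending a central angle: 2*r*sin(theta/2)."""
    return 2 * radius * math.sin(math.radians(angle_deg) / 2)

# A few entries of a chord table in 7.5-degree steps, as described above.
table = {a: round(chord(a)) for a in (7.5, 60, 90, 120, 180)}
print(table[60])   # → 3438 (the chord of 60° equals the radius)
print(table[180])  # → 6876 (the diameter, 2R)
```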
The now-lost work in which Hipparchus is said to have developed his chord table is called "Tōn en kuklōi eutheiōn" ("Of Lines Inside a Circle") in Theon of Alexandria's 4th-century commentary on section I.10 of the "Almagest". Some claim the table of Hipparchus may have survived in astronomical treatises in India, such as the "Surya Siddhanta". Trigonometry was a significant innovation, because it allowed Greek astronomers to solve any triangle, and made it possible to make quantitative astronomical models and predictions using their preferred geometric techniques.
Hipparchus must have used a better approximation for π than the one from Archimedes of between 3 10/71 (3.14085) and 3 1/7 (3.14286). Perhaps he had the one later used by Ptolemy: 3;8,30 (sexagesimal) = 3.1417 ("Almagest" VI.7), but it is not known whether he computed an improved value himself.
Some scholars do not believe Āryabhaṭa's sine table has anything to do with Hipparchus's chord table. Others do not agree that Hipparchus even constructed a chord table. Bo C. Klintberg states, "With mathematical reconstructions and philosophical arguments I show that Toomer's 1973 paper never contained any conclusive evidence for his claims that Hipparchus had a 3438'-based chord table, and that the Indians used that table to compute their sine tables. Recalculating Toomer's reconstructions with a 3600' radius – i.e. the radius of the chord table in Ptolemy's Almagest, expressed in 'minutes' instead of 'degrees' – generates Hipparchan-like ratios similar to those produced by a 3438′ radius. It is therefore possible that the radius of Hipparchus's chord table was 3600′, and that the Indians independently constructed their 3438′-based sine table."
Hipparchus could have constructed his chord table using the Pythagorean theorem and a theorem known to Archimedes. He also might have developed and used the theorem called Ptolemy's theorem; this was proved by Ptolemy in his "Almagest" (I.10) (and later extended by Carnot).
Hipparchus was the first to show that the stereographic projection is conformal, and that it transforms circles on the sphere that do not pass through the center of projection to circles on the plane. This was the basis for the astrolabe.
Besides geometry, Hipparchus also used arithmetic techniques developed by the Chaldeans. He was one of the first Greek mathematicians to do this, and in this way expanded the techniques available to astronomers and geographers.
There are several indications that Hipparchus knew spherical trigonometry, but the first surviving text discussing it is by Menelaus of Alexandria in the 1st century, who on that basis is now commonly credited with its discovery. (Previous to the finding of the proofs of Menelaus a century ago, Ptolemy was credited with the invention of spherical trigonometry.) Ptolemy later used spherical trigonometry to compute things like the rising and setting points of the ecliptic, or to take account of the lunar parallax. If he did not use spherical trigonometry, Hipparchus may have used a globe for these tasks, reading values off coordinate grids drawn on it, or he may have made approximations from planar geometry, or perhaps used arithmetical approximations developed by the Chaldeans.
Aubrey Diller has shown that the clima calculations which Strabo preserved from Hipparchus could have been performed by spherical trigonometry using the only accurate obliquity known to have been used by ancient astronomers, 23°40′. All thirteen clima figures agree with Diller's proposal. Further confirming his contention is the finding that the big errors in Hipparchus's longitude of Regulus and both longitudes of Spica agree to a few minutes in all three instances with a theory that he took the wrong sign for his correction for parallax when using eclipses for determining stars' positions.
Hipparchus also studied the motion of the Moon and confirmed the accurate values for two periods of its motion that Chaldean astronomers are widely presumed to have possessed before him, whatever their ultimate origin. The traditional value (from Babylonian System B) for the mean synodic month is 29;31,50,8,20 (sexagesimal) days = 29.5305941... days. Expressed as 29 days + 12 hours + 793/1080 hours, this value has been used later in the Hebrew calendar. The Chaldeans also knew that 251 synodic months ≈ 269 anomalistic months. Hipparchus used a multiple of this period by a factor of 17, because that interval is also an eclipse period, and is also close to an integer number of years (4,267 moons : 4,573 anomalistic periods : 4,630.53 nodal periods : 4,611.98 lunar orbits : 344.996 years : 344.982 solar orbits : 126,007.003 days : 126,351.985 rotations). What was so exceptional and useful about the cycle was that all 345-year-interval eclipse pairs occur slightly over 126,007 days apart within a tight range of only about ±½ hour, guaranteeing (after division by 4,267) an estimate of the synodic month correct to about one part in ten million. The 345-year periodicity is why the ancients could conceive of a "mean" month and quantify it so accurately that it is even today correct to a fraction of a second of time.
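The sexagesimal System B value and the Hebrew-calendar form of the mean synodic month (29 days, 12 hours, and 793 "parts" of 1/1080 hour each) are exactly the same number, which can be verified with exact rational arithmetic:

```python
from fractions import Fraction

# System B mean synodic month: 29;31,50,8,20 days in sexagesimal notation.
sexagesimal = (29 + Fraction(31, 60) + Fraction(50, 60**2)
                  + Fraction(8, 60**3) + Fraction(20, 60**4))

# The same length as used in the Hebrew calendar: 29 d 12 h 793/1080 h.
hebrew = 29 + Fraction(12, 24) + Fraction(793, 1080 * 24)

assert sexagesimal == hebrew   # the two forms are exactly equal
print(float(sexagesimal))      # → 29.530594135802468
```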
Hipparchus could confirm his computations by comparing eclipses from his own time (presumably 27 January 141 BC and 26 November 139 BC according to [Toomer 1980]) with eclipses from Babylonian records 345 years earlier ("Almagest" IV.2; [A. Jones, 2001]). Already al-Biruni ("Qanun" VII.2.II) and Copernicus ("de revolutionibus" IV.4) noted that the period of 4,267 moons is actually about 5 minutes longer than the value for the eclipse period that Ptolemy attributes to Hipparchus. However, the timing methods of the Babylonians had an error of no fewer than 8 minutes. Modern scholars agree that Hipparchus rounded the eclipse period to the nearest hour, and used it to confirm the validity of the traditional values, rather than try to derive an improved value from his own observations. From modern ephemerides and taking account of the change in the length of the day (see ΔT), we estimate that the error in the assumed length of the synodic month was less than 0.2 seconds in the 4th century BC and less than 0.1 seconds in Hipparchus's time.
It had been known for a long time that the motion of the Moon is not uniform: its speed varies. This is called its "anomaly", and it repeats with its own period, the anomalistic month. The Chaldeans took account of this arithmetically, and used a table giving the daily motion of the Moon according to the date within a long period. The Greeks however preferred to think in geometrical models of the sky. Apollonius of Perga had, at the end of the 3rd century BC, proposed two models for lunar and planetary motion: an eccentric model, in which the body moves uniformly along a circle whose center is offset from the Earth, and an epicyclic model, in which the body moves along a small circle (the epicycle) that is itself carried uniformly along a larger circle (the deferent) centered on the Earth.
Hipparchus devised a geometrical method to find the parameters from three positions of the Moon at particular phases of its anomaly. In fact, he did this separately for the eccentric and the epicycle model. Ptolemy describes the details in the "Almagest" IV.11. Hipparchus used two sets of three lunar eclipse observations, which he carefully selected to satisfy the requirements. The eccentric model he fitted to these eclipses from his Babylonian eclipse list: 22/23 December 383 BC, 18/19 June 382 BC, and 12/13 December 382 BC. The epicycle model he fitted to lunar eclipse observations made in Alexandria on 22 September 201 BC, 19 March 200 BC, and 11 September 200 BC.
The somewhat odd numbers are due to the cumbersome unit he used in his chord table, according to one group of historians, who explain their reconstruction's inability to agree with these four numbers as partly due to some sloppy rounding and calculation errors by Hipparchus, for which Ptolemy criticised him (though he himself made rounding errors too). A simpler alternate reconstruction agrees with all four numbers. In any case, Hipparchus found inconsistent results; he later used the ratio of the epicycle model (3122½ : 247½), which is too small (60 : 4;45 sexagesimal). Ptolemy established a ratio of 60 : 5¼. (The maximum angular deviation producible by this geometry is the arcsin of 5¼ divided by 60, or about 5° 1′, a figure that is sometimes therefore quoted as the equivalent of the Moon's equation of the center in the Hipparchan model.)
Before Hipparchus, Meton, Euctemon, and their pupils at Athens had made a solstice observation (i.e., timed the moment of the summer solstice) on 27 June 432 BC (proleptic Julian calendar). Aristarchus of Samos is said to have done so in 280 BC, and Hipparchus also had an observation by Archimedes. As shown in a 1991 paper, in 158 BC Hipparchus computed a very erroneous summer solstice from Callippus's calendar. He observed the summer solstices of 146 BC and 135 BC, both accurate to a few hours, but observations of the moment of equinox were simpler, and he made twenty during his lifetime. Ptolemy gives an extensive discussion of Hipparchus's work on the length of the year in the "Almagest" III.1, and quotes many observations that Hipparchus made or used, spanning 162–128 BC. Analysis of Hipparchus's seventeen equinox observations made at Rhodes shows that the mean error in declination is positive seven arc minutes, nearly agreeing with the sum of refraction by air and Swerdlow's parallax. The random noise is two arc minutes, or more nearly one arc minute if rounding is taken into account, which approximately agrees with the sharpness of the eye. Ptolemy quotes an equinox timing by Hipparchus (at 24 March 146 BC at dawn) that differs by 5 hours from the observation made on Alexandria's large public equatorial ring that same day (at 1 hour before noon): Hipparchus may have visited Alexandria, but he did not make his equinox observations there; presumably he was on Rhodes (at nearly the same geographical longitude). He could have used the equatorial ring of his armillary sphere or another equatorial ring for these observations, but Hipparchus (and Ptolemy) knew that observations with these instruments are sensitive to a precise alignment with the equator, so if he were restricted to an armillary, it would make more sense to use its meridian ring as a transit instrument. The problem with an equatorial ring (if an observer is naive enough to trust it very near dawn or dusk) is that atmospheric refraction lifts the Sun significantly above the horizon: so for a northern hemisphere observer its apparent declination is too high, which changes the observed time when the Sun crosses the equator.
(Worse, the refraction decreases as the Sun rises and increases as it sets, so it may appear to move in the wrong direction with respect to the equator in the course of the day – as Ptolemy mentions. Ptolemy and Hipparchus apparently did not realize that refraction is the cause.) However, such details have doubtful relation to the data of either man, since there is no textual, scientific, or statistical ground for believing that their equinoxes were taken on an equatorial ring, which is useless for solstices in any case. Not one of two centuries of mathematical investigations of their solar errors has claimed to have traced them to the effect of refraction on use of an equatorial ring. Ptolemy claims his solar observations were on a transit instrument set in the meridian.
Recent expert translation and analysis by Anne Tihon of papyrus P. Fouad 267 A has confirmed the 1991 finding cited above that Hipparchus obtained a summer solstice in 158 BC. But the papyrus makes the date 26 June, over a day earlier than the 1991 paper's conclusion of 28 June. The earlier study's §M found that Hipparchus did not adopt 26 June solstices until 146 BC, when he founded the orbit of the Sun which Ptolemy later adopted. Dovetailing these data suggests Hipparchus extrapolated the 158 BC 26 June solstice from his 145 BC solstice 12 years later, a procedure that would cause only minuscule error. The papyrus also confirmed that Hipparchus had used Callippic solar motion in 158 BC, a new finding in 1991 but not attested directly until P. Fouad 267 A. Another table on the papyrus is perhaps for sidereal motion and a third table is for Metonic tropical motion, using a previously unknown year of – days. This was presumably found by dividing the 274 years from 432 BC to 158 BC into the corresponding interval of 100,077 days and hours between Meton's sunrise and Hipparchus's sunset solstices.
At the end of his career, Hipparchus wrote a book called "Peri eniausíou megéthous" ("On the Length of the Year") about his results. The established value for the tropical year, introduced by Callippus in or before 330 BC, was 365¼ days. Speculating a Babylonian origin for the Callippic year is hard to defend, since Babylon did not observe solstices, and thus the only extant System B year length was based on Greek solstices (see below). Hipparchus's equinox observations gave varying results, but he himself points out (quoted in "Almagest" III.1(H195)) that the observation errors by himself and his predecessors may have been as large as ¼ day. He used old solstice observations and determined a difference of about one day in about 300 years. So he set the length of the tropical year to 365¼ − 1/300 days (= 365.24666... days = 365 days 5 hours 55 min), which differs from the actual value (modern estimate, including earth spin acceleration) in his time of about 365.2425 days, an error of about 6 min per year, an hour per decade, and 10 hours per century.
Between the solstice observation of Meton and his own, there were 297 years spanning 108,478 days. D. Rawlins noted that this implies a tropical year of 365.24579... days = 365;14,44,51 days (sexagesimal; = 365 days + 14/60 + 44/60² + 51/60³) and that this exact year length has been found on one of the few Babylonian clay tablets which explicitly specifies the System B month. This is an indication that Hipparchus's work was known to Chaldeans.
Another value for the year that is attributed to Hipparchus (by the astrologer Vettius Valens in the 1st century) is 365 + 1/4 + 1/288 days (= 365.25347... days = 365 days 6 hours 5 min), but this may be a corruption of another value attributed to a Babylonian source: 365 + 1/4 + 1/144 days (= 365.25694... days = 365 days 6 hours 10 min). It is not clear if this would be a value for the sidereal year (actual value at his time (modern estimate) about 365.2565 days), but the difference with Hipparchus's value for the tropical year is consistent with his rate of precession (see below).
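The various year lengths discussed above can be compared with exact rational arithmetic; the variable names below are my own labels for the values in the text:

```python
from fractions import Fraction

# Year lengths as exact fractions of a day.
callippic  = Fraction(365) + Fraction(1, 4)      # 365¼, introduced by Callippus
hipparchus = callippic - Fraction(1, 300)        # Hipparchus's tropical year
valens     = callippic + Fraction(1, 288)        # value quoted by Vettius Valens
babylonian = callippic + Fraction(1, 144)        # value of Babylonian origin

# Hipparchus's error vs. the modern estimate for his time, ~365.2425 days:
error_minutes = (float(hipparchus) - 365.2425) * 24 * 60
print(round(error_minutes, 1))  # → 6.0 (about 6 minutes per year)
```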
Before Hipparchus, astronomers knew that the lengths of the seasons are not equal. Hipparchus made observations of equinox and solstice, and according to Ptolemy ("Almagest" III.4) determined that spring (from spring equinox to summer solstice) lasted 94½ days, and summer (from summer solstice to autumn equinox) 92½ days. This is inconsistent with a premise of the Sun moving around the Earth in a circle at uniform speed. Hipparchus's solution was to place the Earth not at the center of the Sun's motion, but at some distance from the center. This model described the apparent motion of the Sun fairly well. It is known today that the planets, including the Earth, move in approximate ellipses around the Sun, but this was not discovered until Johannes Kepler published his first two laws of planetary motion in 1609. The value for the eccentricity attributed to Hipparchus by Ptolemy is that the offset is 1/24 of the radius of the orbit (which is a little too large), and the direction of the apogee would be at longitude 65.5° from the vernal equinox. Hipparchus may also have used other sets of observations, which would lead to different values. One of his two eclipse trios' solar longitudes is consistent with his having initially adopted inaccurate lengths for spring and summer of and days. His other triplet of solar positions is consistent with and days, an improvement on the results (94½ and 92½ days) attributed to Hipparchus by Ptolemy, which a few scholars still question the authorship of. Ptolemy made no change three centuries later, and expressed lengths for the autumn and winter seasons which were already implicit (as shown, e.g., by A. Aaboe).
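The eccentric-model parameters can be recovered from just the two season lengths with modern trigonometry. The sketch below is a modern reconstruction, not Hipparchus's actual procedure: it places the Sun on a unit circle traversed at uniform speed and solves for the Earth's offset from the circle's center, reproducing an eccentricity close to 1/24 and an apogee longitude close to 65.5°:

```python
import math

YEAR = 365.25                  # Callippic year length, days
spring, summer = 94.5, 92.5    # season lengths reported by Ptolemy, days

# Arcs (degrees) swept at uniform speed during spring and summer.
t1 = 360 * spring / YEAR
t2 = 360 * summer / YEAR

# Center the circle at the origin, x toward the vernal equinox.
# The equinoctial line through the Earth cuts off an arc of t1 + t2 degrees,
# fixing the Earth's offset along the solstitial (y) axis; the summer-solstice
# point, directly "north" of the Earth, then fixes the x offset.
phi0 = (180 - (t1 + t2)) / 2            # central angle of the vernal-equinox point
y = math.sin(math.radians(phi0))
x = math.cos(math.radians(phi0 + t1))

eccentricity = math.hypot(x, y)         # Earth's offset as fraction of radius
apogee_longitude = math.degrees(math.atan2(-y, -x))
print(round(1 / eccentricity, 1), round(apogee_longitude, 1))
```

With these inputs the offset comes out near 1/24 of the radius and the apogee near longitude 65°, matching the values Ptolemy attributes to Hipparchus.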
Hipparchus also undertook to find the distances and sizes of the Sun and the Moon. He published his results in a work in two books, called "Perí megethōn kaí apostēmátōn" ("On Sizes and Distances") by Pappus in his commentary on the "Almagest" V.11; Theon of Smyrna (2nd century) mentions the work with the addition "of the Sun and Moon".
Hipparchus measured the apparent diameters of the Sun and Moon with his "diopter". Like others before and after him, he found that the Moon's size varies as it moves on its (eccentric) orbit, but he found no perceptible variation in the apparent diameter of the Sun. He found that at the "mean" distance of the Moon, the Sun and Moon had the same apparent diameter; at that distance, the Moon's diameter fits 650 times into the circle, i.e., the mean apparent diameters are 360°/650 = 0°33′14″.
Like others before and after him, he also noticed that the Moon has a noticeable parallax, i.e., that it appears displaced from its calculated position (compared to the Sun or stars), and the difference is greater when closer to the horizon. He knew that this is because in the then-current models the Moon circles the center of the Earth, but the observer is at the surface—the Moon, Earth and observer form a triangle with a sharp angle that changes all the time. From the size of this parallax, the distance of the Moon as measured in Earth radii can be determined. For the Sun however, there was no observable parallax (we now know that it is about 8.8", several times smaller than the resolution of the unaided eye).
In the first book, Hipparchus assumes that the parallax of the Sun is 0, as if it is at infinite distance. He then analyzed a solar eclipse, which Toomer (against the opinion of over a century of astronomers) presumes to be the eclipse of 14 March 190 BC. It was total in the region of the Hellespont (and in his birthplace, Nicaea); at the time Toomer proposes the Romans were preparing for war with Antiochus III in the area, and the eclipse is mentioned by Livy in his "Ab Urbe Condita Libri" VIII.2. It was also observed in Alexandria, where the Sun was reported to be obscured 4/5ths by the Moon. Alexandria and Nicaea are on the same meridian. Alexandria is at about 31° North, and the region of the Hellespont about 40° North. (It has been contended that authors like Strabo and Ptolemy had fairly decent values for these geographical positions, so Hipparchus must have known them too. However, Strabo's Hipparchus-dependent latitudes for this region are at least 1° too high, and Ptolemy appears to copy them, placing Byzantium 2° high in latitude.) Hipparchus could draw a triangle formed by the two places and the Moon, and from simple geometry was able to establish a distance of the Moon, expressed in Earth radii. Because the eclipse occurred in the morning, the Moon was not in the meridian, and it has been proposed that as a consequence the distance found by Hipparchus was a lower limit. In any case, according to Pappus, Hipparchus found that the least distance is 71 (from this eclipse), and the greatest 81 Earth radii.
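A rough modern illustration of this eclipse method fits in a few lines. The geometry below is deliberately simplified (it treats the baseline between the two latitudes as perpendicular to the line of sight and ignores the Moon's altitude), so it shows the principle rather than Hipparchus's actual computation; the 1/5 obscuration difference and the latitudes are from the text:

```python
import math

moon_diameter_deg = 360 / 650        # apparent lunar diameter, per Hipparchus
shift_deg = moon_diameter_deg / 5    # parallactic shift: total vs. 4/5 obscured
lat_hellespont, lat_alexandria = 40.0, 31.0

# Straight-line baseline between the two sites (same meridian), in Earth radii.
baseline = 2 * math.sin(math.radians(lat_hellespont - lat_alexandria) / 2)

# Small-angle parallax estimate: distance = baseline / shift.
distance_earth_radii = baseline / math.radians(shift_deg)
print(round(distance_earth_radii, 1))
```

Even this crude setup lands around 80 Earth radii, of the same order as the 71–81 Earth radii range that Pappus reports.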
In the second book, Hipparchus starts from the opposite extreme assumption: he assigns a (minimum) distance to the Sun of 490 Earth radii. This would correspond to a parallax of 7′, which is apparently the greatest parallax that Hipparchus thought would not be noticed (for comparison: the typical resolution of the human eye is about 2′; Tycho Brahe made naked-eye observations with an accuracy down to 1′). In this case, the shadow of the Earth is a cone rather than a cylinder as under the first assumption. Hipparchus observed (at lunar eclipses) that at the mean distance of the Moon, the diameter of the shadow cone is 2½ lunar diameters. That apparent diameter is, as he had observed, 360/650 degrees. With these values and simple geometry, Hipparchus could determine the mean distance; because it was computed for a minimum distance of the Sun, it is the maximum mean distance possible for the Moon. With his value for the eccentricity of the orbit, he could compute the least and greatest distances of the Moon too. According to Pappus, he found a least distance of 62, a mean of 67⅓, and consequently a greatest distance of 72⅔ Earth radii. With this method, as the parallax of the Sun decreases (i.e., its distance increases), the minimum limit for the mean distance is 59 Earth radii – exactly the mean distance that Ptolemy later derived.
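The book-2 figure can be reproduced with the standard first-order eclipse-diagram relation used in ancient solar-distance computations (lunar parallax + solar parallax ≈ lunar semidiameter + shadow semidiameter); this is a modern reconstruction under that assumption, using the numbers from the text:

```python
import math

apparent_diameter = 360 / 650        # degrees, same for Sun and Moon
moon_semi = apparent_diameter / 2    # apparent semidiameter of the Moon
shadow_semi = 2.5 * moon_semi        # shadow cone is 2.5 lunar diameters wide
solar_dist = 490                     # assumed minimum solar distance, Earth radii
solar_parallax = math.degrees(math.asin(1 / solar_dist))

# Eclipse-diagram relation, solved for the lunar parallax (degrees):
lunar_parallax = moon_semi + shadow_semi - solar_parallax
mean_distance = 1 / math.sin(math.radians(lunar_parallax))  # Earth radii
print(round(mean_distance, 1))
```

The result comes out near 67 Earth radii, matching the mean of 67⅓ that Pappus reports.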
Hipparchus thus had the problematic result that his minimum distance (from book 1) was greater than his maximum mean distance (from book 2). He was intellectually honest about this discrepancy, and probably realized that especially the first method is very sensitive to the accuracy of the observations and parameters. (In fact, modern calculations show that the size of the 189 BC solar eclipse at Alexandria must have been closer to 9/10ths and not the reported 4/5ths, a fraction more closely matched by the degree of totality at Alexandria of eclipses occurring in 310 BC and 129 BC, which were also nearly total in the Hellespont and are thought by many to be more likely possibilities for the eclipse Hipparchus used for his computations.)
Ptolemy later measured the lunar parallax directly ("Almagest" V.13), and used the second method of Hipparchus with lunar eclipses to compute the distance of the Sun ("Almagest" V.15). He criticizes Hipparchus for making contradictory assumptions, and obtaining conflicting results ("Almagest" V.11): but apparently he failed to understand Hipparchus's strategy to establish limits consistent with the observations, rather than a single value for the distance. His results were the best so far: the actual mean distance of the Moon is 60.3 Earth radii, within his limits from Hipparchus's second book.
Theon of Smyrna wrote that according to Hipparchus, the Sun is 1,880 times the size of the Earth, and the Earth twenty-seven times the size of the Moon; apparently this refers to volumes, not diameters. From the geometry of book 2 it follows that the Sun is at 2,550 Earth radii, and the mean distance of the Moon is 60½ radii. Similarly, Cleomedes quotes Hipparchus for the sizes of the Sun and Earth as 1050:1; this leads to a mean lunar distance of 61 radii. Apparently Hipparchus later refined his computations, and derived accurate single values that he could use for predictions of solar eclipses.
See [Toomer 1974] for a more detailed discussion.
Pliny ("Naturalis Historia" II.X) tells us that Hipparchus demonstrated that lunar eclipses can occur five months apart, and solar eclipses seven months (instead of the usual six months); and the Sun can be hidden twice in thirty days, but as seen by different nations. Ptolemy discussed this a century later at length in "Almagest" VI.6. The geometry, and the limits of the positions of Sun and Moon when a solar or lunar eclipse is possible, are explained in "Almagest" VI.5. Hipparchus apparently made similar calculations. The result that two solar eclipses can occur one month apart is important, because this cannot be based on observations: one is visible in the northern and the other in the southern hemisphere – as Pliny indicates – and the latter was inaccessible to the Greeks.
Prediction of a solar eclipse, i.e., exactly when and where it will be visible, requires a solid lunar theory and proper treatment of the lunar parallax. Hipparchus must have been the first to be able to do this. A rigorous treatment requires spherical trigonometry, thus those who remain certain that Hipparchus lacked it must speculate that he may have made do with planar approximations. He may have discussed these things in "Perí tēs katá plátos mēniaías tēs selēnēs kinēseōs" ("On the monthly motion of the Moon in latitude"), a work mentioned in the "Suda".
Pliny also remarks that "he also discovered for what exact reason, although the shadow causing the eclipse must from sunrise onward be below the earth, it happened once in the past that the Moon was eclipsed in the west while both luminaries were visible above the earth" (translation H. Rackham (1938), Loeb Classical Library 330 p. 207). Toomer (1980) argued that this must refer to the large total lunar eclipse of 26 November 139 BC, when over a clean sea horizon as seen from Rhodes, the Moon was eclipsed in the northwest just after the Sun rose in the southeast. This would be the second eclipse of the 345-year interval that Hipparchus used to verify the traditional Babylonian periods: this puts a late date to the development of Hipparchus's lunar theory. We do not know what "exact reason" Hipparchus found for seeing the Moon eclipsed while apparently it was not in exact opposition to the Sun. Parallax lowers the altitude of the luminaries; refraction raises them, and from a high point of view the horizon is lowered.
Hipparchus and his predecessors used various instruments for astronomical calculations and observations, such as the gnomon, the astrolabe, and the armillary sphere.
Hipparchus is credited with the invention or improvement of several astronomical instruments, which were used for a long time for naked-eye observations. According to Synesius of Ptolemais (4th century) he made the first "astrolabion": this may have been an armillary sphere (which Ptolemy however says he constructed, in "Almagest" V.1); or the predecessor of the planar instrument called astrolabe (also mentioned by Theon of Alexandria). With an astrolabe Hipparchus was the first to be able to measure the geographical latitude and time by observing fixed stars. Previously this was done at daytime by measuring the shadow cast by a gnomon, by recording the length of the longest day of the year or with the portable instrument known as a "scaphe".
Ptolemy mentions ("Almagest" V.14) that he used a similar instrument as Hipparchus, called "dioptra", to measure the apparent diameter of the Sun and Moon. Pappus of Alexandria described it (in his commentary on the "Almagest" of that chapter), as did Proclus ("Hypotyposis" IV). It was a 4-foot rod with a scale, a sighting hole at one end, and a wedge that could be moved along the rod to exactly obscure the disk of Sun or Moon.
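The geometry behind such a dioptra measurement is simple: the wedge, the rod, and the observed disk form similar triangles, so the disk's angular diameter follows from the wedge width and its distance along the rod. A minimal sketch (the specific numbers are illustrative, not from the ancient sources):

```python
import math

def angular_diameter_deg(wedge_width: float, distance: float) -> float:
    """Apparent angular diameter (in degrees) of a disk that is just
    covered by a wedge of the given width held at the given distance
    from the eye (both in the same units)."""
    return math.degrees(2 * math.atan(wedge_width / (2 * distance)))

# On a 4-foot (48-inch) rod, a wedge about 0.42 inches wide just
# covers the Sun's disk, which is roughly half a degree across.
theta = angular_diameter_deg(0.42, 48.0)
print(round(theta, 2))  # 0.5
```

The small angles involved are why a long rod was needed: the longer the baseline, the larger the wedge displacement corresponding to a given change in apparent diameter.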
Hipparchus also observed solar equinoxes, which may be done with an equatorial ring: its shadow falls on itself when the Sun is on the equator (i.e., in one of the equinoctial points on the ecliptic), but the shadow falls above or below the opposite side of the ring when the Sun is south or north of the equator. Ptolemy quotes (in "Almagest" III.1 (H195)) a description by Hipparchus of an equatorial ring in Alexandria; a little further he describes two such instruments present in Alexandria in his own time.
Hipparchus applied his knowledge of spherical angles to the problem of denoting locations on the Earth's surface. Before him a grid system had been used by Dicaearchus of Messana, but Hipparchus was the first to apply mathematical rigor to the determination of the latitude and longitude of places on the Earth. Hipparchus wrote a critique in three books on the work of the geographer Eratosthenes of Cyrene (3rd century BC), called "Pròs tèn Eratosthénous geographían" ("Against the Geography of Eratosthenes"). It is known to us from Strabo of Amaseia, who in his turn criticised Hipparchus in his own "Geographia". Hipparchus apparently made many detailed corrections to the locations and distances mentioned by Eratosthenes. It seems he did not introduce many improvements in methods, but he did propose a means to determine the geographical longitudes of different cities at lunar eclipses (Strabo, "Geographia" 1.1.12). A lunar eclipse is visible simultaneously on half of the Earth, and the difference in longitude between places can be computed from the difference in local time when the eclipse is observed. His approach would give accurate results if it were correctly carried out, but the limitations of timekeeping accuracy in his era made this method impractical.
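The conversion from a time difference to a longitude difference is straightforward: the Earth turns 360° in 24 hours, i.e. 15° per hour. A small sketch of the method (the cities and clock readings are hypothetical):

```python
def longitude_difference_deg(local_time_a_hours: float,
                             local_time_b_hours: float) -> float:
    """Difference in longitude (degrees, positive = A east of B) between
    two places, given the local times (in hours) at which each observed
    the same instant of a lunar eclipse.
    The Earth rotates 360 degrees in 24 hours = 15 degrees per hour."""
    return 15.0 * (local_time_a_hours - local_time_b_hours)

# Hypothetical example: an eclipse phase seen at 22:00 local time in one
# city and at 20:40 in another puts the first 20 degrees further east.
print(round(longitude_difference_deg(22.0, 20.0 + 40 / 60), 1))  # 20.0
```

The weak link, as the text notes, is measuring "local time" at each site accurately enough; an error of just four minutes in either clock shifts the computed longitude by a full degree.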
Late in his career (possibly about 135 BC) Hipparchus compiled his star catalog, the original of which does not survive. He also constructed a celestial globe depicting the constellations, based on his observations. His interest in the fixed stars may have been inspired by the observation of a supernova (according to Pliny), or by his discovery of precession, according to Ptolemy, who says that Hipparchus could not reconcile his data with earlier observations made by Timocharis and Aristillus. For more information see Discovery of precession. In Raphael's painting "The School of Athens", Hipparchus is depicted holding his celestial globe, as the representative figure for astronomy.
Previously, Eudoxus of Cnidus in the 4th century BC had described the stars and constellations in two books called "Phaenomena" and "Enoptron". Aratus wrote a poem called "Phaenomena" or "Arateia" based on Eudoxus's work. Hipparchus wrote a commentary on the "Arateia" – his only preserved work – which contains many stellar positions and times for rising, culmination, and setting of the constellations, and these are likely to have been based on his own measurements.
Hipparchus made his measurements with an armillary sphere, and obtained the positions of at least 850 stars. It is disputed which coordinate system(s) he used. Ptolemy's catalog in the "Almagest", which is derived from Hipparchus's catalog, is given in ecliptic coordinates. However Delambre in his "Histoire de l'Astronomie Ancienne" (1817) concluded that Hipparchus knew and used the equatorial coordinate system, a conclusion challenged by Otto Neugebauer in his "A History of Ancient Mathematical Astronomy" (1975). Hipparchus seems to have used a mix of ecliptic coordinates and equatorial coordinates: in his commentary on Eudoxos he provides stars' polar distance (equivalent to the declination in the equatorial system), right ascension (equatorial), longitude (ecliptical), polar longitude (hybrid), but not celestial latitude.
As with most of his work, Hipparchus's star catalog was adopted and perhaps expanded by Ptolemy. Delambre, in 1817, cast doubt on Ptolemy's work. It was disputed whether the star catalog in the "Almagest" is due to Hipparchus, but 1976–2002 statistical and spatial analyses (by R. R. Newton, Dennis Rawlins, Gerd Grasshoff, Keith Pickering and Dennis Duke) have shown conclusively that the "Almagest" star catalog is almost entirely Hipparchan. Ptolemy has even (since Brahe, 1598) been accused by astronomers of fraud for stating ("Syntaxis", book 7, chapter 4) that he observed all 1025 stars: for almost every star he used Hipparchus's data and precessed it to his own epoch centuries later by adding 2°40' to the longitude, using an erroneously small precession constant of 1° per century.
In any case the work started by Hipparchus has had a lasting heritage, and was much later updated by Al Sufi (964) and Copernicus (1543). Ulugh Beg reobserved all the Hipparchus stars he could see from Samarkand in 1437 to about the same accuracy as Hipparchus's. The catalog was superseded only in the late 16th century by Brahe and Wilhelm IV of Kassel via superior ruled instruments and spherical trigonometry, which improved accuracy by an order of magnitude even before the invention of the telescope. Hipparchus is considered the greatest observational astronomer from classical antiquity until Brahe.
Hipparchus is only conjectured to have ranked the apparent magnitudes of stars on a numerical scale from 1, the brightest, to 6, the faintest. Nevertheless, this system certainly precedes Ptolemy, who used it extensively about AD 150. This system was made more precise and extended by N. R. Pogson in 1856, who placed the magnitudes on a logarithmic scale, making magnitude 1 stars 100 times brighter than magnitude 6 stars; thus each magnitude is 100^(1/5), or about 2.512, times brighter than the next faintest magnitude.
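Pogson's definition can be stated in one line: the brightness ratio between two stars is 100 raised to one fifth of their magnitude difference. A short sketch of that relation:

```python
def brightness_ratio(m_faint: float, m_bright: float) -> float:
    """Brightness ratio implied by two magnitudes on Pogson's scale:
    a difference of 5 magnitudes is defined as a factor of exactly 100,
    so a difference of dm corresponds to a factor of 100**(dm/5)."""
    return 100.0 ** ((m_faint - m_bright) / 5.0)

# Five magnitudes apart: a factor of exactly 100.
print(round(brightness_ratio(6, 1)))     # 100
# One magnitude apart: the fifth root of 100, about 2.512.
print(round(brightness_ratio(2, 1), 3))  # 2.512
```

Note that the scale runs backwards (larger magnitude means fainter star), a convention inherited directly from the ancient ranking.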
Hipparchus is generally recognized as discoverer of the precession of the equinoxes in 127 BC. His two books on precession, "On the Displacement of the Solsticial and Equinoctial Points" and "On the Length of the Year", are both mentioned in the "Almagest" of Claudius Ptolemy. According to Ptolemy, Hipparchus measured the longitude of Spica and Regulus and other bright stars. Comparing his measurements with data from his predecessors, Timocharis and Aristillus, he concluded that Spica had moved 2° relative to the autumnal equinox. He also compared the lengths of the tropical year (the time it takes the Sun to return to an equinox) and the sidereal year (the time it takes the Sun to return to a fixed star), and found a slight discrepancy. Hipparchus concluded that the equinoxes were moving ("precessing") through the zodiac, and that the rate of precession was not less than 1° in a century.
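The rate estimate is simple division: the observed shift of Spica divided by the time elapsed since the earlier observations. A rough illustration (the 160-year interval is an assumed round figure for the gap between Timocharis and Hipparchus, not a value from the sources):

```python
# Illustrative arithmetic only; the elapsed time is an assumed round
# figure, not Hipparchus's own record.
shift_deg = 2.0        # Spica's measured shift relative to the equinox
years_elapsed = 160    # roughly Timocharis (early 3rd c. BC) to Hipparchus
rate_per_century = shift_deg / years_elapsed * 100
print(round(rate_per_century, 2))  # 1.25
```

Any interval in the plausible range gives a rate comfortably above 1° per century, consistent with Hipparchus's stated lower bound (the modern value is about 1.39° per century).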
Hipparchus's treatise "Against the Geography of Eratosthenes" in three books is not preserved.
Most of our knowledge of it comes from Strabo, according to whom Hipparchus thoroughly and often unfairly criticized Eratosthenes, mainly for internal contradictions and inaccuracy in determining positions of geographical localities. Hipparchus insists that a geographic map must be based only on astronomical measurements of latitudes and longitudes and triangulation for finding unknown distances.
In geographic theory and methods Hipparchus introduced three main innovations.
He was the first to use the grade grid, to determine geographic latitude from star observations, and not only from the Sun's altitude, a method known long before him, and to suggest that geographic longitude could be determined by means of simultaneous observations of lunar eclipses in distant places. In the practical part of his work, the so-called "table of climata", Hipparchus listed latitudes for several tens of localities. In particular, he improved Eratosthenes' values for the latitudes of Athens, Sicily, and southern extremity of India.
In calculating latitudes of climata (latitudes correlated with the length of the longest solstitial day), Hipparchus used an unexpectedly accurate value for the obliquity of the ecliptic, 23°40' (the actual value in the second half of the 2nd century BC was approximately 23°43'), whereas all other ancient authors knew only a roughly rounded value 24°, and even Ptolemy used a less accurate value, 23°51'.
Hipparchus opposed the view generally accepted in the Hellenistic period that the Atlantic and Indian Oceans and the Caspian Sea are parts of a single ocean. At the same time he extends the limits of the oikoumene, i.e. the inhabited part of the land, up to the equator and the Arctic Circle.
Hipparchus' ideas found their reflection in the "Geography" of Ptolemy. In essence, Ptolemy's work is an extended attempt to realize Hipparchus' vision of what geography ought to be.
He is depicted opposite Ptolemy in Raphael's painting The School of Athens, although this figure is popularly believed to be Strabo or Zoroaster.
The rather cumbersome formal name for the ESA's Hipparcos Space Astrometry Mission was High Precision Parallax Collecting Satellite; it was deliberately named in this way to give an acronym, HiPParCoS, that echoed and commemorated the name of Hipparchus. The lunar crater Hipparchus and the asteroid 4000 Hipparchus are more directly named after him.
He was inducted into the International Space Hall of Fame in 2004.
The Astronomer's Monument at the Griffith Observatory in Los Angeles, California, United States features a relief of Hipparchus as one of six of the greatest astronomers of all time and the only one from Antiquity.
Huldrych Zwingli
Huldrych Zwingli or Ulrich Zwingli (1 January 1484 – 11 October 1531) was a leader of the Reformation in Switzerland, born during a time of emerging Swiss patriotism and increasing criticism of the Swiss mercenary system. He attended the University of Vienna and the University of Basel, a scholarly center of Renaissance humanism. He continued his studies while he served as a pastor in Glarus and later in Einsiedeln, where he was influenced by the writings of Erasmus.
In 1519, Zwingli became the pastor of the Grossmünster in Zürich where he began to preach ideas on reform of the Catholic Church. In his first public controversy in 1522, he attacked the custom of fasting during Lent. In his publications, he noted corruption in the ecclesiastical hierarchy, promoted clerical marriage, and attacked the use of images in places of worship. In 1525, he introduced a new communion liturgy to replace the Mass. He also clashed with the Anabaptists, which resulted in their persecution. Historians have debated whether or not he turned Zürich into a theocracy.
The Reformation spread to other parts of the Swiss Confederation, but several cantons resisted, preferring to remain Catholic. Zwingli formed an alliance of Reformed cantons which divided the Confederation along religious lines. In 1529, a war was averted at the last moment between the two sides. Meanwhile, Zwingli's ideas came to the attention of Martin Luther and other reformers. They met at the Marburg Colloquy and agreed on many points of doctrine, but they could not reach an accord on the doctrine of the Real Presence of Christ in the Eucharist.
In 1531, Zwingli's alliance applied an unsuccessful food blockade on the Catholic cantons. The cantons responded with an attack at a moment when Zürich was ill-prepared, and Zwingli died on the battlefield. His legacy lives on in the confessions, liturgy, and church orders of the Reformed churches of today.
The Swiss Confederation in Huldrych Zwingli's time consisted of thirteen states (cantons) as well as affiliated areas and common lordships. Unlike the modern state of Switzerland, which operates under a federal government, each of the thirteen cantons was nearly independent, conducting its own domestic and foreign affairs. Each canton formed its own alliances within and without the Confederation. This relative independence served as the basis for conflict during the time of the Reformation when the various cantons divided between different confessional camps. Military ambitions gained an additional impetus with the competition to acquire new territory and resources, as seen for example in the Old Zürich War of 1440–1446.
The wider political environment in Europe during the 15th and 16th centuries was also volatile. For centuries the relationship with the Confederation's powerful neighbour, France, determined the foreign policies of the Swiss. Nominally, the Confederation formed a part of the Holy Roman Empire. However, through a succession of wars culminating in the Swabian War in 1499, the Confederation had become "de facto" independent. As the two continental powers and minor regional states such as the Duchy of Milan, the Duchy of Savoy, and the Papal States competed and fought against each other, there were far-reaching political, economic, and social consequences for the Confederation. During this time the mercenary pension system became a subject of disagreement. The religious factions of Zwingli's time debated vociferously the merits of sending young Swiss men to fight in foreign wars mainly for the enrichment of the cantonal authorities.
These internal and external factors contributed to the rise of a Confederation national consciousness, in which the term "fatherland" () began to take on meaning beyond a reference to an individual canton. At the same time, Renaissance humanism, with its universal values and emphasis on scholarship (as exemplified by Erasmus (1466–1536), the "prince of humanism"), had taken root in the Confederation. Within this environment, defined by the confluence of Swiss patriotism and humanism, Zwingli was born in 1484.
Huldrych Zwingli was born on 1 January 1484 in Wildhaus, in the Toggenburg valley of Switzerland, to a family of farmers, the third child of nine. His father, Ulrich, played a leading role in the administration of the community ("Amtmann" or chief local magistrate). Zwingli's primary schooling was provided by his uncle, Bartholomew, a cleric in Weesen, where he probably met Katharina von Zimmern. At ten years old, Zwingli was sent to Basel to obtain his secondary education where he learned Latin under Magistrate Gregory Bünzli. After three years in Basel, he stayed a short time in Bern with the humanist, Henry Wölfflin. The Dominicans in Bern tried to persuade Zwingli to join their order and it is possible that he was received as a novice. However, his father and uncle disapproved of such a course and he left Bern without completing his Latin studies. He enrolled in the University of Vienna in the winter semester of 1498 but was expelled, according to the university's records. However, it is not certain that Zwingli was indeed expelled, and he re-enrolled in the summer semester of 1500; his activities in 1499 are unknown. Zwingli continued his studies in Vienna until 1502, after which he transferred to the University of Basel where he received the Master of Arts degree ("Magister") in 1506.
Zwingli was ordained in Constance, the seat of the local diocese, and he celebrated his first Mass in his hometown, Wildhaus, on 29 September 1506. As a young priest he had studied little theology, but this was not considered unusual at the time. His first ecclesiastical post was the pastorate of the town of Glarus, where he stayed for ten years. It was in Glarus, whose soldiers were used as mercenaries in Europe, that Zwingli became involved in politics. The Swiss Confederation was embroiled in various campaigns with its neighbours: the French, the Habsburgs, and the Papal States. Zwingli placed himself solidly on the side of the Roman See. In return, Pope Julius II honoured Zwingli by providing him with an annual pension. He took the role of chaplain in several campaigns in Italy, including the Battle of Novara in 1513. However, the decisive defeat of the Swiss in the Battle of Marignano caused a shift in mood in Glarus in favour of the French rather than the pope. Zwingli, the papal partisan, found himself in a difficult position and he decided to retreat to Einsiedeln in the canton of Schwyz. By this time, he had become convinced that mercenary service was immoral and that Swiss unity was indispensable for any future achievements. Some of his earliest extant writings, such as "The Ox" (1510) and "The Labyrinth" (1516), attacked the mercenary system using allegory and satire. His countrymen were presented as virtuous people within a French, imperial, and papal triangle. Zwingli stayed in Einsiedeln for two years during which he withdrew completely from politics in favour of ecclesiastical activities and personal studies.
Zwingli's time as the pastor of Glarus and Einsiedeln was characterized by inner growth and development. He perfected his Greek and he took up the study of Hebrew. His library contained over three hundred volumes from which he was able to draw upon classical, patristic, and scholastic works. He exchanged scholarly letters with a circle of Swiss humanists and began to study the writings of Erasmus. Zwingli took the opportunity to meet him while Erasmus was in Basel between August 1514 and May 1516. Zwingli's turn to relative pacifism and his focus on preaching can be traced to the influence of Erasmus.
In late 1518, the post of the "Leutpriestertum" (people's priest) of the Grossmünster at Zürich became vacant. The canons of the foundation that administered the Grossmünster recognised Zwingli's reputation as a fine preacher and writer. His connection with humanists was a decisive factor as several canons were sympathetic to Erasmian reform. In addition, his opposition to the French and to mercenary service was welcomed by Zürich politicians. On 11 December 1518, the canons elected Zwingli to become the stipendiary priest and on 27 December he moved permanently to Zürich.
On 1 January 1519, Zwingli gave his first sermon in Zürich. Deviating from the prevalent practice of basing a sermon on the Gospel lesson of a particular Sunday, Zwingli, using Erasmus' New Testament as a guide, began to read through the Gospel of Matthew, giving his interpretation during the sermon, known as the method of "lectio continua". He continued to read and interpret the book on subsequent Sundays until he reached the end and then proceeded in the same manner with the Acts of the Apostles, the New Testament epistles, and finally the Old Testament. His motives for doing this are not clear, but in his sermons he used exhortation to achieve moral and ecclesiastical improvement which were goals comparable with Erasmian reform. Sometime after 1520, Zwingli's theological model began to evolve into an idiosyncratic form that was neither Erasmian nor Lutheran. Scholars do not agree on the process of how he developed his own unique model. One view is that Zwingli was trained as an Erasmian humanist and Luther played a decisive role in changing his theology. Another view is that Zwingli did not pay much attention to Luther's theology and in fact he considered it as part of the humanist reform movement. A third view is that Zwingli was not a complete follower of Erasmus, but had diverged from him as early as 1516 and that he independently developed his theology.
Zwingli's theological stance was gradually revealed through his sermons. He attacked moral corruption and in the process he named individuals who were the targets of his denunciations. Monks were accused of indolence and high living. In 1519, Zwingli specifically rejected the veneration of saints and called for the need to distinguish between their true and fictional accounts. He cast doubts on hellfire, asserted that unbaptised children were not damned, and questioned the power of excommunication. His attack on the claim that tithing was a divine institution, however, had the greatest theological and social impact. This contradicted the immediate economic interests of the foundation. One of the elderly canons who had supported Zwingli's election, Konrad Hofmann, complained about his sermons in a letter. Some canons supported Hofmann, but the opposition never grew very large. Zwingli insisted that he was not an innovator and that the sole basis of his teachings was Scripture.
Within the diocese of Constance, Bernhardin Sanson was offering a special indulgence for contributors to the building of St Peter's in Rome. When Sanson arrived at the gates of Zürich at the end of January 1519, parishioners prompted Zwingli with questions. He responded with displeasure that the people were not being properly informed about the conditions of the indulgence and were being induced to part with their money on false pretences. This was over a year after Martin Luther published his Ninety-five theses (31 October 1517). The council of Zürich refused Sanson entry into the city. As the authorities in Rome were anxious to contain the fire started by Luther, the Bishop of Constance denied any support of Sanson and he was recalled.
In August 1519, Zürich was struck by an outbreak of the plague during which at least one in four persons died. All of those who could afford it left the city, but Zwingli remained and continued his pastoral duties. In September, he caught the disease and nearly died. He described his preparation for death in a poem, Zwingli's "Pestlied", consisting of three parts: the onset of the illness, the closeness to death, and the joy of recovery. The final verses of the first part read:
In the years following his recovery, Zwingli's opponents remained in the minority. When a vacancy occurred among the canons of the Grossmünster, Zwingli was elected to fulfill that vacancy on 29 April 1521. In becoming a canon, he became a full citizen of Zürich. He also retained his post as the people's priest of the Grossmünster.
The first public controversy regarding Zwingli's preaching broke out during the season of Lent in 1522. On the first fasting Sunday, 9 March, Zwingli and about a dozen other participants consciously transgressed the fasting rule by cutting and distributing two smoked sausages (the "Wurstessen" in Christoph Froschauer's workshop). Zwingli defended this act in a sermon, published on 16 April under the title "Von Erkiesen und Freiheit der Speisen" (Regarding the Choice and Freedom of Foods). He noted that no generally valid rule on food can be derived from the Bible and that to transgress such a rule is not a sin. The event, which came to be referred to as the Affair of the Sausages, is considered to be the start of the Reformation in Switzerland. Even before the publication of this treatise, the diocese of Constance reacted by sending a delegation to Zürich. The city council condemned the fasting violation, but assumed responsibility over ecclesiastical matters and requested that the religious authorities clarify the issue. The bishop responded on 24 May by admonishing the Grossmünster and city council and repeating the traditional position.
Following this event, Zwingli and other humanist friends petitioned the bishop on 2 July to abolish the requirement of celibacy on the clergy. Two weeks later the petition was reprinted for the public in German as "Eine freundliche Bitte und Ermahnung an die Eidgenossen" (A Friendly Petition and Admonition to the Confederates). The issue was not just an abstract problem for Zwingli, as he had secretly married a widow, Anna Reinhart, earlier in the year. Their cohabitation was well-known and their public wedding took place on 2 April 1524, three months before the birth of their first child. They would eventually have four children: Regula, William, Huldrych, and Anna. As the petition was addressed to the secular authorities, the bishop responded at the same level by notifying the Zürich government to maintain the ecclesiastical order. Other Swiss clergymen joined in Zwingli's cause which encouraged him to make his first major statement of faith, "Apologeticus Archeteles" (The First and Last Word). He defended himself against charges of inciting unrest and heresy. He denied the ecclesiastical hierarchy any right to judge on matters of church order because of its corrupted state.
The events of 1522 brought no clarification on the issues. Not only did the unrest between Zürich and the bishop continue, tensions were growing among Zürich's Confederation partners in the Swiss Diet. On 22 December, the Diet recommended that its members prohibit the new teachings, a strong indictment directed at Zürich. The city council felt obliged to take the initiative and find its own solution.
On 3 January 1523, the Zürich city council invited the clergy of the city and outlying region to a meeting to allow the factions to present their opinions. The bishop was invited to attend or to send a representative. The council would render a decision on who would be allowed to continue to proclaim their views. This meeting, the first Zürich disputation, took place on 29 January 1523.
The meeting attracted a large crowd of approximately six hundred participants. The bishop sent a delegation led by his vicar general, Johannes Fabri. Zwingli summarised his position in the "Schlussreden" (Concluding Statements or the Sixty-seven Articles). Fabri, who had not envisaged an academic disputation in the manner Zwingli had prepared for, was forbidden to discuss high theology before laymen, and simply insisted on the necessity of the ecclesiastical authority. The decision of the council was that Zwingli would be allowed to continue his preaching and that all other preachers should teach only in accordance with Scripture.
In September 1523, Leo Jud, Zwingli's closest friend and colleague and pastor of St. Peterskirche, publicly called for the removal of statues of saints and other icons. This led to demonstrations and iconoclastic activities. The city council decided to work out the matter of images in a second disputation. The essence of the mass and its sacrificial character was also included as a subject of discussion. Supporters of the mass claimed that the eucharist was a true sacrifice, while Zwingli claimed that it was a commemorative meal. As in the first disputation, an invitation was sent out to the Zürich clergy and the bishop of Constance. This time, however, the lay people of Zürich, the dioceses of Chur and Basel, the University of Basel, and the twelve members of the Confederation were also invited. About nine hundred persons attended this meeting, but neither the bishop nor the Confederation sent representatives. The disputation started on 26 October 1523 and lasted two days.
Zwingli again took the lead in the disputation. His opponent was the aforementioned canon, Konrad Hofmann, who had initially supported Zwingli's election. Also taking part was a group of young men demanding a much faster pace of reformation, who among other things pleaded for replacing infant baptism with adult baptism. This group was led by Conrad Grebel, one of the initiators of the Anabaptist movement. During the disputation, although the controversies of images and the mass were discussed, the arguments led to the question of whether the city council or the ecclesiastical government had the authority to decide on these issues. At this point, Konrad Schmid, a priest from Aargau and follower of Zwingli, made a pragmatic suggestion. As images were not yet considered to be valueless by everyone, he suggested that pastors preach on this subject under threat of punishment. He believed the opinions of the people would gradually change and the voluntary removal of images would follow. Hence, Schmid rejected the radicals and their iconoclasm, but supported Zwingli's position. In November the council passed ordinances in support of Schmid's motion. Zwingli wrote a booklet on the evangelical duties of a minister, "Kurze, christliche Einleitung" (Short Christian Introduction), and the council sent it out to the clergy and the members of the Confederation.
In December 1523, the council set a deadline of Pentecost in 1524 for a solution to the elimination of the mass and images. Zwingli gave a formal opinion in "Vorschlag wegen der Bilder und der Messe" (Proposal Concerning Images and the Mass). He did not urge an immediate, general abolition. The council decided on the orderly removal of images within Zürich, but rural congregations were granted the right to remove them based on majority vote. The decision on the mass was postponed.
Evidence of the effect of the Reformation was seen in early 1524. Candlemas was not celebrated, processions of robed clergy ceased, worshippers did not go with palms or relics on Palm Sunday to the Lindenhof, and triptychs remained covered and closed after Lent. Opposition to the changes came from Konrad Hofmann and his followers, but the council decided in favour of keeping the government mandates. When Hofmann left the city, opposition from pastors hostile to the Reformation broke down. The bishop of Constance tried to intervene in defending the mass and the veneration of images. Zwingli wrote an official response for the council and the result was the severance of all ties between the city and the diocese.
Although the council had hesitated in abolishing the mass, the decrease in the exercise of traditional piety allowed pastors to be unofficially released from the requirement of celebrating mass. As individual pastors altered their practices as each saw fit, Zwingli was prompted to address this disorganised situation by designing a communion liturgy in the German language. This was published in "Aktion oder Brauch des Nachtmahls" (Act or Custom of the Supper). Shortly before Easter, Zwingli and his closest associates requested the council to cancel the mass and to introduce the new public order of worship. On Maundy Thursday, 13 April 1525, Zwingli celebrated communion under his new liturgy. Wooden cups and plates were used to avoid any outward displays of formality. The congregation sat at set tables to emphasise the meal aspect of the sacrament. The sermon was the focal point of the service and there was no organ music or singing. The importance of the sermon in the worship service was underlined by Zwingli's proposal to limit the celebration of communion to four times a year.
For some time Zwingli had accused mendicant orders of hypocrisy and demanded their abolition in order to support the truly poor. He suggested the monasteries be changed into hospitals and welfare institutions and incorporate their wealth into a welfare fund. This was done by reorganising the foundations of the Grossmünster and Fraumünster and pensioning off remaining nuns and monks. The council secularised the church properties (Fraumünster handed over by Zwingli's acquaintance Katharina von Zimmern) and established new welfare programs for the poor. Zwingli requested permission to establish a Latin school, the "Prophezei" (Prophecy) or "Carolinum", at the Grossmünster. The council agreed and it was officially opened on 19 June 1525 with Zwingli and Jud as teachers. It served to retrain and re-educate the clergy. The Zürich Bible translation, traditionally attributed to Zwingli and printed by Christoph Froschauer, bears the mark of teamwork from the Prophecy school. Scholars have not yet attempted to clarify Zwingli's share of the work based on external and stylistic evidence.
Shortly after the second Zürich disputation, many in the radical wing of the Reformation became convinced that Zwingli was making too many concessions to the Zürich council. They rejected the role of civil government and demanded the immediate establishment of a congregation of the faithful. Conrad Grebel, the leader of the radicals and the emerging Anabaptist movement, spoke disparagingly of Zwingli in private. On 15 August 1524 the council insisted on the obligation to baptise all newborn infants. Zwingli secretly conferred with Grebel's group and late in 1524, the council called for official discussions. When talks were broken off, Zwingli published "Wer Ursache gebe zu Aufruhr" (Whoever Causes Unrest) clarifying the opposing points-of-view. On 17 January 1525 a public debate was held and the council decided in favour of Zwingli. Anyone refusing to have their children baptised was required to leave Zürich. The radicals ignored these measures and on 21 January, they met at the house of the mother of another radical leader, Felix Manz. Grebel and a third leader, George Blaurock, performed the first recorded Anabaptist adult baptisms.
On 2 February, the council repeated the requirement on the baptism of all babies and some who failed to comply were arrested and fined, Manz and Blaurock among them. Zwingli and Jud interviewed them and more debates were held before the Zürich council. Meanwhile, the new teachings continued to spread to other parts of the Confederation as well as a number of Swabian towns. On 6–8 November, the last debate on the subject of baptism took place in the Grossmünster. Grebel, Manz, and Blaurock defended their cause before Zwingli, Jud, and other reformers. There was no serious exchange of views as each side would not move from their positions and the debates degenerated into an uproar, each side shouting abuse at the other.
The Zürich council decided that no compromise was possible. On 7 March 1526 it released the notorious mandate that no one shall rebaptise another under the penalty of death. Although Zwingli, technically, had nothing to do with the mandate, there is no indication that he disapproved. Felix Manz, who had sworn to leave Zürich and not to baptise any more, had deliberately returned and continued the practice. After he was arrested and tried, he was executed on 5 January 1527 by being drowned in the Limmat. He was the first Anabaptist martyr; three more were to follow, after which all others either fled or were expelled from Zürich.
On 8 April 1524, five cantons, Lucerne, Uri, Schwyz, Unterwalden, and Zug, formed an alliance, "die fünf Orte" (the Five States) to defend themselves from Zwingli's Reformation. They contacted the opponents of Martin Luther including John Eck, who had debated Luther in the Leipzig Disputation of 1519. Eck offered to dispute Zwingli and he accepted. However, they could not agree on the selection of the judging authority, the location of the debate, and the use of the Swiss Diet as a court. Because of the disagreements, Zwingli decided to boycott the disputation. On 19 May 1526, all the cantons sent delegates to Baden. Although Zürich's representatives were present, they did not participate in the sessions. Eck led the Catholic party while the reformers were represented by Johannes Oecolampadius of Basel, a theologian from Württemberg who had carried on an extensive and friendly correspondence with Zwingli. While the debate proceeded, Zwingli was kept informed of the proceedings and printed pamphlets giving his opinions. It was of little use as the Diet decided against Zwingli. He was to be banned and his writings were no longer to be distributed. Of the thirteen Confederation members, Glarus, Solothurn, Fribourg, and Appenzell as well as the Five States voted against Zwingli. Bern, Basel, Schaffhausen, and Zürich supported him.
The Baden disputation exposed a deep rift in the Confederation on matters of religion. The Reformation was now emerging in other states. The city of St Gallen, an affiliated state to the Confederation, was led by a reformed mayor, Joachim Vadian, and the city abolished the mass in 1527, just two years after Zürich. In Basel, although Zwingli had a close relationship with Oecolampadius, the government did not officially sanction any reformatory changes until 1 April 1529 when the mass was prohibited. Schaffhausen, which had closely followed Zürich's example, formally adopted the Reformation in September 1529. In the case of Bern, Berchtold Haller, the priest at St Vincent Münster, and Niklaus Manuel, the poet, painter, and politician, had campaigned for the reformed cause. But it was only after another disputation that Bern counted itself as a canton of the Reformation. Four hundred and fifty persons participated, including pastors from Bern and other cantons as well as theologians from outside the Confederation such as Martin Bucer and Wolfgang Capito from Strasbourg, Ambrosius Blarer from Constance, and Andreas Althamer from Nuremberg. Eck and Fabri refused to attend and the Catholic cantons did not send representatives. The meeting started on 6 January 1528 and lasted nearly three weeks. Zwingli assumed the main burden of defending the Reformation and he preached twice in the Münster. On 7 February 1528 the council decreed that the Reformation be established in Bern.
Even before the Bern disputation, Zwingli was canvassing for an alliance of reformed cities. Once Bern officially accepted the Reformation, a new alliance, "das Christliche Burgrecht" (the Christian Civic Union) was created. The first meetings were held in Bern between representatives of Bern, Constance, and Zürich on 5–6 January 1528. Other cities, including Basel, Biel, Mülhausen, Schaffhausen, and St Gallen, eventually joined the alliance. The Five (Catholic) States felt encircled and isolated, so they searched for outside allies. After two months of negotiations, the Five States formed "die Christliche Vereinigung" (the Christian Alliance) with Ferdinand of Austria on 22 April 1529.
Soon after the Austrian treaty was signed, a reformed preacher, Jacob Kaiser, was captured in Uznach and executed in Schwyz. This triggered a strong reaction from Zwingli; he drafted "Ratschlag über den Krieg" (Advice About the War) for the government. He outlined justifications for an attack on the Catholic states and other measures to be taken. Before Zürich could implement his plans, a delegation from Bern that included Niklaus Manuel arrived in Zürich. The delegation called on Zürich to settle the matter peacefully. Manuel added that an attack would expose Bern to further dangers as Catholic Valais and the Duchy of Savoy bordered its southern flank. He then noted, "You cannot really bring faith by means of spears and halberds." Zürich, however, decided that it would act alone, knowing that Bern would be obliged to acquiesce. War was declared on 8 June 1529. Zürich was able to raise an army of 30,000 men. The Five States were abandoned by Austria and could raise only 9,000 men. The two forces met near Kappel, but war was averted due to the intervention of Hans Aebli, a relative of Zwingli, who pleaded for an armistice.
Zwingli was obliged to state the terms of the armistice. He demanded the dissolution of the Christian Alliance; unhindered preaching by reformers in the Catholic states; prohibition of the pension system; payment of war reparations; and compensation to the children of Jacob Kaiser. Manuel was involved in the negotiations. Bern was not prepared to insist on the unhindered preaching or the prohibition of the pension system. Zürich and Bern could not agree and the Five (Catholic) States pledged only to dissolve their alliance with Austria. This was a bitter disappointment for Zwingli and it marked his decline in political influence. The first Land Peace of Kappel, "der erste Landfriede", ended the war on 24 June.
While Zwingli carried on the political work of the Swiss Reformation, he developed his theological views with his colleagues. The famous disagreement between Luther and Zwingli on the interpretation of the eucharist originated when Andreas Karlstadt, Luther's former colleague from Wittenberg, published three pamphlets on the Lord's Supper in which Karlstadt rejected the idea of a real presence in the elements. These pamphlets, published in Basel in 1524, received the approval of Oecolampadius and Zwingli. Luther rejected Karlstadt's arguments and considered Zwingli primarily to be a partisan of Karlstadt. Zwingli began to express his thoughts on the eucharist in several publications including "de Eucharistia" (On the Eucharist). Understanding that Christ had ascended to heaven and was sitting at the Father's right hand, Zwingli criticized the idea that Christ's humanity could be in two places at once. Unlike his divinity, Christ's human body was not omnipresent and so could not be in heaven and at the same time be present in the elements. Timothy George, evangelical author, editor of Christianity Today and professor of Historical Theology at Beeson Divinity School at Samford University, has firmly refuted a long-standing misreading of Zwingli that erroneously claimed the Reformer denied all notions of real presence and believed in a memorial view of the Supper, where it was purely symbolic.
By spring 1527, Luther reacted strongly to Zwingli's views in the treatise "Dass Diese Worte Christi "Das ist mein Leib etc." noch fest stehen wider die Schwarmgeister" (That These Words of Christ "This is My Body etc." Still Stand Firm Against the Fanatics). The controversy continued until 1528 when efforts to build bridges between the Lutheran and the Zwinglian views began. Martin Bucer tried to mediate while Philip of Hesse, who wanted to form a political coalition of all Protestant forces, invited the two parties to Marburg to discuss their differences. This event became known as the Marburg Colloquy.
Zwingli accepted Philip's invitation fully believing that he would be able to convince Luther. In contrast, Luther did not expect anything to come out of the meeting and had to be urged by Philip to attend. Zwingli, accompanied by Oecolampadius, arrived on 28 September 1529, with Luther and Philipp Melanchthon arriving shortly thereafter. Other theologians also participated including Martin Bucer, Andreas Osiander, Johannes Brenz, and Justus Jonas. The debates were held from 1–4 October and the results were published in the fifteen "Marburg Articles". The participants were able to agree on fourteen of the articles, but the fifteenth article established the differences in their views on the presence of Christ in the eucharist. Professor George summarized the incompatible views, "On this issue, they parted without having reached an agreement. Both Luther and Zwingli agreed that the bread in the Supper was a sign. For Luther, however, that which the bread signified, namely the body of Christ, was present “in, with, and under” the sign itself. For Zwingli, though, sign and thing signified were separated by a distance—the width between heaven and earth."
The failure to find agreement resulted in strong emotions on both sides. When the two sides departed, Zwingli cried out in tears, "There are no people on earth with whom I would rather be at one than the [Lutheran] Wittenbergers." Because of the differences, Luther initially refused to acknowledge Zwingli and his followers as Christians.
With the failure of the Marburg Colloquy and the split of the Confederation, Zwingli set his goal on an alliance with Philip of Hesse. He kept up a lively correspondence with Philip. Bern refused to participate, but after a long process, Zürich, Basel, and Strasbourg signed a mutual defence treaty with Philip in November 1530. Zwingli also personally negotiated with France's diplomatic representative, but the two sides were too far apart. France wanted to maintain good relations with the Five States. Approaches to Venice and Milan also failed.
As Zwingli was working on establishing these political alliances, Charles V, the Holy Roman Emperor, invited Protestants to the Augsburg Diet to present their views so that he could render a verdict on the issue of faith. The Lutherans presented the Augsburg Confession. Under the leadership of Martin Bucer, the cities of Strasbourg, Constance, Memmingen, and Lindau produced the Tetrapolitan Confession. This document attempted to take a middle position between the Lutherans and Zwinglians. It was too late for the "Burgrecht" cities to produce a confession of their own. Zwingli then produced his own private confession, "Fidei ratio" (Account of Faith), in which he explained his faith in twelve articles conforming to the articles of the Apostles' Creed. The tone was strongly anti-Catholic as well as anti-Lutheran. The Lutherans did not react officially, but criticised it privately. Zwingli's and Luther's old opponent, Johann Eck, counter-attacked with a publication, "Refutation of the Articles Zwingli Submitted to the Emperor".
When Philip of Hesse formed the Schmalkaldic League at the end of 1530, the four cities of the Tetrapolitan Confession joined on the basis of a Lutheran interpretation of that confession. Given the flexibility of the league's entrance requirements, Zürich, Basel, and Bern also considered joining. However, Zwingli could not reconcile the Tetrapolitan Confession with his own beliefs and wrote a harsh refusal to Bucer and Capito. This offended Philip to the point where relations with the League were severed. The "Burgrecht" cities now had no external allies to help deal with internal Confederation religious conflicts.
The peace treaty of the First Kappel War did not define the right of unhindered preaching in the Catholic states. Zwingli interpreted this to mean that preaching should be permitted, but the Five States suppressed any attempts to reform. The "Burgrecht" cities considered different means of applying pressure to the Five States. Basel and Schaffhausen preferred quiet diplomacy while Zürich wanted armed conflict. Zwingli and Jud unequivocally advocated an attack on the Five States. Bern took a middle position which eventually prevailed. In May 1531, Zürich reluctantly agreed to impose a food blockade. It failed to have any effect and in October, Bern decided to withdraw the blockade. Zürich urged its continuation and the "Burgrecht" cities began to quarrel among themselves.
On 9 October 1531, in a surprise move, the Five States declared war on Zürich. Zürich's mobilisation was slow due to internal squabbling and on 11 October, 3500 poorly deployed men encountered a Five States force nearly double their size near Kappel. Many pastors, including Zwingli, were among the soldiers. The battle lasted less than one hour and Zwingli was among the 500 casualties in the Zürich army.
Zwingli had considered himself first and foremost a soldier of Christ; second a defender of his country, the Confederation; and third a leader of his city, Zürich, where he had lived for the previous twelve years. Ironically, he died at the age of 47, not for Christ nor for the Confederation, but for Zürich.
In Tabletalk, Luther is recorded saying: "They say that Zwingli recently died thus; if his error had prevailed, we would have perished, and our church with us. It was a judgment of God. That was always a proud people. The others, the papists, will probably also be dealt with by our Lord God." Erasmus wrote, "We are freed from great fear by the death of the two preachers, Zwingli and Oecolampadius, whose fate has wrought an incredible change in the mind of many. This is the wonderful hand of God on high." Oecolampadius had died on 24 November. Erasmus also wrote, "If Bellona had favoured them, it would have been all over with us."
According to Zwingli, the cornerstone of theology is the Bible. Zwingli appealed to scripture constantly in his writings. He placed its authority above other sources such as the ecumenical councils or the Church Fathers, although he did not hesitate to use other sources to support his arguments. The principles that guide Zwingli's interpretations are derived from his rationalist humanist education and his Reformed understanding of the Bible. He rejected literalist interpretations of a passage, such as those of the Anabaptists, and used synecdoche and analogies, methods he describes in "A Friendly Exegesis" (1527). Two analogies that he used quite effectively were between baptism and circumcision and between the eucharist and Passover. He also paid attention to the immediate context and attempted to understand the purpose behind it, comparing passages of scripture with each other.
Zwingli rejected the word "sacrament" in the popular usage of his time. For ordinary people, the word meant some kind of holy action with inherent power to free the conscience from sin. For Zwingli, a sacrament was an initiatory ceremony or a pledge, pointing out that the word was derived from "sacramentum", meaning an oath. (However, the word is also translated "mystery".) In his early writings on baptism, he noted that baptism was an example of such a pledge. He challenged Catholics by accusing them of superstition when they ascribed to the water of baptism a certain power to wash away sin. Later, in his conflict with the Anabaptists, he defended the practice of infant baptism, noting that there is no law forbidding the practice. He argued that baptism was a sign of a covenant with God, thereby replacing circumcision in the Old Testament.
Zwingli approached the eucharist in a similar manner to baptism. During the first Zürich disputation in 1523, he denied that an actual sacrifice occurred during the mass, arguing that Christ made the sacrifice only once and for all eternity. Hence, the eucharist was "a memorial of the sacrifice". Following this argument, he further developed his view, coming to the conclusion of the "signifies" interpretation for the words of the institution. He used various passages of scripture to argue against transubstantiation as well as Luther's views, the key text being John 6:63, "It is the Spirit who gives life, the flesh is of no avail". Zwingli's approach and interpretation of scripture to understand the meaning of the eucharist was one reason he could not reach a consensus with Luther.
The impact of Luther on Zwingli's theological development has long been a source of interest and discussion among Lutheran scholars, who seek to firmly establish Luther as the first Reformer. Zwingli himself asserted vigorously his independence of Luther and the most recent studies have lent credibility to this claim. Zwingli appears to have read Luther's books in search of confirmation from Luther for his own views. He agreed with the stand Luther took against the pope. Like Luther, Zwingli was also a student and admirer of Augustine.
Zwingli enjoyed music and could play several instruments, including the violin, harp, flute, dulcimer and hunting horn. He would sometimes amuse the children of his congregation on his lute and was so well known for his playing that his enemies mocked him as "the evangelical lute-player and fifer". Three of Zwingli's "Lieder" or hymns have been preserved: the "Pestlied" mentioned above, an adaptation of Psalm 65 (c. 1525), and the "Kappeler Lied", which is believed to have been composed during the campaign of the first war of Kappel (1529). These songs were not meant to be sung during worship services and are not identified as hymns of the Reformation, though they were published in some 16th-century hymnals.
Zwingli criticised the practice of priestly chanting and monastic choirs. The criticism dates from 1523 when he attacked certain worship practices. His arguments are detailed in the Conclusions of 1525, in which Conclusions 44, 45, and 46 are concerned with musical practices under the rubric of "prayer". He associated music with images and vestments, all of which he felt diverted people's attention from true spiritual worship. It is not known what he thought of the musical practices in early Lutheran churches. Zwingli, however, eliminated instrumental music from worship in the church, stating that God had not commanded it in worship. The organist of the People's Church in Zürich is recorded as weeping upon seeing the great organ broken up. Although Zwingli did not express an opinion on congregational singing, he made no effort to encourage it. Nevertheless, scholars have found that Zwingli was supportive of a role for music in the church. Gottfried W. Locher writes, "The old assertion 'Zwingli was against church singing' holds good no longer ... Zwingli's polemic is concerned exclusively with the medieval Latin choral and priestly chanting and not with the hymns of evangelical congregations or choirs". Locher goes on to say that "Zwingli freely allowed vernacular psalm or choral singing. In addition, he even seems to have striven for lively, antiphonal, unison recitative". Locher then summarizes his comments on Zwingli's view of church music as follows: "The chief thought in his conception of worship was always 'conscious attendance and understanding'—'devotion', yet with the lively participation of all concerned".
Today's Musikabteilung (literally: music department), located in the choir of the "Predigern" church in Zürich, was founded in 1971 and forms a scientific music collection of European importance. It publishes the materials entrusted to it at irregular intervals as CDs. The repertoire ranges from the early 16th-century spiritual music of Huldrych Zwingli to music of the late 20th century, published under the label "Musik aus der Zentralbibliothek Zürich".
Zwingli was a humanist and a scholar with many devoted friends and disciples. He communicated as easily with the ordinary people of his congregation as with rulers such as Philip of Hesse. His reputation as a stern, stolid reformer is counterbalanced by the fact that he had an excellent sense of humour and used satiric fables, spoofing, and puns in his writings. He was more conscious of social obligations than was Luther, and he genuinely believed that the masses would accept a government guided by God's word. He tirelessly promoted assistance to the poor, who he believed should be cared for by a truly Christian community.
In December 1531 the Zürich council selected Heinrich Bullinger (1504–1575) as Zwingli's successor. Bullinger immediately removed any doubts about Zwingli's orthodoxy and defended him as a prophet and a martyr. During Bullinger's ascendancy, the confessional divisions of the Swiss Confederation stabilised. Bullinger rallied the reformed cities and cantons and helped them to recover from the defeat at Kappel. Zwingli had instituted fundamental reforms; Bullinger consolidated and refined them.
Scholars have found it difficult to assess Zwingli's impact on history, for several reasons. There is no consensus on the definition of "Zwinglianism"; by any definition, Zwinglianism evolved under his successor, Heinrich Bullinger; and research into Zwingli's influence on Bullinger and John Calvin remains rudimentary. Bullinger adopted most of Zwingli's points of doctrine. Like Zwingli, he summarised his theology several times, the best-known example being the Second Helvetic Confession of 1566. Meanwhile, Calvin had taken over the Reformation in Geneva. Calvin differed with Zwingli on the eucharist and criticised him for regarding it as simply a metaphorical event. In 1549, however, Bullinger and Calvin succeeded in overcoming the differences in doctrine and produced the "Consensus Tigurinus" (Zürich Consensus). They declared that the eucharist was not just symbolic of the meal, but they also rejected the Lutheran position that the body and blood of Christ is in union with the elements. With this rapprochement, Calvin established his role in the Swiss Reformed Churches and eventually in the wider world.
Outside of Switzerland, no church counts Zwingli as its founder. Scholars speculate as to why Zwinglianism has not diffused more widely, even though Zwingli's theology is considered the first expression of Reformed theology. Although his name is not widely recognised, Zwingli's legacy lives on in the basic confessions of the Reformed churches of today. He is often called, after Martin Luther and John Calvin, the "Third Man of the Reformation".
In 2019, a Swiss-German film on the career of the reformer, "Zwingli", was released.
Zwingli's collected works are expected to fill 21 volumes. A collection of selected works was published in 1995 by the "Zwingliverein" in collaboration with the "Theologischer Verlag Zürich". This four-volume collection contains the following works:
The complete 21-volume edition is being undertaken by the "Zwingliverein" in collaboration with the "Institut für schweizerische Reformationsgeschichte", and is projected to be organised as follows:
Vols. XIII and XIV have been published; vols. XV and XVI are in preparation. Vols. XVII to XXI are planned to cover the New Testament.
Older German / Latin editions available online include:
See also the following English translations of selected works by Zwingli:
Homeschooling
Homeschooling, also known as home education, is the education of children at home or at a variety of places other than school. Home education is usually conducted by a parent, tutor, or an online teacher. Many families use less formal ways of educating. "Homeschooling" is the term commonly used in North America, whereas "home education" is commonly used in the United Kingdom, Europe, and in many Commonwealth countries.
Before the introduction of compulsory school attendance laws, most childhood education was done by families and local communities. In many developed countries, homeschooling is a legal alternative to public and private schools. In other nations, homeschooling remains illegal or restricted to specific conditions, as recorded by homeschooling international status and statistics.
For most of history and in different cultures, the education of children at home by family members was a common practice. Enlisting professional tutors was an option available only to the wealthy. Homeschooling declined in the 19th and 20th centuries with the enactment of compulsory attendance laws. However, it continued to be practised in isolated communities. Homeschooling began a resurgence in the 1960s and 1970s with educational reformists dissatisfied with industrialized education.
The earliest public schools in modern Western culture were established during the Reformation, with the encouragement of Martin Luther, in the German states of Gotha and Thuringia in 1524 and 1527. From the 1500s to the 1800s the literacy rate increased until a majority of adults were literate, but this development occurred before the implementation of compulsory attendance and universal education.
Home education and apprenticeship continued to remain the main form of education until the 1830s. However, in the 18th century, the majority of people in Europe lacked formal education. Since the early 19th century, formal classroom schooling became the most common means of schooling throughout the developed countries.
In 1647, New England provided compulsory elementary education. Regional differences in schooling existed in colonial America. In the south, farms and plantations were so widely dispersed that community schools such as those in the more compact settlements of the north were impossible. In the middle colonies, the educational situation varied when comparing New York with New England.
Most Native American tribal cultures traditionally used home education and apprenticeship to pass knowledge to children. Parents were supported by extended relatives and tribal leaders in the education of their children. The Native Americans vigorously resisted compulsory education in the United States.
In the 1960s, Rousas John Rushdoony began to advocate homeschooling, which he saw as a way to combat the secular nature of the public school system in the United States. He vigorously attacked progressive school reformers such as Horace Mann and John Dewey, and argued for the dismantling of the state's influence in education in three works: "Intellectual Schizophrenia", "The Messianic Character of American Education", and "The Philosophy of the Christian Curriculum". Rushdoony was frequently called as an expert witness by the Home School Legal Defense Association (HSLDA) in court cases. He frequently advocated the use of private schools.
During this time, American educational professionals Raymond and Dorothy Moore began to research the academic validity of the rapidly growing Early Childhood Education movement. This research included independent studies by other researchers and a review of over 8,000 studies bearing on early childhood education and the physical and mental development of children.
They asserted that formal schooling before ages 8–12 not only lacked the anticipated effectiveness, but also harmed children. The Moores published their view that formal schooling was damaging young children academically, socially, mentally, and even physiologically. The Moores presented evidence that childhood problems such as juvenile delinquency, nearsightedness, increased enrollment of students in special education classes and behavioral problems were the result of increasingly earlier enrollment of students. The Moores cited studies demonstrating that orphans who were given surrogate mothers were measurably more intelligent, with superior long-term effects – even though the mothers were "mentally retarded teenagers" – and that illiterate tribal mothers in Africa produced children who were socially and emotionally more advanced than typical western children, "by western standards of measurement".
Their primary assertion was that the bonds and emotional development made at home with parents during these years produced critical long-term results that were cut short by enrollment in schools, and could neither be replaced nor corrected in an institutional setting afterward. Recognizing a necessity for early out-of-home care for some children, particularly special needs and impoverished children and children from exceptionally inferior homes, they maintained that the vast majority of children were far better situated at home, even with mediocre parents, than with the most gifted and motivated teachers in a school setting. They described the difference as follows: "This is like saying, if you can help a child by taking him off the cold street and housing him in a warm tent, then warm tents should be provided for "all" children – when obviously most children already have even more secure housing."
The Moores embraced homeschooling after the publication of their first work, "Better Late Than Early", in 1975, and became important homeschool advocates and consultants with the publication of books such as "Home Grown Kids" (1981), and "Homeschool Burnout".
Simultaneously, other authors published books questioning the premises and efficacy of compulsory schooling, including "Deschooling Society" by Ivan Illich in 1970 and "No More Public School" by Harold Bennet in 1972.
In 1976, educator John Holt published "Instead of Education: Ways to Help People Do Things Better". In its conclusion, he called for a "Children's Underground Railroad" to help children escape compulsory schooling. In response, Holt was contacted by families from around the U.S. to tell him that they were educating their children at home. In 1977, after corresponding with a number of these families, Holt began producing "Growing Without Schooling", a newsletter dedicated to home education. Holt was nicknamed the "father of homeschooling." Holt later wrote a book about homeschooling, "Teach Your Own", in 1981.
In 1980, Holt said, "I want to make it clear that I don't see homeschooling as some kind of answer to badness of schools. I think that the home is the proper base for the exploration of the world which we call learning or education. Home would be the best base no matter how good the schools were." One common theme in the homeschool philosophies of both Holt and that of the Moores is that home education should not attempt to bring the school construct into the home, or a view of education as an academic preliminary to life. They viewed home education as a natural, experiential aspect of life that occurs as the members of the family are involved with one another in daily living.
Homeschooling can be used as a form of supplemental education and as a way of helping children learn under specific circumstances. The term may also refer to instruction in the home under the supervision of correspondence schools or umbrella schools. Some jurisdictions require adherence to an approved curriculum. A curriculum-free philosophy of homeschooling is sometimes called "unschooling", a term coined in 1977 by American educator and author John Holt in his magazine, "Growing Without Schooling". The term emphasizes the more spontaneous, less structured learning environment in which a child's interests drive his pursuit of knowledge. Some parents provide a liberal arts education using the trivium and quadrivium as the main models.
Parents commonly cite two main motivations for homeschooling their children: dissatisfaction with the local schools and the interest in increased involvement with their children's learning and development. Parental dissatisfaction with available schools typically includes concerns about the school environment, the quality of academic instruction, the curriculum, bullying, racism and lack of faith in the school's ability to cater to their children's special needs. Some parents homeschool in order to have greater control over what and how their children are taught, to cater more adequately to an individual child's aptitudes and abilities, to provide instruction from a specific religious or moral position, and to take advantage of the efficiency of one-to-one instruction and thus allow the child to spend more time on childhood activities, socializing, and non-academic learning.
Some African-American families choose homeschool as a way of increasing their children's understanding of African-American history – such as the Jim Crow laws that resulted in their ancestors being beaten or killed for learning to read – and to limit the harm caused by the unintentional and sometimes subtle systemic racism that affects most American schools.
Some parents have objections to the secular nature of public schools and homeschool in order to give their children a religious education. Use of a religious curriculum is common among these families. Recent sociological work suggests that an increasing number of parents are choosing homeschooling because of low academic quality at the local schools, or because of bullying or health problems.
Homeschooling may also be a factor in the choice of parenting style. Homeschooling can be a matter of consistency for families living in isolated rural locations, for those temporarily abroad, and for those who travel frequently. Many young athletes, actors, and musicians are taught at home to accommodate their training and practice schedules more conveniently. Homeschooling can be about mentorship and apprenticeship, in which a tutor or teacher is with the child for many years and becomes more intimately acquainted with the child.
According to Elizabeth Bartholet, surveys of homeschoolers show that a majority of homeschoolers in the USA are motivated by "conservative Christian beliefs, and seek to remove their children from mainstream culture".
Homeschools use a wide variety of methods and materials. Families choose different educational methods, which represent a variety of educational philosophies and paradigms. Some of the methods or learning environments used include Classical education (including Trivium, Quadrivium), Charlotte Mason education, Montessori method, Theory of multiple intelligences, Unschooling, Radical Unschooling, Waldorf education, School-at-home (curriculum choices from both secular and religious publishers), A Thomas Jefferson Education, unit studies, curriculum made up from private or small publishers, apprenticeship, hands-on-learning, distance learning (both online and correspondence), dual enrollment in local schools or colleges, and curriculum provided by local schools and many others. Some of these approaches are used in private and public schools. Educational research and studies support the use of some of these methods. Unschooling, natural learning, Charlotte Mason Education, Montessori, Waldorf, apprenticeship, hands-on-learning, unit studies are supported to varying degrees by research by constructivist learning theories and situated cognition theories. Elements of these theories may be found in the other methods as well.
A student's education may be customized to support his or her learning level, style, and interests. It is not uncommon for a student to experience more than one approach as the family discovers what works best for their student. Many families use an eclectic approach, picking and choosing from various suppliers. For sources of curricula and books, a study found that 78 percent utilized "a public library"; 77 percent used "a homeschooling catalog, publisher, or individual specialist"; 68 percent used "retail bookstore or another store"; 60 percent used "an education publisher that was not affiliated with homeschooling." "Approximately half" used curriculum from "a homeschooling organization", 37 percent from a "church, synagogue or other religious institution" and 23 percent from "their local public school or district." In 2003, 41 percent utilized some sort of distance learning, approximately 20 percent by "television, video or radio"; 19 percent via "The Internet, e-mail, or the World Wide Web"; and 15 percent taking a "correspondence course by mail designed specifically for homeschoolers."
Individual governmental units, e.g. states and local districts, vary in official curriculum and attendance requirements.
As a subset of homeschooling, informal learning happens outside of the classroom, but has no traditional boundaries of education. Informal learning is an everyday form of learning through participation and creation, in contrast with the traditional view of teacher-centered learning. The term is often combined with non-formal learning and self-directed learning. Informal learning differs from traditional learning in that there are no expected objectives or outcomes. From the learner's standpoint, the knowledge that they receive is not intentional. Anything from planting a garden to baking a cake, or even talking to a technician at work about the installation of new software, can be considered informal learning. The individual completes a task with different intentions but ends up learning skills in the process. Children watching their tomato plants grow will not generate questions about photosynthesis, but they will learn that their plants grow with water and sunlight. This gives them a basic understanding of complex scientific concepts without any background studying. The recent destigmatization of homeschooling is connected with the waning of the traditional idea that the state needs to be in primary and ultimate control over the education and upbringing of all children to create future adult citizens. This places ever-growing importance on the ideas and concepts that children learn outside of the traditional classroom setting, including informal learning.
Depending on the part of the world, informal learning can take on many different identities and has differing cultural importance. Many ways of organizing homeschooling draw on apprenticeship qualities and on non-Western cultures. In some South American indigenous cultures, such as the Chillihuani community in Peru, children learn irrigation and farming techniques through play, advancing them not only in their own village and society, but also in their knowledge of practical techniques that they will need to survive. In Western culture, children use informal learning in two main ways. The first, as discussed above, is through hands-on experience with new material. The second is asking questions of someone who has more experience than they have (i.e. parents, elders). Children's inquisitive nature is their way of cementing the ideas they have learned through exposure to informal learning. It is a more casual way of learning than traditional schooling and serves the purpose of taking in information however they can.
All other approaches to homeschooling are subsumed under two basic categories: structured and unstructured homeschooling. Structured homeschooling includes any method or style of home education that follows a basic curriculum with articulated goals and outcomes. This style attempts to imitate the structure of the traditional school setting while personalizing the curriculum. Unstructured homeschooling is any form of home education where parents do not construct a curriculum at all. Unschooling, as it is known, attempts to teach through the child's daily experiences and focuses more on self-directed learning by the child, free of textbooks, teachers, and any formal assessment of success or failure.
In a unit study approach, multiple subjects such as math, science, history, art, and geography, are studied in relation to a single topic. Unit studies are useful for teaching multiple grades simultaneously as the difficulty level can be adjusted for each student. An extended form of unit studies, Integrated Thematic Instruction utilizes one central theme integrated throughout the curriculum so that students finish a school year with a deep understanding of a certain broad subject or idea.
All-in-one homeschooling curricula (variously known as "school-at-home", "the traditional approach", "school-in-a-box" or "The Structured Approach"), are instructional methods of teaching in which the curriculum and homework of the student are similar or identical to those used in a public or private school. Purchased as a grade-level package or separately by subject, the package may contain all of the needed books, materials, tests, answer keys, and extensive teacher guides. These materials cover the same subject areas as public schools, allowing for an easy transition into the school system. These are among the most expensive options for homeschooling, but they require minimal preparation and are easy to use. There is, however, complete curriculum available for free, such as that available at allinonehomeschool.com. Some localities provide the same materials used at local schools to homeschoolers. The purchase of a complete curriculum and their teaching/grading service from an accredited distance learning curriculum provider may allow students to obtain an accredited high school diploma.
"Natural learning" refers to a type of learning-on-demand where children pursue knowledge based on their interests and parents take an active part in facilitating activities and experiences conducive to learning but do not rely heavily on textbooks or spend much time "teaching", looking instead for "learning moments" throughout their daily activities. Parents see their role as that of affirming through positive feedback and modeling the necessary skills, and the child's role as being responsible for asking and learning.
The term "unschooling" as coined by John Holt describes an approach in which parents do not authoritatively direct the child's education, but interact with the child following the child's own interests, leaving them free to explore and learn as their interests lead. "Unschooling" does not indicate that the child is not being educated, but that the child is not being "schooled", or educated in a rigid school-type manner. Holt asserted that children learn through the experiences of life, and he encouraged parents to live their lives with their child. Also known as interest-led or child-led learning, unschooling attempts to follow opportunities as they arise in real life, through which a child will learn without coercion. Children at school learn from 1 teacher and 2 auxiliary teachers in a classroom of approximately 30. Kids have the opportunity of dedicated education at home with a ratio of 1 to 1. An unschooled child may utilize texts or classroom instruction, but these are not considered central to education. Holt asserted that there is no specific body of knowledge that is, or should be, required of a child.
Both unschooling and natural learning advocates believe that children learn best by doing; a child may learn reading to further an interest about history or other cultures, or math skills by operating a small business or sharing in family finances. They may learn animal husbandry keeping dairy goats or meat rabbits, botany tending a kitchen garden, chemistry to understand the operation of firearms or the internal combustion engine, or politics and local history by following a zoning or historical-status dispute. While any type of homeschoolers may also use these methods, the unschooled child initiates these learning activities. The natural learner participates with parents and others in learning together.
Another prominent proponent of unschooling is John Taylor Gatto, author of Dumbing Us Down, The Exhausted School, A Different Kind of Teacher, and Weapons of Mass Instruction. Gatto argues that public education is the primary tool of "state-controlled consciousness" and serves as a prime illustration of the total institution — a social system which impels obedience to the state and quells free-thinking or dissent.
Autonomous learning is a school of education which sees learners as individuals who can and should be autonomous, i.e. responsible for their own learning climate.
Autonomous education helps students develop their self-consciousness, vision, practicality, and freedom of discussion. These attributes serve to aid the student in his/her independent learning. However, a student need not start autonomous learning entirely alone: it is said that first interacting with someone who has more knowledge of a subject speeds up the student's learning and hence allows them to learn more independently.
Some degree of autonomous learning is popular with those who home educate their children. In true autonomous learning, the child usually gets to decide what projects they wish to tackle or what interests to pursue. In home education, this can be instead of or in addition to regular subjects such as math or English.
According to Home Education UK, the autonomous education philosophy emerged from the epistemology of Karl Popper in "The Myth of the Framework: In Defence of Science and Rationality", which is developed in the debates, which seek to rebut the neo-Marxist social philosophy of convergence proposed by the Frankfurt School (e.g. Theodor W. Adorno, Jürgen Habermas, Max Horkheimer).
A homeschool cooperative is a cooperative of families who homeschool their children. It provides an opportunity for children to learn from other parents who are more specialized in certain areas or subjects. Co-ops also provide social interaction. They may take lessons together or go on field trips. Some co-ops also offer events such as prom and graduation for homeschoolers.
Homeschoolers are beginning to utilize Web 2.0 as a way to simulate homeschool cooperatives online. With social networks, homeschoolers can chat, discuss threads in forums, share information and tips, and even participate in online classes via blackboard systems similar to those used by colleges.
According to the Home School Legal Defense Association (HSLDA) in 2004, "Many studies over the last few years have established the academic excellence of homeschooled children." "Home Schooling Achievement", a compilation of studies published by the HSLDA, supported the academic integrity of homeschooling. This booklet summarized a 1997 study by Ray and the 1999 Rudner study. The Rudner study noted two limitations of its own research: it is not necessarily representative of all homeschoolers and it is not a comparison with other schooling methods. Among the homeschooled students who took the tests, the average homeschooled student outperformed his public school peers by 30 to 37 percentile points across all subjects. The study also indicates that public school performance gaps between minorities and genders were virtually non-existent among the homeschooled students who took the tests.
A survey of 11,739 homeschooled students conducted in 2008 found that, on average, the homeschooled students scored 37 percentile points above public school students on standardized achievement tests. This is consistent with the 1999 Rudner study. However, Rudner said that these same students in public school may have scored just as well because of the dedicated parents they had. The Ray study also found that homeschooled students who had a certified teacher as a parent scored one percentile lower than homeschooled students who did not have a certified teacher as a parent. Another nationwide descriptive study conducted by Ray contained students ranging from ages 5–18 and he found that homeschoolers scored in at least the 80th percentile on their tests.
In 2011, a quasi-experimental study was conducted that included homeschooled and traditional public students between the ages of 5 and 10. It was discovered that the majority of the homeschooled children achieved higher standardized scores compared to their counterparts. However, Martin-Chang also found that unschooling children ages 5–10 scored significantly below traditionally educated children, while academically-oriented homeschooled children scored from one half grade level above to 4.5 grade levels above traditionally schooled children on standardized tests (n=37 homeschooled children matched with children from the same socioeconomic and educational background).
Studies have also examined the impact of homeschooling on students' GPAs. Cogan (2010) found that homeschooled students had higher high school GPAs (3.74) and transfer GPAs (3.65) than conventional students. Snyder (2013) provided corroborating evidence that homeschoolers were outperforming their peers in the areas of standardized tests and overall GPAs. Looking beyond high school, a 1990 study by the National Home Education Research Institute (as cited by Wichers, 2001) found that at least 33% of homeschooled students attended a four-year college, and 17% attended a two-year college. This same study examined the students after one year, finding that 17% pursued higher education. Thus, the data indicate that homeschooling can also prepare students for success in higher education.
On average, studies suggest homeschoolers score at or above the national average on standardized tests. Homeschool students have been accepted into many Ivy League universities. However, The Coalition for Responsible Homeschooling notes that "Our knowledge of homeschooling’s effect on academic achievement is limited by the fact that many of the studies that have been conducted on homeschoolers suffer from methodological problems which make their findings inconclusive."
Homeschooled children may receive more individualized attention than students enrolled in traditional public schools. A 2011 study suggests that a structured environment could play a key role in homeschooler academic achievement. This means that parents were highly involved in their child's education and they were creating clear educational goals. In addition, these students were being offered organized lesson plans which are either self-made or purchased.
A study conducted by Ray (2010), indicates that the higher the level of parents' income, the more likely the homeschooled child is able to achieve academic success.
In the 1970s, Raymond and Dorothy Moore conducted four federally funded analyses of more than 8,000 early childhood studies, from which they published their original findings in "Better Late Than Early", 1975. This was followed by "School Can Wait", a repackaging of these same findings designed specifically for educational professionals. They concluded that "where possible, children should be withheld from formal schooling until at least ages eight to ten." Their reason was that children "are not mature enough for formal school programs until their senses, coordination, neurological development and cognition are ready". They concluded that the outcome of forcing children into formal schooling is a sequence of "1) uncertainty as the child leaves the family nest early for a less secure environment, 2) puzzlement at the new pressures and restrictions of the classroom, 3) frustration because unready learning tools – senses, cognition, brain hemispheres, coordination – cannot handle the regimentation of formal lessons and the pressures they bring, 4) hyperactivity growing out of nerves and jitter, from frustration, 5) failure which quite naturally flows from the four experiences above, and 6) delinquency which is failure's twin and apparently for the same reason." According to the Moores, "early formal schooling is burning out our children. Teachers who attempt to cope with these youngsters also are burning out." Aside from academic performance, they think early formal schooling also destroys "positive sociability", encourages peer dependence, and discourages self-worth, optimism, respect for parents, and trust in peers. They believe this situation is particularly acute for boys because of their delay in maturity. 
The Moores cited a Smithsonian Report on the development of genius, indicating a requirement for "1) much time spent with warm, responsive parents and other adults, 2) very little time spent with peers, and 3) a great deal of free exploration under parental guidance." Their analysis suggested that children need "more of home and less of formal school", "more free exploration with... parents, and fewer limits of classroom and books", and "more old fashioned chores – children working with parents – and less attention to rivalry sports and amusements."
Along with positive school outcomes, homeschooled youth are also less likely to use and abuse illicit substances and are more likely to disapprove of using alcohol and marijuana.
There are claims that studies showing homeschooled students doing better on standardized tests are not directly comparable with mandatory public-school testing, because testing is voluntary for homeschooled students.
By contrast, SAT and ACT tests are self-selected by homeschooled and formally schooled students alike. Some homeschoolers averaged higher scores on these college entrance tests in South Carolina. Other scores (1999 data) showed mixed results, for example showing higher levels for homeschoolers in English (homeschooled 23.4 vs national average 20.5) and reading (homeschooled 24.4 vs national average 21.4) on the ACT, but mixed scores in math (homeschooled 20.4 vs national average 20.7 on the ACT as opposed homeschooled 535 vs national average 511 on the 1999 SAT math).
Some advocates of homeschooling and educational choice counter with an input-output theory, pointing out that home educators expend only an average of $500–$600 a year on each student (not counting the cost of the parents' time), in comparison to $9,000–$10,000 (including the cost of staff time) for each public school student in the United States, which suggests home-educated students would be especially dominant on tests if afforded access to an equal commitment of tax-funded educational resources.
Many teachers and school districts oppose the idea of homeschooling. However, research has shown that homeschooled children often excel in many areas of academic endeavor. According to a study done on the homeschool movement, homeschoolers often achieve academic success and admission into elite universities. There is also evidence that most are remarkably well socialized. According to the National Home Education Research Institute president, Brian Ray, socialization is not a problem for homeschooling children, many of whom are involved in community sports, volunteer activities, book groups, or homeschool co-ops.
Using the Piers-Harris Children's Self-Concept Scale, John Taylor later found that, "while half of the conventionally schooled children scored at or below the 50th percentile (in self-concept), only 10.3% of the home-schooling children did so." He further stated that "the self-concept of home-schooling children is significantly higher statistically than that of children attending conventional school. This has implications in the areas of academic achievement and socialization which have been found to parallel self-concept." Regarding socialization, Taylor's results would mean that very few home-schooling children are socially deprived. He states that critics who speak out against homeschooling on the basis of social deprivation are actually addressing an area which favors homeschoolers.
In 2003, the National Home Education Research Institute conducted a survey of 7,300 U.S. adults who had been homeschooled (5,000 for more than seven years). Their findings included:
Research by Richard G. Medlin, Ph.D., found that homeschooled children have better social skills than children attending traditional schools.
Opposition to homeschooling comes from some organizations of teachers and school districts. The National Education Association, a United States teachers' union and professional association, opposes homeschooling.
Stanford political scientist Professor Rob Reich wrote in "The Civic Perils of Homeschooling" (2002) that homeschooling can result in students with one-sided points of view, as many homeschooling parents view the education of their children as a matter properly under their control and no one else's. A 2014 study, however, found that greater exposure to homeschooling was associated with more political tolerance.
Gallup polls of American voters have shown a significant change in attitude in the last 20 years, from 73% opposed to home education in 1985 to 54% opposed in 2001. In 1988, when asked whether parents should have a right to choose homeschooling, 53 percent thought that they should, as revealed by another poll.
Homeschooling is legal in many countries. Countries with the most prevalent home education movements include Australia, Canada, New Zealand, the United Kingdom, Mexico, Chile and the United States. Some countries have highly regulated home education programs as an extension of the compulsory school system; a few others, such as Germany, have outlawed it entirely. In other countries, while not restricted by law, homeschooling is not socially acceptable or considered desirable and is virtually non-existent. | https://en.wikipedia.org/wiki?curid=13603
Heteroatom
In chemistry, a heteroatom (from Ancient Greek "heteros", "different", + "atomos", "uncut") is, strictly, any atom that is not carbon or hydrogen.
In practice, the term is usually used more specifically, to indicate that non-carbon atoms have replaced carbon in the backbone of the molecular structure. Typical heteroatoms are nitrogen (N), oxygen (O), sulfur (S), phosphorus (P), chlorine (Cl), bromine (Br), and iodine (I), as well as the metals lithium (Li) and magnesium (Mg).
It can also be used with highly specific meanings in specialised contexts. In the description of protein structure, in particular in the Protein Data Bank file format, a heteroatom record (HETATM) describes an atom as belonging to a small molecule cofactor rather than being part of a biopolymer chain.
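A minimal sketch of how the HETATM/ATOM distinction appears in practice when reading a PDB-format file: record names occupy the fixed columns 1–6 of each line. The sample lines below are invented for illustration, not taken from a real structure.

```python
# Classify PDB-format lines by record type: ATOM lines belong to the
# biopolymer chain, HETATM lines to cofactors, ligands, waters, etc.
sample = """\
ATOM      1  N   MET A   1      38.0  13.2  11.1  1.00 20.0           N
HETATM  900  FE  HEM A 155      12.5   4.3   9.8  1.00 15.0          FE
HETATM  901  O   HOH A 200       8.1   2.2   7.7  1.00 30.0           O
"""

def split_records(pdb_text):
    polymer, hetero = [], []
    for line in pdb_text.splitlines():
        record = line[:6].strip()  # columns 1-6 hold the record name
        if record == "ATOM":
            polymer.append(line)
        elif record == "HETATM":
            hetero.append(line)
    return polymer, hetero

polymer, hetero = split_records(sample)
print(len(polymer), len(hetero))  # 1 polymer atom, 2 heteroatom records
```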
In the context of zeolites, the term "heteroatom" refers to partial isomorphous substitution of the typical framework atoms (silicon, aluminium, and phosphorus) by other elements such as beryllium, vanadium, and chromium. The goal is usually to adjust properties of the material (e.g., Lewis acidity) to optimize the material for a certain application (e.g., catalysis). | https://en.wikipedia.org/wiki?curid=13605 |
Half-life
Half-life (symbol "t"1⁄2) is the time required for a quantity to reduce to half of its initial value. The term is commonly used in nuclear physics to describe how quickly unstable atoms undergo, or how long stable atoms survive, radioactive decay. The term is also used more generally to characterize any type of exponential or non-exponential decay. For example, the medical sciences refer to the biological half-life of drugs and other chemicals in the human body. The converse of half-life is doubling time.
The original term, "half-life period", dating to Ernest Rutherford's discovery of the principle in 1907, was shortened to "half-life" in the early 1950s. Rutherford applied the principle of a radioactive element's half-life to studies of age determination of rocks by measuring the decay period of radium to lead-206.
Half-life is constant over the lifetime of an exponentially decaying quantity, and it is a characteristic unit for the exponential decay equation. The accompanying table shows the reduction of a quantity as a function of the number of half-lives elapsed.
A half-life usually describes the decay of discrete entities, such as radioactive atoms. In that case, it does not work to use the definition that states "half-life is the time required for exactly half of the entities to decay". For example, if there is just one radioactive atom, and its half-life is one second, there will "not" be "half of an atom" left after one second.
Instead, the half-life is defined in terms of probability: "Half-life is the time required for exactly half of the entities to decay "on average"". In other words, the "probability" of a radioactive atom decaying within its half-life is 50%.
For example, the image on the right is a simulation of many identical atoms undergoing radioactive decay. Note that after one half-life there are not "exactly" one-half of the atoms remaining, only "approximately", because of the random variation in the process. Nevertheless, when there are many identical atoms decaying (right boxes), the law of large numbers suggests that it is a "very good approximation" to say that half of the atoms remain after one half-life.
Various simple exercises can demonstrate probabilistic decay, for example involving flipping coins or running a statistical computer program.
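Such an experiment can be sketched in a few lines of Python; this is a hypothetical simulation (not from the article) in which each surviving atom independently decays with probability 0.5 per half-life:

```python
import random

def simulate_decay(n_atoms, half_lives, p=0.5, seed=0):
    """Each elapsed half-life, every surviving atom independently
    decays with probability p (0.5 per half-life by definition)."""
    rng = random.Random(seed)
    remaining = n_atoms
    for _ in range(half_lives):
        remaining = sum(1 for _ in range(remaining) if rng.random() >= p)
    return remaining

# With many atoms, roughly half survive each half-life (law of large
# numbers); with a single atom the outcome is all-or-nothing.
survivors = simulate_decay(100_000, 1)
print(survivors)  # close to 50,000, but not exactly
```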
An exponential decay can be described by any of the following three equivalent formulas:

$$N(t) = N_0 \left(\tfrac{1}{2}\right)^{t/t_{1/2}} = N_0 e^{-t/\tau} = N_0 e^{-\lambda t}$$

where $N_0$ is the initial quantity of the substance that will decay, $N(t)$ is the quantity that still remains and has not yet decayed after a time $t$, $t_{1/2}$ is the half-life of the decaying quantity, $\tau$ is a positive number called the mean lifetime of the decaying quantity, and $\lambda$ is a positive number called the decay constant of the decaying quantity.

The three parameters $t_{1/2}$, $\tau$, and $\lambda$ are all directly related in the following way:

$$t_{1/2} = \frac{\ln(2)}{\lambda} = \tau \ln(2)$$

where ln(2) is the natural logarithm of 2 (approximately 0.693).
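These conversions between half-life, mean lifetime, and decay constant can be checked numerically; the sketch below uses the familiar carbon-14 half-life of about 5,730 years as an example:

```python
import math

def decay_params(t_half):
    """Given a half-life, return (mean lifetime tau, decay constant lam)."""
    lam = math.log(2) / t_half  # t_half = ln(2) / lam
    tau = 1.0 / lam             # t_half = tau * ln(2)
    return tau, lam

def remaining(n0, t, t_half):
    """Quantity left after time t of exponential decay."""
    return n0 * 0.5 ** (t / t_half)

tau, lam = decay_params(5730.0)       # carbon-14, in years
print(round(lam, 6))                  # decay constant ≈ 0.000121 per year
print(remaining(1000, 5730, 5730.0))  # exactly half remains after one half-life
```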
Some quantities decay by two exponential-decay processes simultaneously. In this case, the actual half-life $T_{1/2}$ can be related to the half-lives $t_1$ and $t_2$ that the quantity would have if each of the decay processes acted in isolation:

$$\frac{1}{T_{1/2}} = \frac{1}{t_1} + \frac{1}{t_2}$$

For three or more processes, the analogous formula is:

$$\frac{1}{T_{1/2}} = \frac{1}{t_1} + \frac{1}{t_2} + \frac{1}{t_3} + \cdots$$
For a proof of these formulas, see Exponential decay § Decay by two or more processes.
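For a quantity decaying by several independent exponential processes, the decay constants add, so the reciprocal half-lives add; a minimal sketch with made-up half-lives:

```python
def combined_half_life(*half_lives):
    """Half-life of a quantity decaying by several independent
    exponential processes: reciprocal half-lives add."""
    return 1.0 / sum(1.0 / t for t in half_lives)

# Two processes with half-lives 6 and 3 combine to a half-life of 2,
# since 1/6 + 1/3 = 1/2.
print(combined_half_life(6.0, 3.0))        # ≈ 2.0
print(combined_half_life(12.0, 6.0, 4.0))  # ≈ 2.0  (1/12 + 1/6 + 1/4 = 1/2)
```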
There is a half-life describing any exponential-decay process. For example:
The term "half-life" is almost exclusively used for decay processes that are exponential (such as radioactive decay or the other examples above), or approximately exponential (such as biological half-life discussed below). In a decay process that is not even close to exponential, the half-life will change dramatically while the decay is happening. In this situation it is generally uncommon to talk about half-life in the first place, but sometimes people will describe the decay in terms of its "first half-life", "second half-life", etc., where the first half-life is defined as the time required for decay from the initial value to 50%, the second half-life is from 50% to 25%, and so on.
A biological half-life or elimination half-life is the time it takes for a substance (drug, radioactive nuclide, or other) to lose one-half of its pharmacologic, physiologic, or radiological activity. In a medical context, the half-life may also describe the time that it takes for the concentration of a substance in blood plasma to reach one-half of its steady-state value (the "plasma half-life").
The relationship between the biological and plasma half-lives of a substance can be complex, due to factors including accumulation in tissues, active metabolites, and receptor interactions.
While a radioactive isotope decays almost perfectly according to so-called "first order kinetics" where the rate constant is a fixed number, the elimination of a substance from a living organism usually follows more complex chemical kinetics.
For example, the biological half-life of water in a human being is about 9 to 10 days, though this can be altered by behavior and other conditions. The biological half-life of caesium in human beings is between one and four months.
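When elimination is approximately first-order, a half-life like these can be estimated from two concentration measurements; this is a sketch under that assumption, and the function name is illustrative:

```python
import math

def elimination_half_life(c1, c2, dt):
    """Estimate t1/2 from two plasma concentrations c1 -> c2 measured
    dt apart, assuming first-order (exponential) elimination."""
    k = math.log(c1 / c2) / dt     # elimination rate constant
    return math.log(2) / k

# A concentration falling from 80 to 20 units over 12 hours is exactly
# two halvings, so the estimated half-life is 6 hours.
print(round(elimination_half_life(80.0, 20.0, 12.0), 9))  # 6.0
```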
The concept of a half-life has also been utilized for pesticides in plants, and certain authors maintain that pesticide risk and impact assessment models rely on and are sensitive to information describing dissipation from plants.
In epidemiology, the concept of half-life can refer to the length of time for the number of incident cases in a disease outbreak to drop by half, particularly if the dynamics of the outbreak can be modeled exponentially.
Humus
In soil science, humus (derived in 1790–1800 from the Latin "humus", meaning earth or ground) denotes the fraction of soil organic matter that is amorphous and without the "cellular cake structure characteristic of plants, micro-organisms or animals". Humus significantly affects the bulk density of soil and contributes to its retention of moisture and nutrients. Although the terms compost and humus are sometimes used interchangeably, they are not the same: humus is created through anaerobic fermentation, whereas compost is produced by aerobic decomposition.
In agriculture, "humus" sometimes also is used to describe mature or natural compost extracted from a woodland or other spontaneous source for use as a soil conditioner. It is also used to describe a topsoil horizon that contains organic matter (humus type, humus form, humus profile).
Humus is the dark organic matter that forms in soil when dead plant and animal matter decays. Humus has many nutrients that improve the health of soil, nitrogen being the most important. The ratio of carbon to nitrogen (C:N) of humus is 10:1.
It is difficult to define humus precisely because it is a very complex substance which is not fully understood. Humus is different from decomposing soil organic matter. The latter looks rough and has visible remains of the original plant or animal matter. Fully humified humus, by contrast, has a uniformly dark, spongy, and jelly-like appearance, and is amorphous; it may gradually decompose over several years or persist for millennia. It has no determinate shape, structure, or quality. However, when examined under a microscope, humus may reveal tiny plant, animal, or microbial remains that have been mechanically, but not chemically, degraded. This suggests an ambiguous boundary between humus and soil organic matter. While distinct, humus is an integral part of soil organic matter.
Microorganisms decompose a large portion of the soil organic matter into inorganic minerals that the roots of plants can absorb as nutrients. This process is termed "mineralization". In this process, nitrogen (nitrogen cycle) and the other nutrients (nutrient cycle) in the decomposed organic matter are recycled. Depending on the conditions in which the decomposition occurs, a fraction of the organic matter does not mineralize, and instead is transformed by a process called "humification" into concatenations of organic polymers. Because these organic polymers are resistant to the action of microorganisms, they are stable, and constitute "humus". This stability implies that humus integrates into the permanent structure of the soil, thereby improving it.
Humification can occur naturally in soil or artificially in the production of compost. Organic matter is humified by a combination of saprotrophic fungi, bacteria, microbes and animals such as earthworms, nematodes, protozoa, and arthropods. Plant remains, including those that animals digested and excreted, contain organic compounds: sugars, starches, proteins, carbohydrates, lignins, waxes, resins, and organic acids. Decay in the soil begins with the decomposition of sugars and starches from carbohydrates, which decompose easily as detritivores initially invade the dead plant organs, while the remaining cellulose and lignin decompose more slowly. Simple proteins, organic acids, starches, and sugars decompose rapidly, while crude proteins, fats, waxes, and resins remain relatively unchanged for longer periods of time. Lignin, which is quickly transformed by white-rot fungi, is one of the primary precursors of humus, together with by-products of microbial and animal activity. The humus produced by humification is thus a mixture of compounds and complex biological chemicals of plant, animal, or microbial origin that has many functions and benefits in soil. Some judge earthworm humus (vermicompost) to be the optimal organic manure.
Much of the humus in most soils has persisted for more than 100 years, rather than having been decomposed into CO2, and can be regarded as stable; this organic matter has been protected from decomposition by microbial or enzyme action because it is hidden (occluded) inside small aggregates of soil particles, or tightly sorbed or complexed to clays. Most humus that is not protected in this way is decomposed within 10 years and can be regarded as less stable or more labile. Stable humus contributes few plant-available nutrients in soil, but it helps maintain its physical structure. A very stable form of humus is formed from the slow oxidation of soil carbon after the incorporation of finely powdered charcoal into the topsoil. This process is speculated to have been important in the formation of the very fertile Amazonian "terra preta do Indio".
Humus has a characteristic black or dark brown color and is organic due to an accumulation of organic carbon. Soil scientists use the capital letters O, A, B, C, and E to identify the master horizons, and lowercase letters for distinctions of these horizons. Most soils have three major horizons: the surface horizon (A), the subsoil (B), and the substratum (C). Some soils have an organic horizon (O) on the surface, but this horizon can also be buried. The master horizon (E) is used for subsurface horizons that have significantly lost minerals (eluviation). Bedrock, which is not soil, uses the letter R.
The importance of chemically stable humus is thought by some to be the fertility it provides to soils in both a physical and chemical sense, though some agricultural experts put a greater focus on other features of it, such as its ability to suppress disease. It helps the soil retain moisture by increasing microporosity, and encourages the formation of good soil structure. The incorporation of oxygen into large organic molecular assemblages generates many active, negatively charged sites that bind to positively charged ions (cations) of plant nutrients, making them more available to the plant by way of ion exchange. Humus allows soil organisms to feed and reproduce, and is often described as the "life-force" of the soil.
Hydrogen bond
A hydrogen bond (often informally abbreviated H-bond) is a partial intermolecular bonding interaction between a lone pair on an electron rich donor atom, particularly the second-row elements nitrogen (N), oxygen (O), or fluorine (F), and the antibonding molecular orbital of a bond between hydrogen (H) and a more electronegative atom or group. Such an interacting system is generally denoted Dn–H···Ac, where the solid line denotes a polar covalent bond, and the dotted or dashed line indicates the hydrogen bond. The use of three centered dots for the hydrogen bond is specifically recommended by the IUPAC. While hydrogen bonding has both covalent and electrostatic contributions, and the degrees to which they contribute are currently debated, the present evidence indicates that the primary contribution is electrostatic, with a partial covalent character.
Hydrogen bonds can be intermolecular (occurring between separate molecules) or intramolecular (occurring among parts of the same molecule). Depending on the nature of the donor and acceptor atoms which constitute the bond, their geometry, and environment, the energy of a hydrogen bond can vary between 1 and 40 kcal/mol. This makes them somewhat stronger than a van der Waals interaction, and weaker than fully covalent or ionic bonds. This type of bond can occur in inorganic molecules such as water and in organic molecules like DNA and proteins.
The hydrogen bond is responsible for many of the anomalous physical and chemical properties of compounds of N, O, and F. In particular, intermolecular hydrogen bonding is responsible for the high boiling point of water (100 °C) compared to the other group 16 hydrides that have much weaker hydrogen bonds. Intramolecular hydrogen bonding is partly responsible for the secondary and tertiary structures of proteins and nucleic acids. It also plays an important role in the structure of polymers, both synthetic and natural.
Weaker hydrogen bonds are known for hydrogen atoms bound to elements such as sulfur (S) or chlorine (Cl); even carbon (C) can serve as a donor, particularly when the carbon or one of its neighbors is electronegative (e.g., in chloroform, aldehydes and terminal acetylenes). Gradually, it was recognized that there are many examples of weaker hydrogen bonding involving donors other than N, O, or F and/or acceptors Ac with electronegativity approaching that of hydrogen (rather than being much more electronegative). Though these "non-traditional" hydrogen bonding interactions are often quite weak (~1 kcal/mol), they are also ubiquitous and are increasingly recognized as important control elements in receptor-ligand interactions in medicinal chemistry or intra-/intermolecular interactions in materials sciences. The definition of hydrogen bonding has gradually broadened over time to include these weaker attractive interactions. In 2011, an IUPAC Task Group recommended a modern evidence-based definition of hydrogen bonding, which was published in the IUPAC journal "Pure and Applied Chemistry". This definition specifies:
As part of a more detailed list of criteria, the IUPAC publication acknowledges that the attractive interaction can arise from some combination of electrostatics (multipole-multipole and multipole-induced multipole interactions), covalency (charge transfer by orbital overlap), and dispersion (London forces), and states that the relative importance of each will vary depending on the system. However, a footnote to the criterion recommends the exclusion of interactions in which dispersion is the primary contributor, specifically giving Ar---CH4 and CH4---CH4 as examples of such interactions to be excluded from the definition.
Nevertheless, most introductory textbooks still restrict the definition of hydrogen bond to the "classical" type of hydrogen bond characterized in the opening paragraph.
A hydrogen atom attached to a relatively electronegative atom is the hydrogen bond "donor". C-H bonds only participate in hydrogen bonding when the carbon atom is bound to electronegative substituents, as is the case in chloroform, CHCl3. In a hydrogen bond, the electronegative atom not covalently attached to the hydrogen is named proton acceptor, whereas the one covalently bound to the hydrogen is named the proton donor. While this nomenclature is recommended by the IUPAC, it can be misleading, since in other donor-acceptor bonds, the donor/acceptor assignment is based on the source of the electron pair (such nomenclature is also used for hydrogen bonds by some authors). In the hydrogen bond donor, the H center is protic. The donor is a Lewis acid. Hydrogen bonds are represented as H···Y system, where the dots represent the hydrogen bond. Liquids that display hydrogen bonding (such as water) are called associated liquids.
The hydrogen bond is often described as an electrostatic dipole-dipole interaction. However, it also has some features of covalent bonding: it is directional and strong, produces interatomic distances shorter than the sum of the van der Waals radii, and usually involves a limited number of interaction partners, which can be interpreted as a type of valence. These covalent features are more substantial when acceptors bind hydrogens from more electronegative donors.
Hydrogen bonds can vary in strength from weak (1–2 kJ mol⁻¹) to strong (161.5 kJ mol⁻¹ in the bifluoride ion, [F···H···F]⁻). Typical enthalpies in vapor include:
The strength of intermolecular hydrogen bonds is most often evaluated by measurements of equilibria between molecules containing donor and/or acceptor units, most often in solution. The strength of intramolecular hydrogen bonds can be studied with equilibria between conformers with and without hydrogen bonds. The most important method for the identification of hydrogen bonds also in complicated molecules is crystallography, sometimes also NMR-spectroscopy. Structural details, in particular distances between donor and acceptor which are smaller than the sum of the van der Waals radii can be taken as indication of the hydrogen bond strength.
One scheme gives the following somewhat arbitrary classification: those that are 15 to 40 kcal/mol, 5 to 15 kcal/mol, and >0 to 5 kcal/mol are considered strong, moderate, and weak, respectively.
The X−H distance is typically ≈110 pm, whereas the H···Y distance is ≈160 to 200 pm. The typical length of a hydrogen bond in water is 197 pm. The ideal bond angle depends on the nature of the hydrogen bond donor. The following hydrogen bond angles between a hydrofluoric acid donor and various acceptors have been determined experimentally:
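Distance and angle criteria like these are often turned into a geometric screen for hydrogen bonds in structural data; the 3.5 Å donor–acceptor cutoff and 130° angle threshold below are one common choice, not values taken from this article:

```python
import math

# Geometric hydrogen-bond screen: donor-acceptor distance below a
# cutoff and a near-linear D-H···A angle (thresholds are illustrative).
def is_hydrogen_bond(d, h, a, max_da=3.5, min_angle=130.0):
    def dist(p, q):
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
    def angle(p, vertex, q):
        v1 = [pi - vi for pi, vi in zip(p, vertex)]
        v2 = [qi - vi for qi, vi in zip(q, vertex)]
        cos_t = sum(x * y for x, y in zip(v1, v2)) / (dist(p, vertex) * dist(q, vertex))
        return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))
    return dist(d, a) <= max_da and angle(d, h, a) >= min_angle

# A linear O-H···O arrangement at typical distances (coordinates in Å):
donor, hydrogen, acceptor = (0.0, 0.0, 0.0), (0.96, 0.0, 0.0), (2.8, 0.0, 0.0)
print(is_hydrogen_bond(donor, hydrogen, acceptor))  # True
```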
Strong hydrogen bonds are revealed by downfield shifts in the ¹H NMR spectrum. For example, the acidic proton in the enol tautomer of acetylacetone appears at δH 15.5, which is about 10 ppm downfield of a conventional alcohol.
In the IR spectrum, hydrogen bonding shifts the X-H stretching frequency to lower energy (i.e. the vibration frequency decreases). This shift reflects a weakening of the X-H bond. Certain hydrogen bonds - improper hydrogen bonds - show a blue shift of the X-H stretching frequency and a decrease in the bond length. H-bonds can also be measured by IR vibrational mode shifts of the acceptor. The amide I mode of backbone carbonyls in α-helices shifts to lower frequencies when they form H-bonds with side-chain hydroxyl groups.
Hydrogen bonding is of continuing theoretical interest. According to a modern description O:H-O integrates both the intermolecular O:H lone pair ":" nonbond and the intramolecular H-O polar-covalent bond associated with O-O repulsive coupling.
Quantum chemical calculations of the relevant interresidue potential constants (compliance constants) revealed large differences between individual H bonds of the same type. For example, the central interresidue N−H···N hydrogen bond between guanine and cytosine is much stronger in comparison to the N−H···N bond between the adenine-thymine pair.
Theoretically, the bond strength of the hydrogen bonds can be assessed using NCI index, non-covalent interactions index, which allows a visualization of these non-covalent interactions, as its name indicates, using the electron density of the system.
Interpretations of the anisotropies in the Compton profile of ordinary ice suggest that the hydrogen bond is partly covalent. However, this interpretation was challenged.
Most generally, the hydrogen bond can be viewed as a metric-dependent electrostatic scalar field between two or more intermolecular bonds. This is slightly different from the intramolecular bound states of, for example, covalent or ionic bonds; however, hydrogen bonding is generally still a bound state phenomenon, since the interaction energy has a net negative sum. The initial theory of hydrogen bonding proposed by Linus Pauling suggested that the hydrogen bonds had a partial covalent nature. This interpretation remained controversial until NMR techniques demonstrated information transfer between hydrogen-bonded nuclei, a feat that would only be possible if the hydrogen bond contained some covalent character.
The concept of hydrogen bonding once was challenging. Linus Pauling credits T. S. Moore and T. F. Winmill with the first mention of the hydrogen bond, in 1912. Moore and Winmill used the hydrogen bond to account for the fact that trimethylammonium hydroxide is a weaker base than tetramethylammonium hydroxide. The description of hydrogen bonding in its better-known setting, water, came some years later, in 1920, from Latimer and Rodebush. In that paper, Latimer and Rodebush cite work by a fellow scientist at their laboratory, Maurice Loyal Huggins, saying, "Mr. Huggins of this laboratory in some work as yet unpublished, has used the idea of a hydrogen kernel held between two atoms as a theory in regard to certain organic compounds."
A ubiquitous example of a hydrogen bond is found between water molecules. In a discrete water molecule, there are two hydrogen atoms and one oxygen atom. Two molecules of water can form a hydrogen bond between them, that is to say oxygen–hydrogen bonding; the simplest case, when only two molecules are present, is called the water dimer and is often used as a model system. When more molecules are present, as is the case with liquid water, more bonds are possible because the oxygen of one water molecule has two lone pairs of electrons, each of which can form a hydrogen bond with a hydrogen on another water molecule. This can repeat such that every water molecule is H-bonded with up to four other molecules, as shown in the figure (two through its two lone pairs, and two through its two hydrogen atoms). Hydrogen bonding strongly affects the crystal structure of ice, helping to create an open hexagonal lattice. The density of ice is less than the density of water at the same temperature; thus, the solid phase of water floats on the liquid, unlike most other substances.
Liquid water's high boiling point is due to the high number of hydrogen bonds each molecule can form, relative to its low molecular mass. Owing to the difficulty of breaking these bonds, water has a very high boiling point, melting point, and viscosity compared to otherwise similar liquids not conjoined by hydrogen bonds. Water is unique because its oxygen atom has two lone pairs and two hydrogen atoms, meaning that the total number of bonds of a water molecule is up to four.
The number of hydrogen bonds formed by a molecule of liquid water fluctuates with time and temperature. From TIP4P liquid water simulations at 25 °C, it was estimated that each water molecule participates in an average of 3.59 hydrogen bonds. At 100 °C, this number decreases to 3.24 due to the increased molecular motion and decreased density, while at 0 °C, the average number of hydrogen bonds increases to 3.69. A more recent study found a much smaller number of hydrogen bonds: 2.357 at 25 °C. The differences may be due to the use of a different method for defining and counting the hydrogen bonds.
Where the bond strengths are more equivalent, one might instead find the atoms of two interacting water molecules partitioned into two polyatomic ions of opposite charge, specifically hydroxide (OH−) and hydronium (H3O+). (Hydronium ions are also known as "hydroxonium" ions.)
Indeed, in pure water under conditions of standard temperature and pressure, this latter formulation is applicable only rarely; on average about one in every 5.5 × 10⁸ molecules gives up a proton to another water molecule, in accordance with the value of the dissociation constant for water under such conditions. It is a crucial part of the uniqueness of water.
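That figure follows from the ion product of water; a back-of-envelope check, assuming Kw = 1.0 × 10⁻¹⁴ at 25 °C and about 55.5 mol/L for pure water:

```python
# Back-of-envelope check of the "one in ~5.5 x 10^8" figure.
Kw = 1.0e-14                    # ion product of water, mol^2/L^2 (25 °C)
h3o = Kw ** 0.5                 # [H3O+] = 1.0e-7 mol/L in pure water
water = 55.5                    # molar concentration of liquid water, mol/L
print(f"{water / h3o:.2e}")     # 5.55e+08 molecules per ionized molecule
```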
Because water may form hydrogen bonds with solute proton donors and acceptors, it may competitively inhibit the formation of solute intermolecular or intramolecular hydrogen bonds. Consequently, hydrogen bonds between or within solute molecules dissolved in water are almost always unfavorable relative to hydrogen bonds between water and the donors and acceptors for hydrogen bonds on those solutes. Hydrogen bonds between water molecules have an average lifetime of 10⁻¹¹ seconds, or 10 picoseconds.
A single hydrogen atom can participate in two hydrogen bonds, rather than one. This type of bonding is called "bifurcated" (split in two or "two-forked"). It can exist, for instance, in complex natural or synthetic organic molecules. It has been suggested that a bifurcated hydrogen atom is an essential step in water reorientation.
Acceptor-type hydrogen bonds (terminating on an oxygen's lone pairs) are more likely to form bifurcation (it is called overcoordinated oxygen, OCO) than are donor-type hydrogen bonds, beginning on the same oxygen's hydrogens.
For example, hydrogen fluoride—which has three lone pairs on the F atom but only one H atom—can form only two bonds; (ammonia has the opposite problem: three hydrogen atoms but only one lone pair).
Hydrogen bonding plays an important role in determining the three-dimensional structures and the properties adopted by many synthetic and natural proteins. Compared to the C-C, C-O, and C-N bonds that comprise most polymers, hydrogen bonds are far weaker, perhaps 5%. Thus, hydrogen bonds can be broken by chemical or mechanical means while retaining the basic structure of the polymer backbone. This hierarchy of bond strengths (covalent bonds being stronger than hydrogen-bonds being stronger than van der Waals forces) is key to understanding the properties of many materials.
In these macromolecules, bonding between parts of the same macromolecule cause it to fold into a specific shape, which helps determine the molecule's physiological or biochemical role. For example, the double helical structure of DNA is due largely to hydrogen bonding between its base pairs (as well as pi stacking interactions), which link one complementary strand to the other and enable replication.
In the secondary structure of proteins, hydrogen bonds form between the backbone oxygens and amide hydrogens. When the spacing of the amino acid residues participating in a hydrogen bond occurs regularly between positions "i" and "i" + 4, an alpha helix is formed. When the spacing is less, between positions "i" and "i" + 3, then a 3₁₀ helix is formed. When two strands are joined by hydrogen bonds involving alternating residues on each participating strand, a beta sheet is formed. Hydrogen bonds also play a part in forming the tertiary structure of protein through interaction of R-groups. (See also protein folding).
Bifurcated H-bond systems are common in alpha-helical transmembrane proteins between the backbone amide C=O of residue "i" as the H-bond acceptor and two H-bond donors from residue "i+4": the backbone amide N-H and a side-chain hydroxyl or thiol H+. The energy preference of the bifurcated H-bond hydroxyl or thiol system is -3.4 kcal/mol or -2.6 kcal/mol, respectively. This type of bifurcated H-bond provides an intrahelical H-bonding partner for polar side-chains, such as serine, threonine, and cysteine within the hydrophobic membrane environments.
The role of hydrogen bonds in protein folding has also been linked to osmolyte-induced protein stabilization. Protective osmolytes, such as trehalose and sorbitol, shift the protein folding equilibrium toward the folded state, in a concentration-dependent manner. While the prevalent explanation for osmolyte action relies on excluded volume effects that are entropic in nature, recent circular dichroism (CD) experiments have shown osmolytes to act through an enthalpic effect. The molecular mechanism for their role in protein stabilization is still not well established, though several mechanisms have been proposed. Recently, computer molecular dynamics simulations suggested that osmolytes stabilize proteins by modifying the hydrogen bonds in the protein hydration layer.
Several studies have shown that hydrogen bonds play an important role for the stability between subunits in multimeric proteins. For example, a study of sorbitol dehydrogenase displayed an important hydrogen bonding network which stabilizes the tetrameric quaternary structure within the mammalian sorbitol dehydrogenase protein family.
A protein backbone hydrogen bond incompletely shielded from water attack is a dehydron. Dehydrons promote the removal of water through proteins or ligand binding. The exogenous dehydration enhances the electrostatic interaction between the amide and carbonyl groups by de-shielding their partial charges. Furthermore, the dehydration stabilizes the hydrogen bond by destabilizing the nonbonded state consisting of dehydrated isolated charges.
Wool, being a protein fibre, is held together by hydrogen bonds, causing wool to recoil when stretched. However, washing at high temperatures can permanently break the hydrogen bonds and a garment may permanently lose its shape.
Hydrogen bonds are important in the structure of cellulose and derived polymers in its many different forms in nature, such as cotton and flax.
Many polymers are strengthened by hydrogen bonds within and between the chains. Among the synthetic polymers, a well characterized example is nylon, where hydrogen bonds occur in the repeat unit and play a major role in crystallization of the material. The bonds occur between carbonyl and amine groups in the amide repeat unit. They effectively link adjacent chains, which help reinforce the material. The effect is great in aramid fibre, where hydrogen bonds stabilize the linear chains laterally. The chain axes are aligned along the fibre axis, making the fibres extremely stiff and strong.
The hydrogen-bond networks make both natural and synthetic polymers sensitive to humidity levels in the atmosphere because water molecules can diffuse into the surface and disrupt the network. Some polymers are more sensitive than others. Thus nylons are more sensitive than aramids, and nylon 6 more sensitive than nylon-11.
A symmetric hydrogen bond is a special type of hydrogen bond in which the proton is spaced exactly halfway between two identical atoms. The strength of the bond to each of those atoms is equal. It is an example of a three-center four-electron bond. This type of bond is much stronger than a "normal" hydrogen bond. The effective bond order is 0.5, so its strength is comparable to a covalent bond. It is seen in ice at high pressure, and also in the solid phase of many anhydrous acids such as hydrofluoric acid and formic acid at high pressure. It is also seen in the bifluoride ion [F--H--F]−. Due to severe steric constraint, the protonated form of Proton Sponge (1,8-bis(dimethylamino)naphthalene) and its derivatives also have symmetric hydrogen bonds ([N--H--N]+), although in the case of protonated Proton Sponge, the assembly is bent.
Symmetric hydrogen bonds have been observed recently spectroscopically in formic acid at high pressure (>GPa). Each hydrogen atom forms a partial covalent bond with two atoms rather than one. Symmetric hydrogen bonds have been postulated in ice at high pressure (Ice X). Low-barrier hydrogen bonds form when the distance between two heteroatoms is very small.
The hydrogen bond can be compared with the closely related dihydrogen bond, which is also an intermolecular bonding interaction involving hydrogen atoms. These structures have been known for some time, and well characterized by crystallography; however, an understanding of their relationship to the conventional hydrogen bond, ionic bond, and covalent bond remains unclear. Generally, the hydrogen bond is characterized by a proton acceptor that is a lone pair of electrons in nonmetallic atoms (most notably in the nitrogen, and chalcogen groups). In some cases, these proton acceptors may be pi-bonds or metal complexes. In the dihydrogen bond, however, a metal hydride serves as a proton acceptor, thus forming a hydrogen-hydrogen interaction. Neutron diffraction has shown that the molecular geometry of these complexes is similar to hydrogen bonds, in that the bond length is very adaptable to the metal complex/hydrogen donor system.
The dynamics of hydrogen bond structures in water can be probed by the IR spectrum of OH stretching vibration. In the hydrogen bonding network in protic organic ionic plastic crystals (POIPCs), which are a type of phase change material exhibiting solid-solid phase transitions prior to melting, variable-temperature infrared spectroscopy can reveal the temperature dependence of hydrogen bonds and the dynamics of both the anions and the cations. The sudden weakening of hydrogen bonds during the solid-solid phase transition seems to be coupled with the onset of orientational or rotational disorder of the ions.
Hydrogen bonding is a key to the design of drugs. According to Lipinski's rule of five, the majority of orally active drugs tend to have no more than five hydrogen bond donors and no more than ten hydrogen bond acceptors. These interactions exist between nitrogen–hydrogen and oxygen–hydrogen centers. As with many other rules of thumb, many exceptions exist.
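A rule-of-five check can be sketched from precomputed descriptor counts; the thresholds are the rule's own, while the example molecule's numbers are illustrative, not measured values:

```python
# Lipinski rule-of-five screen on precomputed descriptors.
def passes_lipinski(mol):
    return (mol["h_donors"] <= 5 and mol["h_acceptors"] <= 10
            and mol["mol_weight"] <= 500 and mol["logp"] <= 5)

# Illustrative descriptor values for a small drug-like molecule:
candidate = {"h_donors": 1, "h_acceptors": 4, "mol_weight": 180.2, "logp": 1.2}
print(passes_lipinski(candidate))  # True
```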
Heraldry
Heraldry () is a broad term, encompassing the design, display, and study of armorial bearings (known as "armory"), as well as related disciplines, such as vexillology, together with the study of ceremony, rank, and pedigree. Armory, the best-known branch of heraldry, concerns the design and transmission of the heraldic achievement. The achievement, or armorial bearings, usually includes a coat of arms on a shield, helmet, and crest, together with any accompanying devices, such as supporters, badges, heraldic banners, and mottoes.
Although the use of various devices to signify individuals and groups goes back to antiquity, both the form and use of such devices varied widely, and the concept of regular, hereditary designs, constituting the distinguishing feature of heraldry, did not develop until the High Middle Ages. It is very often claimed that the use of helmets with face guards during this period made it difficult to recognize one's commanders in the field when large armies gathered together for extended periods, necessitating the development of heraldry as a symbolic language, but there is very little actual support for this view.
The beauty and pageantry of heraldic designs allowed them to survive the gradual abandonment of armour on the battlefield during the seventeenth century. Heraldry has been described poetically as "the handmaid of history", "the shorthand of history", and "the floral border in the garden of history". In modern times, individuals, public and private organizations, corporations, cities, towns, and regions use heraldry and its conventions to symbolize their heritage, achievements, and aspirations.
Various symbols have been used to represent individuals or groups for thousands of years. The earliest representations of distinct persons and regions in Egyptian art show the use of standards topped with the images or symbols of various gods, and the names of kings appear upon emblems known as serekhs, representing the king's palace, and usually topped with a falcon representing the god Horus, of whom the king was regarded as the earthly incarnation. Similar emblems and devices are found in ancient Mesopotamian art of the same period, and the precursors of heraldic beasts such as the griffin can also be found. In the Bible, the Book of Numbers refers to the standards and ensigns of the children of Israel, who were commanded to gather beneath these emblems and declare their pedigrees. The Greek and Latin writers frequently describe the shields and symbols of various heroes, and units of the Roman army were sometimes identified by distinctive markings on their shields.
Until the nineteenth century, it was common for heraldic writers to cite examples such as these, and metaphorical symbols such as the "Lion of Judah" or "Eagle of the Caesars" as evidence of the antiquity of heraldry itself; and to infer therefrom that the great figures of ancient history bore arms representing their noble status and descent. The Book of Saint Albans, compiled in 1486, declares that Christ himself was a gentleman of coat armour. But these fabulous claims have long since been dismissed as the fantasy of medieval heralds, for there is no evidence of a distinctive symbolic language akin to that of heraldry during this early period; nor do many of the shields described in antiquity bear a close resemblance to those of medieval heraldry; nor is there any evidence that specific symbols or designs were passed down from one generation to the next, representing a particular person or line of descent.
The medieval heralds also devised arms for various knights and lords from history and literature. Notable examples include the toads attributed to Pharamond, the cross and martlets of Edward the Confessor, and the various arms attributed to the Nine Worthies and the Knights of the Round Table. These too are now regarded as fanciful inventions, rather than evidence of the antiquity of heraldry.
The development of the modern heraldic language cannot be attributed to a single individual, time, or place. Although certain designs that are now considered heraldic were evidently in use during the eleventh century, most accounts and depictions of shields up to the beginning of the twelfth century contain little or no evidence of their heraldic character. For example, the Bayeux Tapestry, illustrating the Norman invasion of England in 1066, and probably commissioned about 1077, when the cathedral of Bayeux was rebuilt, depicts a number of shields of various shapes and designs, many of which are plain, while others are decorated with dragons, crosses, or other typically heraldic figures. Yet no individual is depicted twice bearing the same arms, nor are any of the descendants of the various persons depicted known to have borne devices resembling those in the tapestry.
Similarly, an account of the French knights at the court of the Byzantine emperor Alexius I at the beginning of the twelfth century describes their shields of polished metal, utterly devoid of heraldic design. A Spanish manuscript from 1109 describes both plain and decorated shields, none of which appears to have been heraldic. The Abbey of St. Denis contained a window commemorating the knights who embarked on the Second Crusade in 1147, and was probably made soon after the event; but Montfaucon's illustration of the window before it was destroyed shows no heraldic design on any of the shields.
In England, from the time of the Norman conquest, official documents had to be sealed. Beginning in the twelfth century, seals assumed a distinctly heraldic character; a number of seals dating from between 1135 and 1155 appear to show the adoption of heraldic devices in England, France, Germany, Spain, and Italy. A notable example of an early armorial seal is attached to a charter granted by Philip I, Count of Flanders, in 1164. Seals from the latter part of the eleventh and early twelfth centuries show no evidence of heraldic symbolism, but by the end of the twelfth century, seals are uniformly heraldic in nature.
One of the earliest known examples of armory as it subsequently came to be practiced can be seen on the tomb of Geoffrey Plantagenet, Count of Anjou, who died in 1151. An enamel, probably commissioned by Geoffrey's widow between 1155 and 1160, depicts him carrying a blue shield decorated with six golden lions rampant. He wears a blue helmet adorned with another lion, and his cloak is lined in vair. A medieval chronicle states that Geoffrey was given a shield of this description when he was knighted by his father-in-law, Henry I, in 1128; but this account probably dates to about 1175.
The earlier heraldic writers attributed the lions of England to William the Conqueror, but the earliest evidence of the association of lions with the English crown is a seal bearing two lions passant, used by the future King John during the lifetime of his father, Henry II, who died in 1189. Since Henry was the son of Geoffrey Plantagenet, it seems reasonable to suppose that the adoption of lions as an heraldic emblem by Henry or his sons might have been inspired by Geoffrey's shield. John's elder brother, Richard the Lionheart, who succeeded his father on the throne, is believed to have been the first to have borne the arms of three lions passant-guardant, still the arms of England, having earlier used two lions rampant combatant, which arms may also have belonged to his father. Richard is also credited with having originated the English crest of a lion statant (now statant-guardant).
The origins of heraldry are sometimes associated with the Crusades, a series of military campaigns undertaken by Christian armies from 1096 to 1487, with the goal of reconquering Jerusalem and other former Byzantine territories captured by Muslim forces during the seventh century. While there is no evidence that heraldic art originated in the course of the Crusades, there is no reason to doubt that the gathering of large armies, drawn from across Europe for a united cause, would have encouraged the adoption of armorial bearings as a means of identifying one's commanders in the field, or that it helped disseminate the principles of armory across Europe. At least two distinctive features of heraldry are generally accepted as products of the crusaders: the surcoat and the lambrequin. The surcoat, an outer garment worn over the armour to protect the wearer from the heat of the sun, was often decorated with the same devices that appeared on the knight's shield; it is from this garment that the phrase "coat of arms" is derived. The lambrequin, or mantling, which depends from the helmet and frames the shield in modern heraldry, began as a practical covering for the helmet and the back of the neck during the Crusades, serving much the same function as the surcoat. Its slashed or scalloped edge, today rendered as billowing flourishes, is thought to have originated from hard wear in the field, or as a means of deadening a sword blow and perhaps entangling the attacker's weapon.
The spread of armorial bearings across Europe soon gave rise to a new occupation: the herald, originally a type of messenger employed by noblemen, assumed the responsibility of learning and knowing the rank, pedigree, and heraldic devices of various knights and lords, as well as the rules and protocols governing the design and description, or "blazoning" of arms, and the precedence of their bearers. As early as the late thirteenth century, certain heralds in the employ of monarchs were given the title "King of Heralds", which eventually became "King of Arms."
In the earliest period, arms were assumed by their bearers without any need for heraldic authority. However, by the middle of the fourteenth century, the principle that only a single individual was entitled to bear a particular coat of arms was generally accepted, and disputes over the ownership of arms seem to have led to the gradual establishment of heraldic authorities to regulate their use. The earliest known work of heraldic jurisprudence, "De Insigniis et Armis", was written about 1350 by Bartolus de Saxoferrato, a professor of law at the University of Padua. The most celebrated armorial dispute in English heraldry is that of "Scrope v Grosvenor" (1390), in which two different men claimed the right to bear "azure, a bend or". The continued proliferation of arms, and the number of disputes arising from different men assuming the same arms, led Henry V to issue a proclamation in 1419, forbidding all those who had not borne arms at the Battle of Agincourt from assuming arms, except by inheritance or a grant from the crown.
Beginning in the reign of Henry VIII of England, the English Kings of Arms were commanded to make "visitations", in which they traveled about the country, recording arms borne under proper authority, and requiring those who bore arms without authority either to obtain authority for them, or cease their use. Arms borne improperly were to be taken down and defaced. The first such visitation began in 1530, and the last was carried out in 1700, although no new commissions to carry out visitations were made after the accession of William III in 1689. There is very little evidence that Scottish heralds ever went on visitations.
In 1484, during the reign of Richard III, the various heralds employed by the crown were incorporated into England's College of Arms, through which all new grants of arms would eventually be issued. The college currently consists of three Kings of Arms, assisted by six Heralds, and four Pursuivants, or junior officers of arms, all under the authority of the Earl Marshal; but all of the arms granted by the college are granted by the authority of the crown. In Scotland, the Court of the Lord Lyon King of Arms oversees heraldry, and holds court sessions which are an official part of Scotland's court system.
Similar bodies regulate the granting of arms in other monarchies and several members of the Commonwealth of Nations, but in most other countries there is no heraldic authority, and no law preventing anyone from assuming whatever arms they please, provided that they do not infringe upon the arms of another.
Although heraldry originated from military necessity, it soon found itself at home in the pageantry of the medieval tournament. The opportunity for knights and lords to display their heraldic bearings in a competitive medium led to further refinements, such as the development of elaborate tournament helms, and further popularized the art of heraldry throughout Europe. Prominent burghers and corporations, including many cities and towns, assumed or obtained grants of arms, with only nominal military associations. Heraldic devices were depicted in various contexts, such as religious and funerary art, and in a wide variety of media, including stonework, carved wood, enamel, stained glass, and embroidery.
As the rise of firearms rendered the mounted knight increasingly irrelevant on the battlefield during the sixteenth and seventeenth centuries, and the tournament faded into history, the military character of heraldry gave way to its use as a decorative art. Freed from the limitations of actual shields and the need for arms to be easily distinguished in combat, heraldic artists designed increasingly elaborate achievements, culminating in the development of "landscape heraldry", incorporating realistic depictions of landscapes, during the latter part of the eighteenth and early part of the nineteenth century. These fell out of fashion during the mid-nineteenth century, when a renewed interest in the history of armory led to the re-evaluation of earlier designs, and a new appreciation for the medieval origins of the art. Since the late nineteenth century, heraldry has focused on the use of varied lines of partition and little-used ordinaries to produce new and unique designs.
A heraldic achievement consists of a shield of arms (the coat of arms, or simply coat), together with all of its accompanying elements, such as a crest, supporters, and other heraldic embellishments. The term "coat of arms" technically refers to the shield of arms itself, but the phrase is commonly used to refer to the entire achievement. The one indispensable element of a coat of arms is the shield; many ancient coats of arms consist of nothing else, but no achievement of armorial bearings exists without a coat of arms.
From a very early date, illustrations of arms were frequently embellished with helmets placed above the shields. These in turn came to be decorated with fan-shaped or sculptural crests, often incorporating elements from the shield of arms; as well as a wreath or torse, or sometimes a coronet, from which depended the lambrequin or mantling. To these elements, modern heraldry often adds a motto displayed on a ribbon, typically below the shield. The helmet is borne of right, and forms no part of a grant of arms; it may be assumed without authority by anyone entitled to bear arms, together with mantling and whatever motto the armiger may desire. The crest, however, together with the torse or coronet from which it arises, must be granted or confirmed by the relevant heraldic authority.
If the bearer is entitled to the ribbon, collar, or badge of a knightly order, it may encircle or depend from the shield. Some arms, particularly those of the nobility, are further embellished with supporters, heraldic figures standing alongside or behind the shield; often these stand on a compartment, typically a mound of earth and grass, on which other badges, symbols, or heraldic banners may be displayed. The most elaborate achievements sometimes display the entire coat of arms beneath a pavilion, an embellished tent or canopy of the type associated with the medieval tournament, though this is only very rarely found in English or Scottish achievements.
The primary element of an heraldic achievement is the shield, or escutcheon, upon which the coat of arms is depicted. All of the other elements of an achievement are designed to decorate and complement these arms, but only the shield of arms is required. The shape of the shield, like many other details, is normally left to the discretion of the heraldic artist, and many different shapes have prevailed during different periods of heraldic design, and in different parts of Europe.
One shape alone is normally reserved for a specific purpose: the lozenge, a diamond-shaped escutcheon, was traditionally used to display the arms of women, on the grounds that shields, as implements of war, were inappropriate for this purpose. This distinction was not always strictly adhered to, and a general exception was usually made for sovereigns, whose arms represented an entire nation. Sometimes an oval shield, or cartouche, was substituted for the lozenge; this shape was also widely used for the arms of clerics in French, Spanish, and Italian heraldry, although it was never reserved for their use. In recent years, the use of the cartouche for women's arms has become general in Scottish heraldry, while both Scottish and Irish authorities have permitted a traditional shield under certain circumstances, and in Canadian heraldry the shield is now regularly granted.
The whole surface of the escutcheon is termed the field, which may be plain, consisting of a single tincture, or divided into multiple sections of differing tinctures by various lines of partition; and any part of the field may be "semé", or powdered with small charges. The edges and adjacent parts of the escutcheon are used to identify the placement of various heraldic charges; the upper edge, and the corresponding upper third of the shield, are referred to as the chief; the lower part is the base. The sides of the shield are known as the dexter and sinister flanks, although it is important to note that these terms are based on the point of view of the bearer of the shield, who would be standing behind it; accordingly the side which is to the bearer's right is the dexter, and the side to the bearer's left is the sinister, although to the observer, and in all heraldic illustration, the dexter is on the left side, and the sinister on the right.
The placement of various charges may also refer to a number of specific points, nine in number according to some authorities, but eleven according to others. The three most important are "fess point", located in the visual center of the shield; the "honour point", located midway between fess point and the chief; and the "nombril point", located midway between fess point and the base. The other points include "dexter chief", "center chief", and "sinister chief", running along the upper part of the shield from left to right, above the honour point; "dexter flank" and "sinister flank", on the sides approximately level with fess point; and "dexter base", "middle base", and "sinister base" along the lower part of the shield, below the nombril point.
One of the most distinctive qualities of heraldry is the use of a limited palette of colours and patterns, usually referred to as tinctures. These are divided into three categories, known as "metals", "colours", and "furs".
The metals are "or" and "argent", representing gold and silver, respectively, although in practice they are usually depicted as yellow and white. Five colours are universally recognized: "gules", or red; "sable", or black; "azure", or blue; "vert", or green; and "purpure", or purple; and most heraldic authorities also admit two additional colours, known as "sanguine" or "murrey", a dark red or mulberry colour between gules and purpure, and "tenné", an orange or dark yellow to brown colour. These last two are quite rare, and are often referred to as "stains", from the belief that they were used to represent some dishonourable act, although in fact there is no evidence that this use existed outside the imagination of the more fanciful heraldic writers. Perhaps owing to the realization that there is really no such thing as a "stain" in genuine heraldry, as well as the desire to create new and unique designs, the use of these colours for general purposes has become accepted in the twentieth and twenty-first centuries. Occasionally one meets with other colours, particularly in continental heraldry, although they are not generally regarded among the standard heraldic colours. Among these are "cendrée", or ash-colour; "brunâtre", or brown; "bleu-céleste" or "bleu de ciel", sky blue; "amaranth" or "columbine", a bright violet-red or pink colour; and "carnation", commonly used to represent flesh in French heraldry. A more recent addition is the use of "copper" as a metal in one or two Canadian coats of arms.
There are two basic types of heraldic fur, known as ermine and vair, but over the course of centuries each has developed a number of variations. Ermine represents the fur of the stoat, a type of weasel, in its white winter coat, when it is called an ermine. It consists of a white, or occasionally silver field, powdered with black figures known as "ermine spots", representing the black tip of the animal's tail. Ermine was traditionally used to line the cloaks and caps of the nobility. The shape of the heraldic ermine spot has varied considerably over time, and nowadays is typically drawn as an arrowhead surmounted by three small dots, but older forms may be employed at the artist's discretion. When the field is sable and the ermine spots argent, the same pattern is termed "ermines"; when the field is "or" rather than argent, the fur is termed "erminois"; and when the field is sable and the ermine spots "or", it is termed "pean".
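The naming scheme for the ermine variants described above amounts to a simple lookup on the pair of field and spot tinctures. A minimal sketch in Python (the function name and structure are illustrative, not any standard heraldic notation):

```python
# The four named ermine variants, keyed by (field tincture, spot tincture).
ERMINE_VARIANTS = {
    ("argent", "sable"): "ermine",    # white field, black spots
    ("sable", "argent"): "ermines",   # black field, white spots
    ("or", "sable"): "erminois",      # gold field, black spots
    ("sable", "or"): "pean",          # black field, gold spots
}

def ermine_variant(field: str, spots: str) -> str:
    """Return the traditional name of the ermine fur with the given tinctures."""
    try:
        return ERMINE_VARIANTS[(field.lower(), spots.lower())]
    except KeyError:
        raise ValueError(f"no named ermine variant for {field} field, {spots} spots")
```

For example, `ermine_variant("sable", "or")` yields `"pean"`, matching the scheme above.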
Vair represents the winter coat of the red squirrel, which is blue-grey on top and white underneath. To form the linings of cloaks, the pelts were sewn together, forming an undulating, bell-shaped pattern, with interlocking light and dark rows. The heraldic fur is depicted with interlocking rows of argent and azure, although the shape of the pelts, usually referred to as "vair bells", is usually left to the artist's discretion. In the modern form, the bells are depicted with straight lines and sharp angles, and meet only at points; in the older, undulating pattern, now known as "vair ondé" or "vair ancien", the bells of each tincture are curved and joined at the base. There is no fixed rule as to whether the argent bells should be at the top or the bottom of each row. At one time vair commonly came in three sizes, and this distinction is sometimes encountered in continental heraldry; if the field contains fewer than four rows, the fur is termed "gros vair" or "beffroi"; if of six or more, it is "menu-vair", or miniver.
A common variation is "counter-vair", in which alternating rows are reversed, so that the bases of the vair bells of each tincture are joined to those of the same tincture in the row above or below. When the rows are arranged so that the bells of each tincture form vertical columns, it is termed "vair in pale"; in continental heraldry one may encounter "vair in bend", which is similar to vair in pale, but diagonal. When alternating rows are reversed as in counter-vair, and then displaced by half the width of one bell, it is termed "vair in point", or wave-vair. A form peculiar to German heraldry is "alternate vair", in which each vair bell is divided in half vertically, with half argent and half azure. All of these variations can also be depicted in the form known as "potent", in which the shape of the vair bell is replaced by a "T"-shaped figure, known as a potent from its resemblance to a crutch. Although it is really just a variation of vair, it is frequently treated as a separate fur.
When the same patterns are composed of tinctures other than argent and azure, they are termed "vairé" or "vairy" of those tinctures, rather than "vair"; "potenté" of other colours may also be found. Usually vairé will consist of one metal and one colour, but ermine or one of its variations may also be used, and vairé of four tinctures, usually two metals and two colours, is sometimes found.
Three additional furs are sometimes encountered in continental heraldry; in French and Italian heraldry one meets with "plumeté" or "plumetty", in which the field appears to be covered with feathers, and "papelonné", in which it is decorated with scales. In German heraldry one may encounter "kursch", or vair bellies, depicted as brown and furry; all of these probably originated as variations of vair.
Considerable latitude is given to the heraldic artist in depicting the heraldic tinctures; there is no fixed shade or hue to any of them.
Whenever an object is depicted as it appears in nature, rather than in one or more of the heraldic tinctures, it is termed "proper", or the colour of nature. This does not seem to have been done in the earliest heraldry, but examples are known from at least the seventeenth century. While there can be no objection to the occasional depiction of objects in this manner, the overuse of charges in their natural colours is often cited as indicative of bad heraldic practice. The much-maligned practice of landscape heraldry, which flourished in the latter part of the eighteenth and early part of the nineteenth century, made extensive use of such non-heraldic colours.
One of the most important conventions of heraldry is the so-called "rule of tincture". To provide for contrast and visibility, metals should never be placed on metals, and colours should never be placed on colours. This rule does not apply to charges which cross a division of the field, which is partly metal and partly colour; nor, strictly speaking, does it prevent a field from consisting of two metals or two colours, although this is unusual. Furs are considered amphibious, and neither metal nor colour; but in practice ermine and erminois are usually treated as metals, while ermines and pean are treated as colours. This rule is strictly adhered to in British armory, with only rare exceptions; although generally observed in continental heraldry, it is not adhered to quite as strictly. Arms which violate this rule are sometimes known as "puzzle arms", of which the most famous example is the arms of the Kingdom of Jerusalem, consisting of gold crosses on a silver field.
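The rule of tincture can be expressed as a small classification check. The sketch below, a simplified model of the convention just described, classes each tincture as metal, colour, or neutral (vair and "proper" are treated as neutral, while the ermine variants follow the practical treatment noted above); the tincture lists and function names are illustrative only:

```python
METALS = {"or", "argent"}
COLOURS = {"gules", "sable", "azure", "vert", "purpure", "sanguine", "murrey", "tenné"}
# In practice ermine and erminois are usually treated as metals,
# ermines and pean as colours; vair and "proper" are left neutral here.
FURS_AS_METAL = {"ermine", "erminois"}
FURS_AS_COLOUR = {"ermines", "pean"}

def tincture_class(tincture: str) -> str:
    t = tincture.lower()
    if t in METALS or t in FURS_AS_METAL:
        return "metal"
    if t in COLOURS or t in FURS_AS_COLOUR:
        return "colour"
    return "neutral"

def respects_rule_of_tincture(field: str, charge: str) -> bool:
    """True unless metal lies on metal or colour on colour."""
    a, b = tincture_class(field), tincture_class(charge)
    return "neutral" in (a, b) or a != b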
The field of a shield, or less often a charge or crest, is sometimes made up of a pattern of colours, or "variation". A pattern of horizontal (barwise) stripes, for example, is called "barry", while a pattern of vertical (palewise) stripes is called "paly". A pattern of diagonal stripes may be called "bendy" or "bendy sinister", depending on the direction of the stripes. Other variations include "chevrony", "gyronny" and "chequy". Wave-shaped stripes are termed "undy". For further variations, these are sometimes combined to produce patterns of "barry-bendy", "paly-bendy", "lozengy" and "fusilly". Semés, or patterns of repeated charges, are also considered variations of the field. The rule of tincture applies to all semés and variations of the field.
The field of a shield in heraldry can be divided into more than one tincture, as can the various heraldic charges. Many coats of arms consist simply of a division of the field into two contrasting tinctures. Because these are considered divisions of the field rather than charges placed upon it, the rule of tincture does not apply: a shield divided azure and gules, for example, is perfectly acceptable. A line of partition may be straight or it may be varied. The variations of partition lines can be wavy, indented, embattled, engrailed, nebuly, or made into myriad other forms; see Line (heraldry).
In the early days of heraldry, very simple bold rectilinear shapes were painted on shields. These could be easily recognized at a long distance and could be easily remembered. They therefore served the main purpose of heraldry: identification. As more complicated shields came into use, these bold shapes were set apart in a separate class as the "honorable ordinaries". They act as charges and are always written first in blazon. Unless otherwise specified they extend to the edges of the field. Though ordinaries are not easily defined, they are generally described as including the cross, the fess, the pale, the bend, the chevron, the saltire, and the pall.
There is a separate class of charges called sub-ordinaries which are of a geometrical shape subordinate to the ordinary. According to Friar, they are distinguished by their order in blazon. The sub-ordinaries include the inescutcheon, the orle, the tressure, the double tressure, the bordure, the chief, the canton, the label, and flaunches.
Ordinaries may appear in parallel series, in which case blazons in English give them different names such as pallets, bars, bendlets, and chevronels. French blazon makes no such distinction between these diminutives and the ordinaries when borne singly. Unless otherwise specified an ordinary is drawn with straight lines, but each may be indented, embattled, wavy, engrailed, or otherwise have their lines varied.
A charge is any object or figure placed on a heraldic shield or on any other object of an armorial composition. Any object found in nature or technology may appear as a heraldic charge in armory. Charges can be animals, objects, or geometric shapes. Apart from the ordinaries, the most frequent charges are the cross – with its hundreds of variations – and the lion and eagle. Other common animals are stags, wild boars, martlets, and fish. Dragons, bats, unicorns, griffins, and more exotic monsters appear as charges and as supporters.
Animals are found in various stereotyped positions or "attitudes". Quadrupeds can often be found rampant (standing on the left hind foot). Another frequent position is passant, or walking, like the lions of the coat of arms of England. Eagles are almost always shown with their wings spread, or displayed. A pair of wings conjoined is called a vol.
In English heraldry the crescent, mullet, martlet, annulet, fleur-de-lis, and rose may be added to a shield to distinguish cadet branches of a family from the senior line. These cadency marks are usually shown smaller than normal charges, but the presence of such a charge does not by itself mark a shield as belonging to a cadet branch, since all of these charges also occur frequently in basic undifferenced coats of arms.
To "marshal" two or more coats of arms is to combine them in one shield, to express inheritance, claims to property, or the occupation of an office. This can be done in a number of ways, of which the simplest is impalement: dividing the field "per pale" and putting one whole coat in each half. Impalement replaced the earlier dimidiation – combining the dexter half of one coat with the sinister half of another – because dimidiation can create ambiguity between, for example, a bend and a chevron. "Dexter" (from Latin "dextra", right) means to the right from the viewpoint of the bearer of the arms and "sinister" (from Latin "sinistra", left) means to the left. The dexter side is considered the side of greatest honour (see also Dexter and sinister).
A more versatile method is quartering, division of the field by both vertical and horizontal lines. This practice originated in Spain (Castile and León) after the 13th century. As the name implies, the usual number of divisions is four, but the principle has been extended to very large numbers of "quarters".
Quarters are numbered from the dexter chief (the corner nearest to the right shoulder of a man standing behind the shield), proceeding across the top row, and then across the next row and so on. When three coats are quartered, the first is repeated as the fourth; when only two coats are quartered, the second is also repeated as the third. The quarters of a personal coat of arms correspond to the ancestors from whom the bearer has inherited arms, normally in the same sequence as if the pedigree were laid out with the father's father's ... father (to as many generations as necessary) on the extreme left and the mother's mother's...mother on the extreme right. A few lineages have accumulated hundreds of quarters, though such a number is usually displayed only in documentary contexts. The Scottish and Spanish traditions resist allowing more than four quarters, preferring to subdivide one or more "grand quarters" into sub-quarters as needed.
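The repetition rule for filling out four quarters can be sketched directly from the description above: with two inherited coats the second repeats as the third and the first as the fourth, and with three coats the first repeats as the fourth. A minimal illustration (function name and input format are assumptions for the sketch):

```python
def quarters(coats: list[str]) -> list[str]:
    """Arrange two to four inherited coats into four quarters,
    numbered from dexter chief across each row."""
    if len(coats) == 2:
        a, b = coats
        return [a, b, b, a]  # second repeated as third, first as fourth
    if len(coats) == 3:
        a, b, c = coats
        return [a, b, c, a]  # first repeated as fourth
    if len(coats) == 4:
        return list(coats)
    raise ValueError("sketch handles only two to four coats")
```

Thus quartering the coats of two ancestors, A and B, yields the arrangement A, B / B, A.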
The third common mode of marshalling is with an inescutcheon, a small shield placed in front of the main shield. In Britain this is most often an "escutcheon of pretence" indicating, in the arms of a married couple, that the wife is an heraldic heiress (i.e., she inherits a coat of arms because she has no brothers). In continental Europe an inescutcheon (sometimes called a "heart shield") usually carries the ancestral arms of a monarch or noble whose domains are represented by the quarters of the main shield.
In German heraldry, animate charges in combined coats usually turn to face the centre of the composition.
In English the word "crest" is commonly (but erroneously) used to refer to an entire heraldic achievement of armorial bearings. The technical use of the heraldic term crest refers to just one component of a complete achievement. The crest rests on top of a helmet which itself rests on the most important part of the achievement: the shield.
The modern crest has grown out of the three-dimensional figure placed on the top of the mounted knights' helms as a further means of identification. In most heraldic traditions, a woman does not display a crest, though this tradition is being relaxed in some heraldic jurisdictions, and the stall plate of Lady Marion Fraser in the Thistle Chapel in St Giles, Edinburgh, shows her coat on a lozenge but with helmet, crest, and motto.
The crest is usually found on a wreath of twisted cloth and sometimes within a coronet. Crest-coronets are generally simpler than coronets of rank, but several specialized forms exist; for example, in Canada, descendants of the United Empire Loyalists are entitled to use a Loyalist military coronet (for descendants of members of Loyalist regiments) or Loyalist civil coronet (for others).
When the helm and crest are shown, they are usually accompanied by a mantling. This was originally a cloth worn over the back of the helmet as partial protection against heating by sunlight. Today it takes the form of a stylized cloak hanging from the helmet. Typically in British heraldry, the outer surface of the mantling is of the principal colour in the shield and the inner surface is of the principal metal, though peers in the United Kingdom use standard colourings (Gules doubled Argent - Red/White) regardless of rank or the colourings of their arms. The mantling is sometimes conventionally depicted with a ragged edge, as if damaged in combat, though the edges of most are simply decorated at the emblazoner's discretion.
Clergy often refrain from displaying a helm or crest in their heraldic achievements. Members of the clergy may display appropriate headwear. This often takes the form of a low-crowned, wide-brimmed hat called a galero, with the colours and tassels denoting rank; or, in the case of Papal coats of arms until the inauguration of Pope Benedict XVI in 2005, an elaborate triple crown known as a tiara. Benedict broke with tradition to substitute a mitre in his arms. Orthodox and Presbyterian clergy do sometimes adopt other forms of headgear to ensign their shields. In the Anglican tradition, clergy members may pass crests on to their offspring, but rarely display them on their own shields.
An armorial motto is a phrase or collection of words intended to describe the motivation or intention of the armigerous person or corporation. This can form a pun on the family name as in Thomas Nevile's motto "Ne vile velis". Mottoes are generally changed at will and do not make up an integral part of the armorial achievement. Mottoes can typically be found on a scroll under the shield. In Scottish heraldry, where the motto is granted as part of the blazon, it is usually shown on a scroll above the crest, and may not be changed at will. A motto may be in any language.
Supporters are human or animal figures or, very rarely, inanimate objects, usually placed on either side of a coat of arms as though supporting it. In many traditions, these have acquired strict guidelines for use by certain social classes. On the European continent, there are often fewer restrictions on the use of supporters. In the United Kingdom, only peers of the realm, a few baronets, senior members of orders of knighthood, and some corporate bodies are granted supporters. Often, these can have local significance or a historical link to the armiger.
If the armiger has the title of baron, hereditary knight, or higher, he may display a coronet of rank above the shield. In the United Kingdom, this is shown between the shield and helmet, though it is often above the crest in Continental heraldry.
Another addition that can be made to a coat of arms is the insignia of a baronet or of an order of knighthood. This is usually represented by a collar or similar band surrounding the shield. When the arms of a knight and his wife are shown in one achievement, the insignia of knighthood surround the husband's arms only, and the wife's arms are customarily surrounded by an ornamental garland of leaves for visual balance.
Since arms pass from parents to offspring, and there is frequently more than one child per couple, it is necessary to distinguish the arms of siblings and extended family members from the original arms as passed on from eldest son to eldest son. Over time several schemes have been used.
To "blazon" arms means to describe them using the formal language of heraldry. This language has its own vocabulary and syntax, or rules governing word order, which becomes essential for comprehension when blazoning a complex coat of arms. The verb comes from the Middle English "blasoun", itself a derivative of the French "blason" meaning "shield". The system of blazoning arms used in English-speaking countries today was developed by heraldic officers in the Middle Ages. The blazon includes a description of the arms contained within the escutcheon or shield, the crest, supporters where present, motto and other insignia. Complex rules, such as the rule of tincture, apply to the physical and artistic form of newly created arms, and a thorough understanding of these rules is essential to the art of heraldry. Though heraldic forms initially were broadly similar across Europe, several national styles had developed by the end of the Middle Ages, and artistic and blazoning styles today range from the very simple to extraordinarily complex.
The emergence of heraldry occurred across western Europe almost simultaneously in the various countries. Originally, heraldic style was very similar from country to country. Over time, heraldic tradition diverged into four broad styles: German-Nordic, Gallo-British, Latin, and Eastern. In addition it can be argued that newer national heraldic traditions, such as South African and Canadian, have emerged in the 20th century.
Coats of arms in Germany, the Nordic countries, Estonia, Latvia, the Czech lands and northern Switzerland generally change very little over time. Marks of difference are very rare in this tradition, as are heraldic furs. One of the most striking characteristics of German-Nordic heraldry is the treatment of the crest. Often, the same design is repeated in the shield and the crest. The use of multiple crests is also common. The crest is rarely used separately as in British heraldry, but can sometimes serve as a mark of difference between different branches of a family. The torse is optional. Heraldic courtoisie is observed: that is, charges in a composite shield (or two shields displayed together) usually turn to face the centre.
Coats consisting only of a divided field are somewhat more frequent in Germany than elsewhere.
The Low Countries were great centres of heraldry in medieval times. One of the famous armorials is the Gelre Armorial or "Wapenboek", written between 1370 and 1414.
Coats of arms in the Netherlands were not controlled by an official heraldic system like the two in the United Kingdom, nor were they used solely by noble families. Any person could develop and use a coat of arms if they wished to do so, provided they did not usurp someone else's arms, and historically, this right was enshrined in Roman Dutch law. As a result, many merchant families had coats of arms even though they were not members of the nobility. These are sometimes referred to as "burgher arms," and it is thought that most arms of this type were adopted while the Netherlands was a republic (1581–1806). This heraldic tradition was also exported to the erstwhile Dutch colonies.
Dutch heraldry is characterised by its simple and rather sober style, and in this sense, is closer to its medieval origins than the elaborate styles which developed in other heraldic traditions.
The use of cadency marks to difference arms within the same family and the use of semy fields are distinctive features of Gallo-British heraldry (in Scotland the most significant mark of cadency being the bordure, the small brisures playing a very minor role). It is common to see heraldic furs used. In the United Kingdom, the style is notably still controlled by royal officers of arms. French heraldry experienced a period of strict rules of construction under Napoleon. English and Scots heraldries make greater use of supporters than other European countries.
Furs, chevrons and five-pointed stars are more frequent in France and Britain than elsewhere.
The heraldry of southern France, Andorra, Spain, and Italy is characterized by a lack of crests and by uniquely shaped shields. Portuguese heraldry, however, does use crests. Portuguese and Spanish heraldry, which together form a larger Iberian tradition, occasionally introduce words to the shield of arms, a practice usually avoided in British heraldry. Latin heraldry is known for extensive use of quartering, because of armorial inheritance via both the male and the female lines. Moreover, Italian heraldry is dominated by the Roman Catholic Church, featuring many shields and achievements that bear some reference to the Church.
Trees are frequent charges in Latin arms. Charged bordures, including bordures inscribed with words, are seen often in Spain.
Eastern European heraldry comprises the traditions developed in Belarus, Bulgaria, Serbia, Croatia, Hungary, Romania, Lithuania, Poland, Slovakia, Ukraine, and Russia. Eastern coats of arms are characterized by a pronounced territorial, clan-based system – often, entire villages or military groups were granted the same coat of arms irrespective of family relationships. In Poland, nearly six hundred unrelated families are known to bear the same Jastrzębiec coat of arms. Marks of cadency are almost unknown, and shields are generally very simple, with only one charge. Many heraldic shields derive from ancient house marks. At least fifteen per cent of all Hungarian personal arms bear a severed Turk's head, referring to the wars against the Ottoman Empire.
True heraldry, as now generally understood, has its roots in medieval Europe. However, there have been other historical cultures which have used symbols and emblems to represent families or individuals, and in some cases these symbols have been adopted into Western heraldry. For example, the coat of arms of the Ottoman Empire incorporated the royal tughra as part of its crest, along with such traditional Western heraldic elements as the escutcheon and the compartment.
The ancient Greeks were among the first civilizations to use symbols consistently to identify a warrior, clan, or state. An early record of shield blazons appears in Aeschylus' tragedy "Seven Against Thebes".
"Mon", also "monshō", "mondokoro", and "kamon", are Japanese emblems used to decorate and identify an individual or family. While "mon" is an encompassing term that may refer to any such device, "kamon" and "mondokoro" refer specifically to emblems used to identify a family. An authoritative "mon" reference compiles Japan's 241 general categories of "mon" based on structural resemblance (a single "mon" may belong to multiple categories), with 5116 distinct individual "mon" (it is, however, well acknowledged that lost or obscure "mon" exist outside this compilation).
The devices are similar to the badges and coats of arms in European heraldic tradition, which likewise are used to identify individuals and families. "Mon" are often referred to as crests in Western literature, another European heraldic device similar to the "mon" in function.
Socialist heraldry, also called communist heraldry, consists of emblems in a style typically adopted by communist states and characterized by communist symbolism. Although commonly called "coats of arms", most such devices are not actually coats of arms in the traditional heraldic sense and should therefore, in a strict sense, not be called arms at all. Many communist governments purposely diverged from the traditional forms of European heraldry in order to distance themselves from the monarchies that they usually replaced, with actual coats of arms being seen as symbols of the monarchs.
The Soviet Union was the first state to use socialist heraldry, beginning at its creation in 1922. The style became more widespread after World War II, when many other communist states were established. Even a few non-socialist states have adopted the style, for various reasons—usually because communists had helped them to gain independence—but also when no apparent connection to a Communist nation exists, such as the emblem of Italy. After the fall of the Soviet Union and the other communist states in Eastern Europe in 1989–1991, this style of heraldry was often abandoned for the old heraldic practices, with many (but not all) of the new governments reinstating the traditional heraldry that was previously cast aside.
A tamga or tamgha ("stamp, seal"; Turkic: tamga) is an abstract seal or stamp used by Eurasian nomadic peoples and by cultures influenced by them. The tamga was normally the emblem of a particular tribe, clan or family. They were common among the Eurasian nomads throughout Classical Antiquity and the Middle Ages (including Alans, Mongols, Sarmatians, Scythians and Turkic peoples). Similar "tamga-like" symbols were sometimes also adopted by sedentary peoples adjacent to the Pontic-Caspian steppe both in Eastern Europe and Central Asia, such as the East Slavs, whose ancient royal symbols are sometimes referred to as "tamgas" and have similar appearance.
Unlike European coats of arms, tamgas were not always inherited, and could stand for families or clans (for example, when denoting territory, livestock, or religious items) as well as for specific individuals (such as when used for weapons, or for royal seals). One could also adopt the tamga of one's master or ruler, therefore signifying said master's patronage. Outside of denoting ownership, tamgas also possessed religious significance, and were used as talismans to protect one from curses (it was believed that, as symbols of family, tamgas embodied the power of one's heritage). Tamgas depicted geometric shapes, images of animals, items, or glyphs. As they were usually inscribed using heavy and unwieldy instruments, such as knives or brands, and on different surfaces (meaning that their appearance could vary somewhat), tamgas were always simple and stylised, and needed to be laconic and easily recognisable.
Every sultan of the Ottoman Empire had his own monogram, called the tughra, which served as a royal symbol. A coat of arms in the European heraldic sense was created only in the late 19th century, after Hampton Court requested one from the Ottoman Empire for inclusion in its collection. As coats of arms had not previously been used in the Ottoman Empire, one was designed in response to this request, and the final design was adopted by Sultan Abdul Hamid II on April 17, 1882. It included two flags: the flag of the Ottoman Dynasty, which had a crescent and a star on a red base, and the flag of the Islamic Caliph, which had three crescents on a green base.
Heraldry flourishes in the modern world; institutions, companies, and private persons continue using coats of arms as their pictorial identification. In the United Kingdom and Ireland, the English Kings of Arms, Scotland's Lord Lyon King of Arms, and the Chief Herald of Ireland continue making grants of arms. There are heraldic authorities in Canada, South Africa, Spain, and Sweden that grant or register coats of arms. In South Africa, the right to armorial bearings is also determined by Roman Dutch law, due to its origins as a 17th-century colony of the Netherlands.
Heraldic societies abound in Africa, Asia, Australasia, the Americas and Europe. Heraldry aficionados participate in the Society for Creative Anachronism, medieval revivals, micronations and other related projects. Modern armigers use heraldry to express ancestral and personal heritage as well as professional, academic, civic, and national pride. Little is left of class identification in modern heraldry, where the emphasis is more than ever on expression of identity.
Heraldry continues to build on its rich tradition in academia, government, guilds and professional associations, religious institutions, and the military. Nations and their subdivisions – provinces, states, counties, cities, etc. – continue to build on the traditions of civic heraldry. The Roman Catholic Church, Anglican churches, and other religious institutions maintain the traditions of ecclesiastical heraldry for clergy, religious orders, and schools.
Many of these institutions have begun to employ blazons representing modern objects unknown in the medieval world. For example, some heraldic symbols issued by the United States Army Institute of Heraldry incorporate symbols such as guns, airplanes, or locomotives. Some scientific institutions incorporate symbols of modern science such as the atom or particular scientific instruments. The arms of the United Kingdom Atomic Energy Authority uses traditional heraldic symbols to depict the harnessing of atomic power. Locations with strong associations to particular industries may incorporate associated symbols. The coat of arms of Stenungsund Municipality in Sweden, pictured right, incorporates a hydrocarbon molecule, alluding to the historical significance of the petrochemical industry in the region.
Heraldry in countries with heraldic authorities continues to be regulated generally by laws granting rights to arms and recognizing possession of arms as well as protecting against their misuse. Countries without heraldic authorities usually treat coats of arms as creative property in the manner of logos, offering protection under copyright laws. This is the case in Nigeria, where most of the components of its heraldic system are otherwise unregulated.
Heretic (video game)
Heretic is a dark fantasy first-person shooter video game released in 1994. It was developed by Raven Software and published by id Software through GT Interactive. The game was released on Steam on August 3, 2007.
Using a modified version of the "Doom" engine, "Heretic" was one of the first first-person games to feature inventory manipulation and the ability to look up and down. It also introduced multiple gib objects that spawned when a character suffered a death by extreme force or heat; previously, the character would simply crumple into a heap. The game used randomized ambient sounds and noises, such as evil laughter, chains rattling, distantly ringing bells, and water dripping in addition to the background music to further enhance the atmosphere. The music in the game was composed by Kevin Schilder. An indirect sequel, "Hexen: Beyond Heretic", was released the following year; a direct sequel continuing the story, "Heretic II", followed in 1998.
Three brothers (D'Sparil, Korax, and Eidolon), known as the Serpent Riders, have used their powerful magic to possess seven kings of Parthoris, turning them into mindless puppets and corrupting their armies. The Sidhe elves resist the Serpent Riders' magic, so the Serpent Riders declared the Sidhe heretics and waged war against them. The Sidhe are forced to take a drastic measure: severing the natural power of the kings, destroying them and their armies, but at the cost of weakening the elves' own power, which gives the Serpent Riders the opening to slay the Sidhe elders. While the Sidhe retreat, one elf (revealed to be named Corvus in "Heretic II") sets off on a quest of vengeance against the weakest of the three Serpent Riders, D'Sparil. He travels through the "City of the Damned", the ruined capital of the Sidhe (its real name is revealed to be Silverspring in "Heretic II"), then past the demonic breeding grounds of Hell's Maw and finally the secret Dome of D'Sparil.
The player must first fight through the undead hordes infesting the location where the elders performed their ritual. At its end is the gateway to Hell's Maw, guarded by the Iron Liches. After defeating them, the player must seal the portal to prevent further infestation; he then enters a portal guarded by the Maulotaurs and finds himself inside D'Sparil's dome. After killing D'Sparil, Corvus ends up on a perilous journey with little hope of returning home.
The gameplay of "Heretic" is heavily derived from "Doom", with a level-based structure and an emphasis on finding the proper keys to progress. Many weapons are similar to those from "Doom"; the early weapons in particular are near-exact functional copies of those seen in "Doom". Raven added a number of features to "Heretic" that differentiated it from "Doom", however, notably interactive environments, such as rushing water that pushes the player along, and inventory items. In "Heretic", the player can pick up many different items to use at their discretion. These items range from health potions to the "morph ovum", which transforms enemies into chickens. One of the most notable pickups is the "Tome of Power", which acts as a secondary firing mode for certain weapons, producing a much more powerful projectile from each weapon and, in some cases, changing the look of the projectile entirely. "Heretic" also features an improved version of the "Doom" engine, sporting the ability to look up and down within constraints, as well as fly. However, the rendering method for looking up and down merely uses a proportional pixel-shearing effect rather than any new rendering algorithm, which distorts the view considerably when looking at high-elevation angles.
As with "Doom", "Heretic" contains various cheat codes that allow the player to be invulnerable, obtain every weapon, instantly kill every monster in a particular level, and several other abilities. However, if the player uses the "all weapons and keys" cheat ("idkfa") from "Doom", a message appears warning the player against cheating and all of his weapons are taken away, leaving him with only a quarterstaff. If the player uses the "god mode" cheat ("iddqd") from "Doom", the game displays the message "Trying to cheat, eh? Now you die!" and kills the player character.
The original shareware release of "Heretic" came bundled with support for online multiplayer through the new DWANGO service.
Like "Doom", "Heretic" was developed on NeXTSTEP. John Romero helped Raven employees set up the development computers, and taught them how to use id's tools and "Doom" engine.
The original version of "Heretic" was only available through shareware registration (i.e. mail order) and contained three episodes. The retail version, "Heretic: Shadow of the Serpent Riders", was distributed by GT Interactive in 1996, and featured the original three episodes and two additional episodes: "The Ossuary", which takes the player to the shattered remains of a world conquered by the Serpent Riders several centuries ago, and "The Stagnant Demesne", where the player enters D'Sparil's birthplace. This version was the first official release of "Heretic" in Europe. A free patch was also downloadable from Raven's website to update the original "Heretic" with the content found in "Shadow of the Serpent Riders".
Along with the two full additional episodes, "Shadow of the Serpent Riders" contains three additional levels in a third additional episode (unofficially known as "Fate's Path") which is inaccessible without the use of cheat codes. The first of these three levels can be accessed by typing the level-warp cheat ("engage61"). The first two levels are fully playable, but the third level has no exit, so the player is unable to progress further.
On January 11, 1999, the source code of the game engine used in "Heretic" was published by Raven Software under a license that granted rights to non-commercial use, and was re-released under the GNU General Public License on September 4, 2008. This resulted in ports to Linux, Amiga, Atari, and other platforms, and updates to the game engine to use 3D acceleration. The shareware version of a console port for the Dreamcast was also released.
"Heretic" received mixed reviews, garnering an aggregate score of 62% on GameRankings; "PC Zone" rated it 78%. "Heretic" and "Hexen" shipped a combined total of roughly 1 million units by August 1997.
While remarking that "Heretic" is a thinly veiled clone of "Doom", and that its late European release, with "Quake" due out shortly, made it somewhat outdated, "Maximum" nonetheless regarded it as an extremely polished and worthwhile purchase. They particularly highlighted the two additional episodes of the retail version, saying they offer a satisfying challenge even to first-person shooter veterans and are largely what make the game worth buying.
In 1996, "Computer Gaming World" listed being turned into a chicken as #3 on its list of "the 15 best ways to die in computer gaming".
"Next Generation" reviewed the PC version of the game, and stated that "If you're only going to get one action game in the next couple of months, this is the one."
"Heretic" has received three sequels: "Hexen: Beyond Heretic", "Hexen II", and "Heretic II". Following ZeniMax Media's acquisition of id Software, the rights to the series have been disputed between id and Raven Software; Raven's parent company Activision holds the development rights, while id holds the publishing rights to the first three games. Until both companies come to an agreement, neither will be able to make another installment in the series.
Further homages to the series have been made in other id Software titles. In 2009's "Wolfenstein", which Raven Software developed, "Heretic"'s Tomes of Power are collectible power-ups found throughout the game. The character Galena from "Quake Champions" wears armor bearing the icon of the Serpent Riders.
Hexen: Beyond Heretic
Hexen: Beyond Heretic is a dark fantasy first-person shooter video game developed by Raven Software and published by id Software through GT Interactive Software on October 30, 1995. It is the sequel to 1994's "Heretic", and the second game in Raven Software's "Serpent Riders" trilogy, which culminated with "Hexen II". The title comes from the German noun "Hexen", which means "witches", and/or the verb "hexen", which means "to cast a spell". Game producer John Romero stated that a third, unreleased game in this series was to be called "Hecatomb".
"Hexen: Beyond Heretic" met with highly positive reviews upon release, though the various 1997 console ports were negatively received due to issues with frame rate and controls and the aging of the game itself. Critical plaudits for the game centered on the non-linear level design and the selection of three playable characters, each offering a distinct gameplay experience.
Following the tale of D'Sparil's defeat in "Heretic", "Hexen" takes place in another realm, Cronos, which is besieged by the second of the three Serpent Riders, Korax. Three heroes set out to destroy Korax. The player assumes the role of one such hero. Throughout the course of his quest, he travels through elemental dungeons, a wilderness region, a mountainside seminary, a large castle, and finally a necropolis, before the final showdown with the Serpent Rider.
A new series feature introduced in "Hexen" is the choice of three character classes. Players may choose to play as a fighter (Baratus), a cleric (Parias), or a mage (Daedolon). Each character has unique weapons and physical characteristics, lending an additional degree of variety and replay value to the game. The Fighter relies mainly on close quarter physical attacks with weapons both mundane and magical in nature, and is tougher and faster than the other characters. The Mage uses an assortment of long-range spells, whose reach is counterbalanced by the fact that he is the most fragile and slowest moving of the classes. The Cleric arms himself with a combination of both melee and ranged capabilities, being a middle ground of sorts between the other two classes. Additionally, certain items, such as the flechette (poison gas bomb), behave differently when collected and used by each of the classes, functioning in a manner better suiting their varying approach to combat.
"Hexen" introduces "hub" levels to the series, wherein the player can travel back and forth between central hub levels and connected side levels. This is done in order to solve larger-scale puzzles that require a series of items or switches to be used. The player must traverse through a hub in order to reach a boss and advance to the next hub.
The inventory system returns from "Heretic" with several new items such as the "disc of repulsion" which pushes enemies away from the player and the "icon of the defender" which provides invincibility to each class in a different manner.
Like "Heretic", "Hexen" was developed on NeXTSTEP. "Hexen" uses a modified version of the "Doom" engine, which allows looking up and down, network play with up to eight players, and the choice of three character classes. It also popularized the "hub system" of level progression in the genre of first-person shooter games. Unlike previous games, which had relied purely on General MIDI for music, "Hexen" is also able to play tracks from a CD. The game's own CD contained a soundtrack in an audio format that was exactly the same as the MIDI soundtrack, but played through a high-quality sound module. However, the most significant improvement was the addition of wall translation, rotation, and level scripting.
"Polyobjects" are the walls that move within the game. Because the "Doom" engine uses binary space partitioning for rendering, with the level geometry precomputed when the map is built, it does not support moving walls. "Hexen"s moving walls are actually one-sided lines built somewhere else on the map and rendered at the desired start spot when the level is loaded. This enables a pseudo-moving wall, but does not allow moving sectors (such as seeing the tops of moving doors). This often creates problems in sectors containing more than one node, however, which explains the relatively limited use of polyobjects.
Whereas "Doom", "Doom II", and "Heretic" rely on lines within the maps to perform simple actions, "Hexen" also allows these actions to be activated by Action Code Script (ACS). These scripts use a syntactic variant of C, thus allowing special sequencing of game actions. Programming features such as randomization, variables, and intermap script activation enable smooth hub gameplay and are responsible for most of the special effects within the game: on-screen messages, random sound effects, monster spawning, sidedef texture changes, versatile control of polyobjects, level initialization for deathmatch, and even complex environment changes such as earthquakes manipulating floor textures and heights.
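As a sketch of the C-like ACS syntax described above, a small Hexen-format script might look like the following. The script number, sector tag, message text, and spawn parameters are purely illustrative rather than taken from the game's own map scripts, and the header name and special signatures follow the conventions of Raven's ACC compiler as distributed with the source release:

```
#include "common.acs"

// Illustrative script: when triggered by a tagged line on the map,
// print an on-screen message, open a tagged door, wait one second
// (35 tics), then spawn a monster at a map spot.
script 1 (void)
{
    Print(s:"The gate grinds open...");
    Door_Open(4, 16);              // sector tag 4, door speed 16
    Delay(35);
    Thing_Spawn(2, T_ETTIN, 128);  // map spot TID 2, facing angle 128
}
```

Scripts of this kind are compiled by ACC into the map's BEHAVIOR lump and bound to lines or things in the level editor, which is what enables the on-screen messages, monster spawning, and intermap hub effects the game is known for.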
On January 11, 1999, the source code for "Hexen" was released by Raven Software under a license that granted rights to non-commercial use, and was re-released under the GNU General Public License on September 4, 2008. This allowed the game to be ported to different platforms such as Linux, AmigaOS, and OS/2 (EComStation).
"Hexen" is compatible with many "Doom" source ports; "Hexen"s features are also compatible with "Doom" WADs made for source ports regardless of what game they are being played on.
The score was composed by Kevin Schilder. In contrast to "Heretic", some songs in "Hexen" had higher-quality CD versions in addition to their MIDI versions. When playing in CD-audio mode, songs absent from the CD were replaced by existing CD tracks.
"Hexen" was released for the Sega Saturn, PlayStation, and Nintendo 64, all released by GT Interactive during the first half of 1997. While presenting several specific differences in their respective translations of the original PC game, all of them constitute essentially the same game with no major changes to level design, plot, or overall delivery.
The PlayStation version, developed by Probe Entertainment, has the FMV scenes and Redbook audio music from the PC CD-ROM version, but no multiplayer mode. The scripting and animation is slower, enemies have only their front sprites and lack gory deaths when attacked by strong hits or weapons, and the frame rate is slower. Although all levels are present in this version and feature their correct layouts, their architecture details are somewhat simplified and there is some loss in overall lighting quality. This port is based on a beta version of the original PC version of "Hexen" as many gameplay tweaks are shared, such as the simpler level design and the Fighter's fists being weaker compared to other versions.
The Sega Saturn version, also developed by Probe, inherits most of the restrictions of the PlayStation version, such as the simplified scenery architecture and the downgraded lighting, although it does improve on certain aspects. The scripting is faster, and the frame rate, while not fluid or consistent, is slightly better. Enemies are still missing all but their front sprites, but they retain their gory deaths when killed by a strong hit or weapon. This version also has hidden two-player link-up cooperative and deathmatch modes, accessible only through the unlockable cheat menu. While this port shares the FMV scenes and most of the Redbook audio music from the other CD-ROM versions, it also includes some new music tracks.
The Nintendo 64 version, developed by Software Creations, retains all of the graphical quality and scenery architecture, has a consistent frame rate, and includes high detail and smooth filtering. This version also has four-player split-screen cooperative and deathmatch modes, although they must be played in low detail mode. Due to cartridge storage limitations, the Nintendo 64 version is based on the original PC floppy version and lacks the FMV scenes and Redbook audio music introduced in the CD-ROM version, although it has new narrative introductions to the levels.
"Deathkings of the Dark Citadel" is an official expansion pack that was released for "Hexen" in 1996. It features three more hubs with a total of 20 new single player levels and six new deathmatch levels. Unlike the "Shadow of the Serpent Riders" expansion pack for "Heretic", it had to be purchased in retail stores or by mail order. This was unusual at the time, as most non-free expansion packs also included other new or revised gameplay elements. "Deathkings of the Dark Citadel", unlike "Shadow of the Serpent Riders", was not packaged with the original game, meaning that both had to be purchased separately, and the expansion would not work without already having "Hexen". This expansion pack also did not initially include nor enable any music. Music could be fully enabled by applying a patch specially released to address this issue (usually found online under the name "dkpatch").
Each of the hubs (The Blight, The Constable's Gate, and The Nave) features one secret level, and new puzzles based on the quest items from the original game (no new quest artifacts were added). Any type of enemy may spawn on the map.
The final level of the expansion, the Dark Citadel itself, is an arena-like level, which features teleporting waves of monsters and three bosses (Fighter, Cleric, and Mage clones).
"Heretic" and "Hexen" shipped a combined total of roughly 1 million units to retailers by August 1997.
Reviewing the PC version, "Maximum" remarked that "Hexen" sets itself apart from other "3D slashers" with its selection of characters and novel approach to level design, which "leads to your character choosing their path rather than being guided around a rather linear series of rooms, proving that 3D games have matured". They also commented that the gameplay is consistently intense due to the difficulty of the enemies, the variety of weapons and power-ups, and the sheer size and breadth of the levels. They gave the game 5 out of 5 stars and their "Maximum Game of the Month" award. A reviewer for "Next Generation" opined that ""Hexen" takes everything that was good about "Heretic", and makes it even better." He commented that the ability to choose between three different character classes gives the game replay value, something that had been missing from first-person shooters up until then, and though the graphics are blocky and pixelated, the "eerily lifelike" sound effects make up for it to a large extent. Like "Maximum", he praised the non-linear level design and concluded that the game was a must-have for any first-person shooter fan. Chris Hudak, citing the differing abilities of the three playable characters, called "Hexen" "Slicker, smarter and more stylish than "Doom" – with all the killing and three times the replay value."
"Computer Games Strategy Plus" named "Hexen" the best "First-Person Action" title of 1995. It was also a runner-up for "Computer Gaming World"'s 1995 "Action Game of the Year" award, which ultimately went to "". The editors called it "another "Doom" bloodfest distinguished by its fantasy setting and the fact that it let you play as either a fighter, priest or mage, each with unique attributes and weapons".
The Saturn version was far less well-received. A review in "Next Generation" of the Saturn version reasoned that, "Like oil and water, "Doom"-style games and console conversions don't mix well. Unless the programmers are willing to rewrite the graphics engine from scratch, PC ports suffer from getting cramped into too little memory and neglecting the console's native 3D hardware." The reviewer recommended Saturn owners instead try "PowerSlave" or "Ghen War", first-person shooters specifically designed for the console. Shawn Smith and Sushi-X of "Electronic Gaming Monthly" similarly said the game had not been converted well from PC. Others described the Saturn port as an exact conversion, and argued the problem was simply that "Hexen" was too old a game to be released for console in 1997 without any improvements. Though they disagreed on exact reasons, most critics agreed that the Saturn version suffers from pixelated graphics, dramatic drops in frame rate, and cumbersome controls. Scary Larry of "GamePro" gave it a mixed review, summarizing that "although it doesn't live up to "PowerSlave"'s standards, it's still decent fun." John Broady of "GameSpot" gave a slightly more dismal assessment: "Despite these glaring deficiencies, "Hexen" nonetheless offers enough enhancements over the standard shooter to warrant a rental, especially for fans of role-playing games who thirst for real-time action. ... But for the rest, the Saturn version of "Hexen" is a classic game of too little and too late." Rich Leadbetter of "Sega Saturn Magazine" and James Price of "Saturn Power" defended the Saturn version, commenting that, although not outstanding, it is far superior to the Saturn version of "Doom", which was released at roughly the same time. Price was particularly enthusiastic about the link cable-enabled multiplayer mode.
The Nintendo 64 version also left most critics unimpressed. The four-player mode was praised as an unprecedented feature in console first-person shooters, but the graphics were considered unacceptably poor, particularly the frame rate and the use of the Nintendo 64's mip-mapping and anti-aliasing in a way that actually worsened the visuals of the game. As with the Saturn version, some critics opined that "Hexen" was too dated by this time to be receiving a straightforward port. Joe Fielder of "GameSpot" additionally complained of a severe bug in the save feature. In a dissenting opinion, Scary Larry concluded that "Although not as polished as "" or as fun and creepy as "Doom 64", "Hexen" gives you three characters to choose from, and the action's addicting once you get into it." He gave it higher scores than the Saturn version in every category except sound. In contrast, Matt Casamassina of "IGN" called it "A shoddy port of a PC game that wasn't so great to begin with."
The PlayStation version was even more negatively received; critics universally panned the port for its poor frame rate, pixelated graphics, and sloppy platform-jumping controls.
"Electronic Gaming Monthly"'s 1998 Video Game Buyer's Guide named "Hexen" the 1997 "Game that Should've Stayed on the PC", commenting that while the Nintendo 64 version was the best of the console ports, all three were poor conversions, and "Hexen" was too old by the time they were released.
Hexen II
Hexen II is a dark fantasy first-person shooter (FPS) video game developed by Raven Software from 1996 to 1997, then published by id Software and distributed by Activision. It is the third game in the "Hexen"/"Heretic" series, and the last in the Serpent Riders trilogy. It was later made available on Steam on August 3, 2007. Using a modified "Quake" engine, it features single-player and multiplayer game modes, as well as four character classes to choose from, each with different abilities. These include the "offensive" Paladin, the "defensive" Crusader, the spell-casting Necromancer, and the stealthy Assassin.
Improvements from "Hexen" and "Quake" include destructible environments, mounted weapons, and unique level-up abilities. Like its predecessor, "Hexen II" also uses a "hub" system. These hubs are a number of interconnected levels; changes made in one level have effects in another. Furthermore, the Tome of Power artifact makes a return from "Heretic".
The gameplay of "Hexen II" is very similar to that of the original "Hexen". Instead of three classes, "Hexen II" features four: Paladin, Crusader, Assassin, and Necromancer, each with their own unique weapons and play style.
"Hexen II" also adds certain role-playing video game elements to the mix. Each character has a series of statistics which increase as they gain experience, causing the player character to grow in power as his or her HP and Mana increase.
Thyrion is a world that was enslaved by the Serpent Riders. The two previous games in the series documented the liberation of two other worlds, along with the death of their Serpent Rider overlords. Now, the oldest and most powerful of the three Serpent Rider brothers, Eidolon, must be defeated to free Thyrion. Eidolon is supported by his four generals, themselves a reference to the Four Horsemen of the Apocalypse. To confront each general, the player has to travel to four different continents, each possessing a distinct theme (Medieval European for Blackmarsh, Mesoamerican for Mazaera, Ancient Egyptian for Thysis, and Greco-Roman for Septimus). Then, finally, the player returns to Blackmarsh in order to confront Eidolon himself inside his own dominion, the Cathedral.
Originally intended to be the final game in a trilogy, the sequel to "Hexen" was first titled "Hecatomb", but the project was abandoned after John Romero left id Software in 1996. Activision, the distributor at the time, pressured Raven Software to split the development of "Hecatomb" into two different games, "Hexen II" and "Heretic II". Activision felt that the previous entries in the series, "Heretic" and "Hexen", were different enough from one another that they should be treated as separate entities going forward, instead of being capped by one final game to complete a trilogy. Only a select few of Romero's ideas from "Hecatomb" would ultimately make their way into what became "Hexen II" and "Heretic II".
"Hexen II" was based on an enhanced version of the "Quake" engine. "Hexen II", by way of the "Quake" engine, uses OpenGL for 3D acceleration. However, due to the prevalence of 3dfx hardware at the time of release, the Windows version of the game installs an OpenGL ICD (opengl32.dll) designed specifically for 3dfx's hardware. This driver acts as a wrapper for the proprietary Glide API, and thus is only compatible with 3dfx hardware. Custom OpenGL drivers were also released by PowerVR and Rendition for running "Hexen II" with their respective (and also now defunct) products. Removal of the ICD allows the game to use the default OpenGL system library. Much of the music in this game is remixed versions of the soundtracks of "" and "Heretic" to match the hub themes.
Activision acquired the rights to publish versions of the game for the PlayStation and Sega Saturn. However, neither port was released.
A modification titled "Siege" was created and released by Raven Software in 1998 using updated QuakeWorld architecture, aptly dubbed "HexenWorld". The production concept was to eliminate a normal deathmatch environment in favor of a teamplay castle siege. The basic premise was to divide the players into two teams—attackers and defenders—with each side either assaulting or protecting the castle respectively. At the end of the time limit, whichever team controlled the crown was declared victorious. The mod featured appropriate objects used in the single-player portion of the game, namely catapults and ballistae. The classes, however, were drastically altered with new weapons and abilities, reflecting the departure from the normal deathmatch experience presented in "HexenWorld".
Following the tradition from "Heretic" and "Hexen", Raven released the source code of the "Hexen II" engine on November 10, 2000. This time the source was released under the GNU General Public License, allowing source ports to be made to different platforms like Linux and the Dreamcast.
An expansion pack called "Hexen II Mission Pack: Portal of Praevus" was released on April 1, 1998. It features new levels, new enemies and a new playable character class, The Demoness. It focuses on the attempted resurrection of the three Serpent Riders by the evil wizard Praevus, and takes place in a fifth continent, Tulku, featuring a Sino-Tibetan setting. Unlike the original game, the expansion was not published by id Software, and as such is not currently available via digital re-releases.
The expansion features new quest items, new enemies, and new weapons for the Demoness. She is the only player class to have a ranged starting weapon (similar to the Mage class in the original "Hexen"), whereas all other characters start with melee weapons. It also introduced minor enhancements to the game engine, mostly related to user interface, level scripts, particle effects (rain or snow), and 3D objects. "Portal of Praevus" also features a secret (easter egg) skill level, with respawning monsters. The only released patch for the expansion added respawning of certain items (such as health and ammo) in Nightmare mode, making it slightly easier to play.
Because of the popularity of the original "Hexen", the game was heavily anticipated. Upon its release, "Hexen II" received "mixed to positive" reviews. "Edge" praised the game for being different from other "Quake" engine-based games, highlighting its inventive and interactive levels, enemy variety, and artificial intelligence. The magazine also credited the game's diversity of weapons and spells for offering different combat strategies.
According to Erik Bethke, "Hexen II" was commercially unsuccessful, with sales slightly above 30,000 units.
Heretic II
Heretic II is a dark fantasy action-adventure game developed by Raven Software and published by Activision in 1998, continuing the story of Corvus, the main character from its predecessor, "Heretic". It is the fourth game in the "Heretic"/"Hexen" series and comes after the "Serpent Rider" trilogy.
Using a modified Quake II engine, the game features a mix of a third-person camera with a first-person shooter's action, making for a new gaming experience at the time. While progressive, this was a controversial design decision among fans of the original game, a well-known first-person shooter built on the Doom engine. The music was composed by Kevin Schilder. Gerald Brom contributed conceptual work to characters and creatures for the game. This is the only "Heretic"/"Hexen" video game that is unrelated to id Software, apart from its role as engine licenser.
"Heretic II" was later ported to Linux by Loki Software, to the Amiga by Hyperion Entertainment, and Macintosh by MacPlay.
After Corvus returns from his banishment, he finds that a mysterious plague has swept the land of Parthoris, taking the sanity of those it does not kill. Corvus, the protagonist of the first game, is forced to flee his hometown of Silverspring after the infected attack him, but not before he is infected himself. The effects of the disease are held at bay in Corvus’ case because he holds one of the Tomes of Power, but he still must find a cure before he succumbs.
His quest leads him through the city and swamps to a jungle palace, then through a desert canyon and insect hive, followed by a dark network of mines and finally to a castle on a high mountain where he finds an ancient Seraph named Morcalavin. Morcalavin is trying to reach immortality using the seven Tomes of Power, but he uses a false tome, as Corvus has one of them. This has caused Morcalavin to go insane and create the plague. During a battle between Corvus and Morcalavin, Corvus switches the false tome for his real one, curing Morcalavin's insanity and ending the plague.
Unlike previous games in the "Heretic/Hexen" series, which were first-person shooters, players control Corvus from a camera fixed behind him in the third-person perspective. Players are able to use a combination of both melee and ranged attacks, similar to its predecessor. While there are still three weapons the player can collect that each use their own ammo, they also have the ability to use several offensive and defensive spells that draw from pools of green and blue mana, respectively. The Tome of Power is no longer an item scattered around the levels, but a defensive spell that still works in the same manner as the other games in the series by improving damage and granting weapons and offensive spells new abilities for a limited time. Melee combat is also more varied, with the ability to perform several attacks using Corvus' bladestaff and cut off the limbs of enemies, rendering them harmless. Players are also able to utilize magical shrines throughout the game that grant a variety of effects upon use, such as silver or gold armor, a temporary boost in health, a permanent enhancement to the bladestaff, etc.
The game consists of a wide variety of high fantasy medieval backdrops to Corvus's adventure. The third-person perspective and three-dimensional game environment allowed developers to introduce a wide variety of gymnastic moves, like climbing up ledges, back-flipping off walls, and pole vaulting, in a much more dynamic environment than the original game's engine could produce. Both games invite comparison with their respective game-engine namesake: the original "Heretic" was built on the "Doom" engine, and "Heretic II" was built using the "Quake II" engine, later known as id Tech 2. "Heretic II" was favorably received at release because it took a different approach to its design.
Inspired by the "Tomb Raider" series, Raven Software decided to make use of the "Quake II" engine to create a third-person action game. A major step in early development was Gerald Brom's concept art. Within a month, the company had programmed the game's camera system. After Activision approved the game's demo, Raven Software aimed to get the full game finished by Christmas; it was ultimately released just prior to that Thanksgiving. To add to the complications, they needed a software renderer to make the game playable for 16-bit users (especially in Europe).
For the animation, the main character Corvus was provided with a backbone for realism and had a total of 1600 frames. Most of the animations were done using Softimage. The static world objects and simplified animations were done with 3D Studio Max. The engine was capable of showing up to 4,000 polygons on screen.
Following ZeniMax Media's acquisition of id Software in 2009, the rights to the series have been disputed between both id and Raven Software; Raven holds the development rights, while id holds the publishing rights to "Heretic II"'s predecessors. Until both companies come to an agreement, neither will be able to release another installment in the series.
"Heretic II" was a commercial flop. According to PC Data, its sales in the United States totaled 28,994 units by April 1999. Activision's Steve Felsen blamed this performance on the game's design: he noted that "fans of first-person shooters—the target audience for this game—stayed away due to the third-person perspective".
"Next Generation" reviewed the PC version of the game, rating it three stars out of five, and stated that ""Heretic II" has a lot going for it. It easily earns its space on the shelf with the heavy hitters this season, but it also serves as a reminder to all that every aspect of game design needs to be pushed if you want your project to truly stand out."
"Edge" praised the game for its mixture of platform and shoot 'em up action, stating that "Heretic II" is different enough to stand out from both first-person and third-person games like id Software's first-person shooters or Core Design's "Tomb Raider" games. "Heretic II" was a finalist for "Computer Gaming World"'s 1998 "Best Action" award, which ultimately went to "Battlezone". The editors wrote that "Heretic II" "proved that the "Quake II" engine could work in a third-person game "and" that a spell-casting, shirtless elf could actually kick ass."
Household hardware
Household hardware (or simply, hardware) is equipment that can be touched or held by hand such as keys, locks, nuts, screws, washers, hinges, latches, handles, wire, chains, belts, plumbing supplies, electrical supplies, tools, utensils, cutlery and machine parts. Household hardware is typically sold in hardware stores.
Howard Carter
Howard Carter (9 May 1874 – 2 March 1939) was a British archaeologist and Egyptologist who became world-famous after discovering the intact tomb (designated KV62) of the 18th Dynasty Pharaoh, Tutankhamun (colloquially known as "King Tut" and "the boy king"), in November 1922.
Howard Carter was born in Kensington on 9 May 1874, the youngest child (of eleven) of artist and illustrator Samuel John Carter and Martha Joyce Sands. His father trained and developed Howard's artistic talents.
Carter spent much of his childhood with relatives in the Norfolk market town of Swaffham, the birthplace of both his parents. Nearby was the mansion of the Amherst family, Didlington Hall, containing a sizable collection of Egyptian antiques, which sparked Carter's interest in that subject. In 1891 the Egypt Exploration Fund (EEF), on the prompting of Mary Cecil, sent Carter to assist an Amherst family friend, Percy Newberry, in the excavation and recording of Middle Kingdom tombs at Beni Hasan.
Although only 17, Carter was innovative in improving the methods of copying tomb decoration. In 1892, he worked under the tutelage of Flinders Petrie for one season at Amarna, the capital founded by the pharaoh Akhenaten. From 1894 to 1899, he worked with Édouard Naville at Deir el-Bahari, where he recorded the wall reliefs in the temple of Hatshepsut.
In 1899, Carter was appointed to the position of Chief Inspector of the Egyptian Antiquities Service (EAS). He supervised a number of excavations at Thebes (now known as Luxor). In 1904, he was transferred to the Inspectorate of Lower Egypt. Carter was praised for his improvements in the protection of (and accessibility to) existing excavation sites, and his development of a grid-block system for searching for tombs. The Antiquities Service also provided funding for Carter to head his own excavation projects.
Carter resigned from the Antiquities Service in 1905 after a formal inquiry into what became known as the Saqqara Affair, a noisy confrontation between Egyptian site guards and a group of French tourists. Carter sided with the Egyptian personnel.
In 1907, after three hard years for Carter, Lord Carnarvon employed him to supervise excavations of nobles' tombs in Deir el-Bahri, near Thebes. Gaston Maspero had recommended Carter to Carnarvon as he knew he would apply modern archaeological methods and systems of recording.
In 1914, Lord Carnarvon received the concession to dig in the Valley of the Kings, and Carter was again employed to lead the work. However, excavations and study were soon interrupted by the First World War; Carter spent the war years working for the British Government as a diplomatic courier and translator. He enthusiastically resumed his excavation work towards the end of 1917.
By 1922, Lord Carnarvon had become dissatisfied with the lack of results after many years of finding little. He informed Carter that he had one more season of funding to make a significant find in the Valley of the Kings.
Carter returned to the Valley of the Kings and investigated a line of huts that he had abandoned a few seasons earlier. The crew cleared the huts and the rock debris beneath them. On 4 November 1922, their young water boy accidentally stumbled on a stone that turned out to be the top of a flight of steps cut into the bedrock. Carter had the steps partially dug out until the top of a mud-plastered doorway was found. The doorway was stamped with indistinct cartouches (oval seals with hieroglyphic writing). Carter ordered the staircase to be refilled, and sent a telegram to Carnarvon, who arrived two-and-a-half weeks later, on 23 November.
On 26 November 1922, Carter made a "tiny breach in the top left-hand corner" of the doorway, with Carnarvon, his daughter Lady Evelyn Herbert, and others in attendance, using a chisel that his grandmother had given him for his 17th birthday. He was able to peer in by the light of a candle and see that many of the gold and ebony treasures were still in place. He did not yet know whether it was "a tomb or merely an old cache", but he did see a promising sealed doorway between two sentinel statues. Carnarvon asked, "Can you see anything?" Carter replied with the famous words: "Yes, wonderful things!" Carter had, in fact, discovered Tutankhamun's tomb (subsequently designated KV62).
Carter's notes and photographic evidence indicate that he, Lord Carnarvon, and Lady Evelyn Herbert entered the burial chamber in November 1922, before the official opening.
The next several months were spent cataloguing the contents of the antechamber under the "often stressful" supervision of Pierre Lacau, director general of the Department of Antiquities of Egypt. On 16 February 1923, Carter opened the sealed doorway and found that it did indeed lead to a burial chamber, and he got his first glimpse of the sarcophagus of Tutankhamun. The tomb was considered the best preserved and most intact pharaonic tomb ever found in the Valley of the Kings, and the discovery was eagerly covered by the world's press, but most of their representatives were kept in their hotels, much to their annoyance. Only H. V. Morton from "The Times" newspaper was allowed on the scene, and his vivid descriptions helped to cement Carter's reputation with the British public.
Towards the end of February 1923, a rift between Lord Carnarvon and Carter, probably caused by a disagreement on how to manage the supervising Egyptian authorities, temporarily closed excavation. Work recommenced in early March after Lord Carnarvon apologised to Carter. Later that month Lord Carnarvon contracted blood poisoning while staying in Luxor near the tomb site. He died in Cairo on 5 April 1923. Lady Carnarvon retained her late husband's concession in the Valley of the Kings, allowing Carter to continue his work.
Carter's meticulous cataloguing of the thousands of objects in the tomb continued until 1932, most being moved to the Egyptian Museum in Cairo. There were several breaks in the work, including one lasting nearly a year in 1924–25, caused by a dispute over what Carter saw as excessive control of the excavation by the Egyptian Antiquities Service. The Egyptian authorities eventually agreed that Carter should complete the tomb's clearance.
Despite being involved in the greatest archaeological find of his time, Carter received no honour from the British government. However, in 1926, Carter received the Order of the Nile, third class, from King Fuad I of Egypt.
Carter authored a number of books on Egyptology during his career. During those years he was also awarded an honorary degree of Doctor of Science by Yale University and honorary membership in the Real Academia de la Historia of Madrid, Spain.
The suggestion that Carter had an affair with Lady Evelyn Herbert, the daughter of the 5th Earl of Carnarvon, was dismissed by the 8th Earl, who termed Carter a "stoical loner". A former colleague of Carter's at the British Museum suggested he was homosexual, which an Egyptian guide who knew Carter confirmed, indicating that his tastes extended to "both boys and the occasional 'dancing girl'". There is, however, no sign that Carter enjoyed any close relationships throughout his life.
After the clearance of the tomb had been completed, Carter retired from archaeology and became a part-time agent for collectors and museums, including the Cleveland Museum of Art and the Detroit Institute of Arts. In 1924 he toured Britain, as well as France, Spain and the United States, delivering a series of illustrated lectures. Those in New York City and other US cities were attended by large and enthusiastic audiences, sparking American Egyptomania. Carter never married nor had children.
Carter died at his London flat at 49 Albert Court, next to the Royal Albert Hall, on 2 March 1939, aged 64, from Hodgkin's disease. Few people attended his funeral. Carter is buried in Putney Vale Cemetery in London.
Probate was granted on 5 July 1939 to English Egyptologist Henry Burton and to publishing entrepreneur Bruce Sterling Ingram. Carter is described as Howard Carter of Luxor, Upper Egypt, Africa, and of 49 Albert Court, Kensington Grove, Kensington, London. His estate was valued at £2002. The second grant of Probate was issued in Cairo on 1 September 1939.
The epitaph on the gravestone reads: "May your spirit live, may you spend millions of years, you who love Thebes, sitting with your face to the north wind, your eyes beholding happiness", a quotation taken from the Wishing Cup of Tutankhamun, and "O night, spread thy wings over me as the imperishable stars".
History of Scotland
The recorded history of Scotland begins with the arrival of the Roman Empire in the 1st century, when the province of Britannia reached as far north as the Antonine Wall. North of this was Caledonia, inhabited by the "Picti", whose uprisings forced Rome's legions back to Hadrian's Wall. As Rome finally withdrew from Britain, Gaelic raiders called the "Scoti" began colonising Western Scotland and Wales. Prior to Roman times, prehistoric Scotland entered the Neolithic Era about 4000 BC, the Bronze Age about 2000 BC, and the Iron Age around 700 BC.
The Gaelic kingdom of Dál Riata was founded on the west coast of Scotland in the 6th century. In the following century, Irish missionaries introduced the previously pagan Picts to Celtic Christianity. Following England's Gregorian mission, the Pictish king Nechtan chose to abolish most Celtic practices in favour of the Roman rite, restricting Gaelic influence on his kingdom and avoiding war with Anglian Northumbria. Towards the end of the 8th century, the Viking invasions began, forcing the Picts and Gaels to cease their historic hostility to each other and to unite in the 9th century, forming the Kingdom of Scotland.
The Kingdom of Scotland was united under the House of Alpin, whose members fought among each other during frequent disputed successions. The last Alpin king, Malcolm II, died without male issue in the early 11th century, and the kingdom passed through his daughter's son to the House of Dunkeld or Canmore. The last Dunkeld king, Alexander III, died in 1286, leaving as heir only his infant granddaughter Margaret, Maid of Norway, who herself died four years later. England, under Edward I, took advantage of this questioned succession to launch a series of conquests, resulting in the Wars of Scottish Independence, as Scotland passed back and forth between the House of Balliol and the House of Bruce. Scotland's ultimate victory confirmed it as a fully independent and sovereign kingdom.
When King David II died without issue, his nephew Robert II established the House of Stuart, which would rule Scotland uncontested for the next three centuries. James VI, Stuart king of Scotland, also inherited the throne of England in 1603, and the Stuart kings and queens ruled both independent kingdoms until the Acts of Union in 1707 merged the two kingdoms into a new state, the Kingdom of Great Britain. Ruling until 1714, Queen Anne was the last Stuart monarch. Since 1714, the succession of the British monarchs of the houses of Hanover and Saxe-Coburg and Gotha (Windsor) has been due to their descent from James VI and I of the House of Stuart.
During the Scottish Enlightenment and Industrial Revolution, Scotland became one of the commercial, intellectual and industrial powerhouses of Europe. Later, its industrial decline following the Second World War was particularly acute. In recent decades Scotland has enjoyed something of a cultural and economic renaissance, fuelled in part by a resurgent financial services sector and the proceeds of North Sea oil and gas. Since the 1950s, nationalism has become a strong political topic, with serious debates on Scottish independence, and a referendum in 2014 about leaving the British Union.
People lived in Scotland for at least 8,500 years before Britain's recorded history. At times during the last interglacial period (130,000–70,000 BC) Europe had a climate warmer than today's, and early humans may have made their way to Scotland, with the possible discovery of pre-Ice Age axes on Orkney and mainland Scotland. Glaciers then scoured their way across most of Britain, and only after the ice retreated did Scotland again become habitable, around 9600 BC. Upper Paleolithic hunter-gatherer encampments formed the first known settlements, and archaeologists have dated an encampment near Biggar to around 12000 BC. Numerous other sites found around Scotland build up a picture of highly mobile boat-using people making tools from bone, stone and antlers. The oldest house for which there is evidence in Britain is the oval structure of wooden posts found at South Queensferry near the Firth of Forth, dating from the Mesolithic period, about 8240 BC. The earliest stone structures are probably the three hearths found at Jura, dated to about 6000 BC.
Neolithic farming brought permanent settlements. Evidence of these includes the well-preserved stone house at Knap of Howar on Papa Westray, dating from around 3500 BC, and the village of similar houses at Skara Brae on West Mainland, Orkney, from about 500 years later. The settlers introduced chambered cairn tombs from around 3500 BC, as at Maeshowe, and from about 3000 BC the many standing stones and circles, such as the setting of four stones at Stenness on the mainland of Orkney, which dates from about 3100 BC. These were part of a pattern that developed in many regions across Europe at about the same time.
The creation of cairns and Megalithic monuments continued into the Bronze Age, which began in Scotland about 2000 BC. As elsewhere in Europe, hill forts were first introduced in this period, including the occupation of Eildon Hill near Melrose in the Scottish Borders, from around 1000 BC, which accommodated several hundred houses on a fortified hilltop. From the Early and Middle Bronze Age there is evidence of cellular round houses of stone, as at Jarlshof and Sumburgh in Shetland. There is also evidence of the occupation of crannogs, roundhouses partially or entirely built on artificial islands, usually in lakes, rivers and estuarine waters.
In the early Iron Age, from the seventh century BC, cellular houses began to be replaced on the northern isles by simple Atlantic roundhouses, substantial circular buildings with a dry stone construction. From about 400 BC, more complex Atlantic roundhouses began to be built, as at Howe, Orkney and Crosskirk, Caithness. The most massive constructions that date from this era are the circular broch towers, probably dating from about 200 BC. This period also saw the first wheelhouses, a roundhouse with a characteristic outer wall, within which was a circle of stone piers (bearing a resemblance to the spokes of a wheel), but these would flourish most in the era of Roman occupation. There is evidence for about 1,000 Iron Age hill forts in Scotland, most located below the Clyde-Forth line, which have suggested to some archaeologists the emergence of a society of petty rulers and warrior elites recognisable from Roman accounts.
The surviving pre-Roman accounts of Scotland originated with the Greek Pytheas of Massalia, who may have circumnavigated the British Isles of Albion (Britain) and Ierne (Ireland) sometime around 325 BC. The most northerly point of Britain was called "Orcas" (Orkney). By the time of Pliny the Elder, who died in AD 79, Roman knowledge of the geography of Scotland had extended to the "Hebudes" (The Hebrides), "Dumna" (probably the Outer Hebrides), the Caledonian Forest and the people of the Caledonii, from whom the Romans named the region north of their control Caledonia. Ptolemy, possibly drawing on earlier sources of information as well as more contemporary accounts from the Agricolan invasion, identified 18 tribes in Scotland in his "Geography", but many of the names are obscure and the geography becomes less reliable in the north and west, suggesting early Roman knowledge of these areas was confined to observations from the sea.
The Roman invasion of Britain began in earnest in AD 43, leading to the establishment of the Roman province of Britannia in the south. By the year 71, the Roman governor Quintus Petillius Cerialis had launched an invasion of what is now Scotland. In the year 78, Gnaeus Julius Agricola arrived in Britain to take up his appointment as the new governor and began a series of major incursions. He is said to have pushed his armies to the estuary of the "River Taus" (usually assumed to be the River Tay) and established forts there, including a legionary fortress at Inchtuthil. After his victory over the northern tribes at Mons Graupius in 84, a series of forts and towers were established along the Gask Ridge, which marked the boundary between the Lowland and Highland zones, probably forming the first Roman "limes" or frontier in Scotland. Agricola's successors were unable or unwilling to further subdue the far north. By the year 87, the occupation was limited to the Southern Uplands and by the end of the first century the northern limit of Roman expansion was a line drawn between the Tyne and Solway Firth. The Romans eventually withdrew to a line in what is now northern England, building the fortification known as Hadrian's Wall from coast to coast.
Around 141, the Romans undertook a reoccupation of southern Scotland, moving up to construct a new "limes" between the Firth of Forth and the Firth of Clyde, which became the Antonine Wall. The largest Roman construction inside Scotland, it was a sward-covered wall made of turf, studded with nineteen forts. Having taken twelve years to build, the wall was overrun and abandoned soon after 160, and the Romans retreated to the line of Hadrian's Wall. Roman troops penetrated far into the north of modern Scotland several more times, with at least four major campaigns. The most notable invasion was in 209, when the emperor Septimius Severus led a major force north. After the death of Severus in 211 the Romans withdrew south to Hadrian's Wall, which would remain the Roman frontier until it collapsed in the 5th century. By the close of the Roman occupation of southern and central Britain in the 5th century, the Picts had emerged as the dominant force in northern Scotland, with the various Brythonic tribes the Romans had first encountered there occupying the southern half of the country. Roman influence on Scottish culture and history was not enduring.
In the centuries after the departure of the Romans from Britain, there were four groups within the borders of what is now Scotland. In the east were the Picts, with kingdoms between the river Forth and Shetland. In the late 6th century the dominant force was the Kingdom of Fortriu, whose lands were centred on Strathearn and Menteith and who raided along the eastern coast into modern England. In the west were the Gaelic (Goidelic)-speaking people of Dál Riata with their royal fortress at Dunadd in Argyll, with close links with the island of Ireland, from whom comes the name Scots. In the south was the British (Brythonic) Kingdom of Strathclyde, descendants of the peoples of the Roman-influenced kingdoms of "Hen Ogledd" (Old North), often named Alt Clut, the Brythonic name for their capital at Dumbarton Rock. Finally, there were the English or "Angles", Germanic invaders who had overrun much of southern Britain and held the Kingdom of Bernicia, in the south-east. The first English king in the historical record is Ida, who is said to have obtained the throne and the kingdom about 547. Ida's grandson, Æthelfrith, united his kingdom with Deira to the south to form Northumbria around the year 604. There were changes of dynasty, and the kingdom was divided, but it was re-united under Æthelfrith's son Oswald (r. 634–42).
Scotland was largely converted to Christianity by Irish-Scots missions associated with figures such as St Columba, from the fifth to the seventh centuries. These missions tended to found monastic institutions and collegiate churches that served large areas. Partly as a result of these factors, some scholars have identified a distinctive form of Celtic Christianity, in which abbots were more significant than bishops, attitudes to clerical celibacy were more relaxed, and there were some significant differences in practice from Roman Christianity, particularly the form of tonsure and the method of calculating Easter, although most of these issues had been resolved by the mid-7th century.
Conversion to Christianity may have sped a long-term process of gaelicisation of the Pictish kingdoms, which adopted Gaelic language and customs. There was also a merger of the Gaelic and Pictish crowns, although historians debate whether it was a Pictish takeover of Dál Riata, or the other way around. This culminated in the rise of Cínaed mac Ailpín (Kenneth MacAlpin) in the 840s, which brought to power the House of Alpin. In 867 AD the Vikings seized the southern half of Northumbria, forming the Kingdom of York; three years later they stormed the Britons' fortress of Dumbarton and subsequently conquered much of England except for a reduced Kingdom of Wessex, leaving the new combined Pictish and Gaelic kingdom almost encircled. When he died as king of the combined kingdom in 900, Domnall II (Donald II) was the first man to be called "rí Alban" (i.e. "King of Alba"). The term Scotia was increasingly used to describe the kingdom north of the Forth and Clyde, and eventually the entire area controlled by its kings was referred to as Scotland.
The long reign (900–942/3) of Causantín (Constantine II) is often regarded as the key to the formation of the Kingdom of Alba. He was later credited with bringing Scottish Christianity into conformity with the Catholic Church. After fighting many battles, his defeat at Brunanburh was followed by his retirement as a Culdee monk at St. Andrews. The period between the accession of his successor Máel Coluim I (Malcolm I) and Máel Coluim mac Cináeda (Malcolm II) was marked by good relations with the Wessex rulers of England, intense internal dynastic disunity and relatively successful expansionary policies. In 945, Máel Coluim I annexed Strathclyde, where the kings of Alba had probably exercised some authority since the later 9th century, as part of a deal with King Edmund of England, an event offset somewhat by loss of control in Moray. The reign of King Donnchad I (Duncan I) from 1034 was marred by failed military adventures, and he was defeated and killed by MacBeth, the Mormaer of Moray, who became king in 1040. MacBeth ruled for seventeen years before he was overthrown by Máel Coluim, the son of Donnchad, who some months later defeated MacBeth's step-son and successor Lulach to become King Máel Coluim III (Malcolm III).
It was Máel Coluim III who acquired the nickname "Canmore" ("Cenn Mór", "Great Chief"), which he passed to his successors, and who did most to create the Dunkeld dynasty that ruled Scotland for the following two centuries. Particularly important was his second marriage, to the Anglo-Hungarian princess Margaret. This marriage, and raids on northern England, prompted William the Conqueror to invade, and Máel Coluim submitted to his authority, opening up Scotland to later claims of sovereignty by English kings. When Malcolm died in 1093, his brother Domnall III (Donald III) succeeded him. However, William II of England backed Máel Coluim's son by his first marriage, Donnchad, as a pretender to the throne, and he seized power. His murder within a few months saw Domnall restored, with one of Máel Coluim's sons by his second marriage, Edmund, as his heir. The two ruled Scotland until two of Edmund's younger brothers returned from exile in England, again with English military backing. Victorious, Edgar, the oldest of the three, became king in 1097. Shortly afterwards Edgar and the King of Norway, Magnus Barefoot, concluded a treaty recognising Norwegian authority over the Western Isles. In practice Norse control of the Isles was loose, with local chiefs enjoying a high degree of independence. Edgar was succeeded by his brother Alexander, who reigned 1107–24.
When Alexander died in 1124, the crown passed to Margaret's fourth son David I, who had spent most of his life as a Norman French baron in England. His reign saw what has been characterised as a "Davidian Revolution", by which native institutions and personnel were replaced by English and French ones, underpinning the development of later Medieval Scotland. Members of the Anglo-Norman nobility took up places in the Scottish aristocracy and he introduced a system of feudal land tenure, which produced knight service, castles and an available body of heavily armed cavalry. He created an Anglo-Norman style of court, introduced the office of justiciar to oversee justice, and local offices of sheriffs to administer localities. He established the first royal burghs in Scotland, granting rights to particular settlements, which led to the development of the first true Scottish towns and helped facilitate economic development, as did the introduction of the first recorded Scottish coinage. He continued a process, begun by his mother and brothers, of establishing foundations that brought reform to Scottish monasticism on the model of Cluny, and he played a part in organising dioceses on lines closer to those in the rest of Western Europe.
These reforms were pursued under his successors and grandsons Malcolm IV of Scotland and William I, with the crown now passing down the main line of descent through primogeniture, leading to the first of a series of minorities. The benefits of greater authority were reaped by William's son Alexander II and his son Alexander III, who pursued a policy of peace with England to expand their authority in the Highlands and Islands. By the reign of Alexander III, the Scots were in a position to annex the remainder of the western seaboard, which they did following Haakon Haakonarson's ill-fated invasion and the stalemate of the Battle of Largs with the Treaty of Perth in 1266.
The death of King Alexander III in 1286, and the death of his granddaughter and heir Margaret, Maid of Norway in 1290, left 14 rivals for succession. To prevent civil war the Scottish magnates asked Edward I of England to arbitrate, for which he extracted legal recognition that the realm of Scotland was held as a feudal dependency of the throne of England before choosing John Balliol, the man with the strongest claim, who became king in 1292. Robert Bruce, 5th Lord of Annandale, the next strongest claimant, accepted this outcome with reluctance. Over the next few years Edward I used the concessions he had gained to systematically undermine both the authority of King John and the independence of Scotland. In 1295, John, on the urgings of his chief councillors, entered into an alliance with France, known as the Auld Alliance.
In 1296, Edward invaded Scotland, deposing King John. The following year William Wallace and Andrew de Moray raised forces to resist the occupation and under their joint leadership an English army was defeated at the Battle of Stirling Bridge. For a short time Wallace ruled Scotland in the name of John Balliol as Guardian of the realm. Edward came north in person and defeated Wallace at the Battle of Falkirk in 1298. Wallace escaped but probably resigned as Guardian of Scotland. In 1305, he fell into the hands of the English, who executed him for treason despite the fact that he owed no allegiance to England.
Rivals John Comyn and Robert the Bruce, grandson of the claimant, were appointed as joint guardians in his place. On 10 February 1306, Bruce participated in the murder of Comyn, at Greyfriars Kirk in Dumfries. Less than seven weeks later, on 25 March, Bruce was crowned as King. However, Edward's forces overran the country after defeating Bruce's small army at the Battle of Methven. Despite the excommunication of Bruce and his followers by Pope Clement V, his support slowly strengthened; and by 1314 with the help of leading nobles such as Sir James Douglas and Thomas Randolph only the castles at Bothwell and Stirling remained under English control. Edward I had died in 1307. His heir Edward II moved an army north to break the siege of Stirling Castle and reassert control. Robert defeated that army at the Battle of Bannockburn in 1314, securing "de facto" independence. In 1320, the Declaration of Arbroath, a remonstrance to the Pope from the nobles of Scotland, helped convince Pope John XXII to overturn the earlier excommunication and nullify the various acts of submission by Scottish kings to English ones so that Scotland's sovereignty could be recognised by the major European dynasties. The Declaration has also been seen as one of the most important documents in the development of a Scottish national identity.
In 1326, what may have been the first full Parliament of Scotland met. The parliament had evolved from an earlier council of nobility and clergy, the "colloquium", constituted around 1235, but perhaps in 1326 representatives of the burghs – the burgh commissioners – joined them to form the Three Estates. In 1328, Edward III signed the Treaty of Edinburgh–Northampton acknowledging Scottish independence under the rule of Robert the Bruce. However, four years after Robert's death in 1329, England once more invaded on the pretext of restoring Edward Balliol, son of John Balliol, to the Scottish throne, thus starting the Second War of Independence. Despite victories at Dupplin Moor and Halidon Hill, successive attempts to secure Balliol on the throne failed in the face of tough Scottish resistance led by Sir Andrew Murray, the son of Wallace's comrade in arms. Edward III lost interest in the fate of his protégé after the outbreak of the Hundred Years' War with France. In 1341, David II, King Robert's son and heir, was able to return from temporary exile in France. Balliol finally resigned his claim to the throne to Edward in 1356, before retiring to Yorkshire, where he died in 1364.
After David II's death, Robert II, the first of the Stewart kings, came to the throne in 1371. He was followed in 1390 by his ailing son John, who took the regnal name Robert III. During Robert III's reign (1390–1406), actual power rested largely in the hands of his brother, Robert Stewart, Duke of Albany. After the suspicious death (possibly on the orders of the Duke of Albany) of his elder son, David, Duke of Rothesay in 1402, Robert, fearful for the safety of his younger son, the future James I, sent him to France in 1406. However, the English captured him en route and he spent the next 18 years as a prisoner held for ransom. As a result, after the death of Robert III, regents ruled Scotland: first, the Duke of Albany; and later his son Murdoch. When Scotland finally paid the ransom in 1424, James, aged 32, returned with his English bride determined to assert his authority. Several of the Albany family were executed; but he succeeded in centralising control in the hands of the crown, at the cost of increasing unpopularity, and was assassinated in 1437. His son James II (reigned 1437–1460), when he came of age in 1449, continued his father's policy of weakening the great noble families, most notably taking on the powerful Black Douglas family that had come to prominence at the time of the Bruce.
In 1468, the last significant acquisition of Scottish territory occurred when James III was engaged to Margaret of Denmark, receiving the Orkney Islands and the Shetland Islands in payment of her dowry. Berwick upon Tweed was captured by England in 1482. With the death of James III in 1488 at the Battle of Sauchieburn, his successor James IV successfully ended the quasi-independent rule of the Lord of the Isles, bringing the Western Isles under effective Royal control for the first time. In 1503, he married Margaret Tudor, daughter of Henry VII of England, thus laying the foundation for the 17th-century Union of the Crowns.
Scotland advanced markedly in educational terms during the 15th century with the founding of the University of St Andrews in 1413, the University of Glasgow in 1450 and the University of Aberdeen in 1495, and with the passing of the Education Act 1496, which decreed that all sons of barons and freeholders of substance should attend grammar schools. James IV's reign is often considered to have seen a flowering of Scottish culture under the influence of the European Renaissance.
In 1512, the Auld Alliance was renewed and under its terms, when the French were attacked by the English under Henry VIII, James IV invaded England in support. The invasion was stopped decisively at the Battle of Flodden Field during which the King, many of his nobles, and a large number of ordinary troops were killed, commemorated by the song "Flowers of the Forest". Once again Scotland's government lay in the hands of regents in the name of the infant James V.
James V finally managed to escape from the custody of the regents in 1528. He continued his father's policy of subduing the rebellious Highlands, Western and Northern isles and the troublesome borders. He also continued the French alliance, marrying first the French noblewoman Madeleine of Valois and then after her death Marie of Guise. James V's domestic and foreign policy successes were overshadowed by another disastrous campaign against England that led to defeat at the Battle of Solway Moss (1542). James died a short time later, a demise blamed by contemporaries on "a broken heart". The day before his death, he was brought news of the birth of an heir: a daughter, who would become Mary, Queen of Scots.
Once again, Scotland was in the hands of a regent. Within two years, the Rough Wooing began, Henry VIII's military attempt to force a marriage between Mary and his son, Edward. This took the form of border skirmishing and several English campaigns into Scotland. In 1547, after the death of Henry VIII, forces under the English regent Edward Seymour, 1st Duke of Somerset were victorious at the Battle of Pinkie Cleugh, the climax of the Rough Wooing, and followed up by the occupation of Haddington. Mary was then sent to France at the age of five, as the intended bride of the heir to the French throne. Her mother, Marie de Guise, stayed in Scotland to look after the interests of Mary – and of France – although the Earl of Arran acted officially as regent. Guise responded by calling on French troops, who helped stiffen resistance to the English occupation. By 1550, after a change of regent in England, the English withdrew from Scotland completely.
From 1554, Marie de Guise took over the regency and continued to advance French interests in Scotland. French cultural influence resulted in a large influx of French vocabulary into Scots. But anti-French sentiment also grew, particularly among Protestants, who saw the English as their natural allies. In 1560, Marie de Guise died, and soon after the Auld Alliance also ended, with the signing of the Treaty of Edinburgh, which provided for the removal of French and English troops from Scotland. The Scottish Reformation took place only days later when the Scottish Parliament abolished the Roman Catholic religion and outlawed the Mass.
Meanwhile, Queen Mary had been raised as a Catholic in France, and married to the Dauphin, who became king as Francis II in 1559, making her queen consort of France. When Francis died in 1560, Mary, now 19, returned to Scotland to take up the government. Despite her private religion, she did not attempt to re-impose Catholicism on her largely Protestant subjects, thus angering the chief Catholic nobles. Her six-year personal reign was marred by a series of crises, largely caused by the intrigues and rivalries of the leading nobles. The murder of her secretary, David Riccio, was followed by that of her unpopular second husband Lord Darnley, and her abduction by and marriage to the Earl of Bothwell, who was implicated in Darnley's murder. Mary and Bothwell confronted the lords at Carberry Hill and after their forces melted away, he fled and she was captured by Bothwell's rivals. Mary was imprisoned in Loch Leven Castle, and in July 1567, was forced to abdicate in favour of her infant son James VI. Mary eventually escaped and attempted to regain the throne by force. After her defeat at the Battle of Langside in 1568, she took refuge in England, leaving her young son in the hands of regents. In Scotland the regents fought a civil war on behalf of James VI against his mother's supporters. In England, Mary became a focal point for Catholic conspirators and was eventually tried for treason and executed on the orders of her kinswoman Elizabeth I.
During the 16th century, Scotland underwent a Protestant Reformation that created a predominantly Calvinist national Kirk, which became Presbyterian in outlook and severely reduced the powers of bishops. In the earlier part of the century, the teachings of first Martin Luther and then John Calvin began to influence Scotland, particularly through Scottish scholars, often training for the priesthood, who had visited Continental universities. The Lutheran preacher Patrick Hamilton was executed for heresy in St. Andrews in 1528. The execution of others, especially the Zwingli-influenced George Wishart, who was burnt at the stake on the orders of Cardinal Beaton in 1546, angered Protestants. Wishart's supporters assassinated Beaton soon after and seized St. Andrews Castle, which they held for a year before they were defeated with the help of French forces. The survivors, including chaplain John Knox, were condemned to be galley slaves in France, stoking resentment of the French and creating martyrs for the Protestant cause.
Limited toleration, and the influence of exiled Scots and Protestants in other countries, led to the expansion of Protestantism, with a group of lairds declaring themselves Lords of the Congregation in 1557 and representing their interests politically. The collapse of the French alliance and English intervention in 1560 meant that a relatively small, but highly influential, group of Protestants were in a position to impose reform on the Scottish church. A confession of faith, rejecting papal jurisdiction and the mass, was adopted by Parliament in 1560, while the young Mary, Queen of Scots, was still in France.
Knox, having escaped the galleys and spent time in Geneva as a follower of Calvin, emerged as the most significant figure of the period. The Calvinism of the reformers led by Knox resulted in a settlement that adopted a Presbyterian system and rejected most of the elaborate trappings of the medieval church. The reformed Kirk gave considerable power to local lairds, who often had control over the appointment of the clergy. There were widespread, but generally orderly outbreaks of iconoclasm. At this point the majority of the population was probably still Catholic in persuasion and the Kirk found it difficult to penetrate the Highlands and Islands, but began a gradual process of conversion and consolidation that, compared with reformations elsewhere, was conducted with relatively little persecution.
Women shared in the religiosity of the day. The egalitarian and emotional aspects of Calvinism appealed to men and women alike. Historian Alasdair Raffe finds that, "Men and women were thought equally likely to be among the elect...Godly men valued the prayers and conversation of their female co-religionists, and this reciprocity made for loving marriages and close friendships between men and women." Furthermore, the pious bonds between ministers and their women parishioners grew increasingly close. For the first time, laywomen gained numerous new religious roles, and took a prominent place in prayer societies.
In 1603, James VI, King of Scots, inherited the throne of the Kingdom of England and became King James I of England, leaving Edinburgh for London and uniting England and Scotland under one monarch. The Union was a personal or dynastic union, with the Crowns remaining both distinct and separate—despite James's best efforts to create a new "imperial" throne of "Great Britain". The acquisition of the Irish crown along with the English facilitated a process of settlement by Scots in Ulster, historically the most troublesome area of the kingdom, with perhaps 50,000 Scots settling in the province by the mid-17th century. James adopted a different approach to impose his authority in the western Highlands and Islands. The additional military resource that was now available, particularly the English navy, resulted in the enactment of the Statutes of Iona, which compelled integration of Hebridean clan leaders with the rest of Scottish society. Attempts to found a Scottish colony in North America, in Nova Scotia, were largely unsuccessful for want of funds and willing colonists.
Although James had tried to get the Scottish Church to accept some of the High Church Anglicanism of his southern kingdom, he met with limited success. His son and successor, Charles I, took matters further, introducing an English-style Prayer Book into the Scottish church in 1637. This resulted in anger and widespread rioting. (The story goes that it was initiated by a certain Jenny Geddes who threw a stool in St Giles Cathedral.) Representatives of various sections of Scottish society drew up the National Covenant in 1638, objecting to the King's liturgical innovations. In November of the same year matters were taken even further, when at a meeting of the General Assembly in Glasgow the Scottish bishops were formally expelled from the Church, which was then established on a full Presbyterian basis. Charles gathered a military force; but as neither side wished to push the matter to a full military conflict, a temporary settlement was concluded in the Pacification of Berwick. Matters remained unresolved until 1640 when, in a renewal of hostilities, Charles's northern forces were defeated by the Scots at the Battle of Newburn, to the west of Newcastle. During the course of these Bishops' Wars Charles tried to raise an army of Irish Catholics, but was forced to back down after a storm of protest in Scotland and England. The backlash from this venture provoked a rebellion in Ireland and Charles was forced to appeal to the English Parliament for funds. Parliament's demands for reform in England eventually resulted in the English Civil War. This series of civil wars that engulfed England, Ireland and Scotland in the 1640s and 1650s is known to modern historians as the Wars of the Three Kingdoms. The Covenanters, meanwhile, were left governing Scotland, where they raised a large army of their own and tried to impose their religious settlement on Episcopalians and Roman Catholics in the north of the country.
In England, meanwhile, Charles's religious policies had caused similar resentment, and he ruled without recourse to parliament from 1629.
As the civil wars developed, the English Parliamentarians appealed to the Scots Covenanters for military aid against the King. A Solemn League and Covenant was entered into, guaranteeing the Scottish Church settlement and promising further reform in England. Scottish troops played a major part in the defeat of Charles I, notably at the battle of Marston Moor. An army under the Earl of Leven occupied the North of England for some time.
However, not all Scots supported the Covenanters' taking up arms against their King. In 1644, James Graham, 1st Marquess of Montrose, attempted to raise the Highlands for the King. Few Scots would follow him, but, aided by 1,000 Irish, Highland and Islander troops sent by the Irish Confederates under Alasdair MacDonald (MacColla), and possessed of an instinctive genius for mobile warfare, he was stunningly successful. A Scottish Civil War began in September 1644 with his victory at the battle of Tippermuir. After a series of victories over poorly trained Covenanter militias, the Lowlands were at his mercy. However, at this high point, his army was reduced in size, as MacColla and the Highlanders preferred to continue the war in the north against the Campbells. Shortly after, what was left of his force was defeated at the Battle of Philiphaugh. Escaping to the north, Montrose attempted to continue the struggle with fresh troops; but in July 1646 his army was disbanded after the King surrendered to the Scots army at Newark, and the civil war came to an end.
The following year Charles, while he was being held captive in Carisbrooke Castle, entered into an agreement with moderate Scots Presbyterians. In this secret 'Engagement', the Scots promised military aid in return for the King's agreement to implement Presbyterianism in England on a three-year trial basis. The Duke of Hamilton led an invasion of England to free the King, but he was defeated by Oliver Cromwell in August 1648 at the Battle of Preston.
The execution of Charles I in 1649 was carried out in the face of objections by the Covenanter government and his son was immediately proclaimed as King Charles II in Edinburgh. Oliver Cromwell led an invasion of Scotland in 1650, and defeated the Scottish army at Dunbar and then defeated a Scottish invasion of England at Worcester on 3 September 1651 (the anniversary of his victory at Dunbar). Cromwell emerged as the leading figure in the English government and Scotland was occupied by an English force under George Monck. The country was incorporated into the Puritan-governed Commonwealth and lost its independent church government, parliament and legal system, but gained access to English markets. Various attempts were made to legitimise the union, calling representatives from the Scottish burghs and shires to negotiations and to various English parliaments, where they were always under-represented and had little opportunity for dissent. However, final ratification was delayed by Cromwell's problems with his various parliaments and the union did not become the subject of an act until 1657 (see Tender of Union).
After the death of Cromwell and the regime's collapse, Charles II was restored in 1660 and Scotland again became an independent kingdom. Scotland regained its system of law, parliament and kirk, but also the Lords of the Articles (by which the crown managed parliament), bishops and a king who did not visit the country. He ruled largely without reference to Parliament, through a series of commissioners. These began with John, Earl of Middleton and ended with the king's brother and heir, James, Duke of York (known in Scotland as the Duke of Albany). The English Navigation Acts prevented the Scots engaging in what would have been lucrative trading with England's colonies. The restoration of episcopacy was a source of trouble, particularly in the south-west of the country, an area with strong Presbyterian sympathies. Abandoning the official church, many of the inhabitants began to attend illegal field assemblies, known as conventicles. Official attempts to suppress these led to a rising in 1679, defeated by James, Duke of Monmouth, the King's illegitimate son, at the Battle of Bothwell Bridge. In the early 1680s a more intense phase of persecution began, later to be called "the Killing Time". When Charles died in 1685 and his brother, a Roman Catholic, succeeded him as James VII of Scotland (and II of England), matters came to a head.
James put Catholics in key positions in the government and attendance at conventicles was made punishable by death. He disregarded parliament, purged the Council and forced through religious toleration for Roman Catholics, alienating his Protestant subjects. It was believed that the king would be succeeded by his daughter Mary, a Protestant and the wife of William of Orange, Stadtholder of the Netherlands, but when in 1688, James produced a male heir, James Francis Edward Stuart, it was clear that his policies would outlive him. An invitation by seven leading Englishmen led William to land in England with 40,000 men, and James fled, leading to the almost bloodless "Glorious Revolution". The Estates issued a "Claim of Right" that suggested that James had forfeited the crown by his actions (in contrast to England, which relied on the legal fiction of an abdication) and offered it to William and Mary, which William accepted, along with limitations on royal power. The final settlement restored Presbyterianism and abolished the bishops who had generally supported James. However, William, who was more tolerant than the Kirk tended to be, passed acts restoring the Episcopalian clergy excluded after the Revolution.
Although William's supporters dominated the government, there remained a significant following for James, particularly in the Highlands. His cause, which became known as Jacobitism, from "Jacobus", the Latin form of James, led to a series of risings. An initial Jacobite military attempt was led by John Graham, Viscount Dundee. His forces, almost all Highlanders, defeated William's forces at the Battle of Killiecrankie in 1689, but they took heavy losses and Dundee was slain in the fighting. Without his leadership the Jacobite army was soon defeated at the Battle of Dunkeld. In the aftermath of the Jacobite defeat, on 13 February 1692, in an incident since known as the Massacre of Glencoe, 38 members of the Clan MacDonald of Glencoe were killed by members of the Earl of Argyll's Regiment of Foot, on the grounds that they had not been prompt in pledging allegiance to the new monarchs.
The closing decade of the 17th century saw the generally favourable economic conditions that had dominated since the Restoration come to an end. There was a slump in trade with the Baltic and France from 1689 to 1691, caused by French protectionism and changes in the Scottish cattle trade, followed by four years of failed harvests (1695, 1696 and 1698–9), an era known as the "seven ill years". The result was severe famine and depopulation, particularly in the north. The Parliament of Scotland of 1695 enacted proposals to help the desperate economic situation, including setting up the Bank of Scotland. The "Company of Scotland Trading to Africa and the Indies" received a charter to raise capital through public subscription.
With the dream of building a lucrative overseas colony for Scotland, the Company of Scotland invested in the Darien scheme, an ambitious plan devised by William Paterson to establish a colony on the Isthmus of Panama in the hope of establishing trade with the Far East. The Darien scheme won widespread support in Scotland, as the landed gentry and the merchant class were in agreement in seeing overseas trade and colonialism as routes to improve Scotland's economy. Since the capital resources of the Edinburgh merchants and landholder elite were insufficient, the company appealed to middling social ranks, who responded with patriotic fervour to the call for money; the lower classes volunteered as colonists. But the English government opposed the idea: involved in the War of the Grand Alliance from 1689 to 1697 against France, it did not want to offend Spain, which claimed the territory as part of New Granada, and the English investors withdrew. The Company then returned to Edinburgh, where it raised 400,000 pounds in a few weeks. Three small fleets with a total of 3,000 men eventually set out for Panama in 1698. The exercise proved a disaster. Poorly equipped; beset by incessant rain; under attack by the Spanish from nearby Cartagena; and refused aid by the English in the West Indies, the colonists abandoned their project in 1700. Only 1,000 survived and only one ship managed to return to Scotland.
Scotland was a poor rural, agricultural society with a population of 1.3 million in 1755. Although Scotland lost home rule, the Union allowed it to break free of a stultifying system and opened the way for the Scottish enlightenment as well as a great expansion of trade and increase in opportunity and wealth. Edinburgh economist Adam Smith concluded in 1776 that "By the union with England, the middling and inferior ranks of people in Scotland gained a complete deliverance from the power of an aristocracy which had always before oppressed them." Historian Jonathan Israel holds that the Union "proved a decisive catalyst politically and economically," by allowing ambitious Scots entry on an equal basis to a rich expanding empire and its increasing trade.
Scotland's transformation into a rich leader of modern industry came suddenly and unexpectedly in the next 150 years, following its union with England in 1707 and its integration with the advanced English and imperial economies. The transformation was led by two cities that grew rapidly after 1770. Glasgow, on the river Clyde, was the base for the tobacco and sugar trade with an emerging textile industry. Edinburgh was the administrative and intellectual centre where the Scottish Enlightenment was chiefly based.
By the start of the 18th century, a political union between Scotland and England became politically and economically attractive, promising to open up the much larger markets of England, as well as those of the growing English Empire. With economic stagnation since the late 17th century, which was particularly acute in 1704, the country depended more and more heavily on sales of cattle and linen to England, which used this dependence to create pressure for a union. The Scottish parliament voted on 6 January 1707, by 110 to 69, to adopt the Treaty of Union. It was also a full economic union; indeed, most of its 25 articles dealt with economic arrangements for the new state known as "Great Britain". It added 45 Scots to the 513 members of the House of Commons and 16 Scots to the 190 members of the House of Lords, and ended the Scottish parliament. It also replaced the Scottish systems of currency, taxation and laws regulating trade with laws made in London. Scottish law remained separate from English law, and the religious system was not changed. England had about five times the population of Scotland at the time, and about 36 times as much wealth.
Jacobitism was revived by the unpopularity of the union. In 1708, James Francis Edward Stuart, the son of James VII, who became known as "The Old Pretender", attempted an invasion with a French fleet carrying 6,000 men, but the Royal Navy prevented it from landing troops. A more serious attempt occurred in 1715, soon after the death of Anne and the accession of the first Hanoverian king, the eldest son of Sophia of Hanover, as George I of Great Britain. This rising (known as "The 'Fifteen") envisaged simultaneous uprisings in Wales, Devon, and Scotland. However, government arrests forestalled the southern ventures. In Scotland, John Erskine, Earl of Mar, nicknamed "Bobbin' John", raised the Jacobite clans but proved to be an indecisive leader and an incompetent soldier. Mar captured Perth, but let a smaller government force under the Duke of Argyll hold the Stirling plain. Part of Mar's army joined up with risings in northern England and southern Scotland, and the Jacobites fought their way into England before being defeated at the Battle of Preston, surrendering on 14 November 1715. The day before, Mar had failed to defeat Argyll at the Battle of Sheriffmuir. At this point, James belatedly landed in Scotland, but was advised that the cause was hopeless. He fled back to France. An attempted Jacobite invasion with Spanish assistance in 1719 met with little support from the clans and ended in defeat at the Battle of Glen Shiel.
In 1745, the Jacobite rising known as "The 'Forty-Five" began. Charles Edward Stuart, son of the "Old Pretender", often referred to as "Bonnie Prince Charlie" or the "Young Pretender", landed on the island of Eriskay in the Outer Hebrides. Several clans unenthusiastically joined him. At the outset he was successful, taking Edinburgh and then defeating the only government army in Scotland at the Battle of Prestonpans. The Jacobite army marched into England, took Carlisle and advanced as far south as Derby. However, it became increasingly evident that England would not support a Roman Catholic Stuart monarch. The Jacobite leadership had a crisis of confidence and they retreated to Scotland as two English armies closed in and Hanoverian troops began to return from the continent. Charles' position in Scotland began to deteriorate as the Whig supporters rallied and regained control of Edinburgh. After an unsuccessful attempt on Stirling, he retreated north towards Inverness. He was pursued by the Duke of Cumberland and gave battle with an exhausted army at Culloden on 16 April 1746, where the Jacobite cause was crushed. Charles hid in Scotland with the aid of Highlanders until September 1746, when he escaped back to France. There were bloody reprisals against his supporters and foreign powers abandoned the Jacobite cause, with the court in exile forced to leave France. The Old Pretender died in 1760 and the Young Pretender, without legitimate issue, in 1788. When his brother, Henry, Cardinal of York, died in 1807, the Jacobite cause was at an end.
With the advent of the Union and the demise of Jacobitism, access to London and the Empire opened up very attractive career opportunities for ambitious middle-class and upper-class Scots, who seized the chance to become entrepreneurs, intellectuals, and soldiers. Thousands of Scots, mainly Lowlanders, took up positions of power in politics, civil service, the army and navy, trade, economics, colonial enterprises and other areas across the nascent British Empire. Historian Neil Davidson notes that "after 1746 there was an entirely new level of participation by Scots in political life, particularly outside Scotland". Davidson also states that "far from being ‘peripheral’ to the British economy, Scotland – or more precisely, the Lowlands – lay at its core". British officials especially appreciated Scottish soldiers. As the Secretary of War told Parliament in 1751, "I am for having always in our army as many Scottish soldiers as possible...because they are generally more hardy and less mutinous". The national policy of aggressively recruiting Scots for senior civilian positions stirred up resentment among Englishmen, ranging from violent diatribes by John Wilkes, to vulgar jokes and obscene cartoons in the popular press, and the haughty ridicule by intellectuals such as Samuel Johnson that was much resented by Scots. In his great "Dictionary" Johnson defined oats as, "a grain, which in England is generally given to horses, but in Scotland supports the people." To which Lord Elibank retorted, "Very true, and where will you find such men and such horses?"
Scottish politics in the late 18th century was dominated by the Whigs, with the benign management of Archibald Campbell, 3rd Duke of Argyll (1682–1761), who was in effect the "viceroy of Scotland" from the 1720s until his death in 1761. Scotland generally supported the king with enthusiasm during the American Revolution. Henry Dundas (1742–1811) dominated political affairs in the latter part of the century. Dundas put a brake on intellectual and social change through his ruthless manipulation of patronage in alliance with Prime Minister William Pitt the Younger, until he lost power in 1806.
The main unit of local government was the parish, and since it was also part of the church, the elders imposed public humiliation for what the locals considered immoral behaviour, including fornication, drunkenness, wife beating, cursing and Sabbath breaking. The main focus was on the poor; the landlords ("lairds") and gentry, and their servants, were not subject to the parish's control. The policing system weakened after 1800 and disappeared in most places by the 1850s.
The clan system of the Highlands and Islands had been seen as a challenge to the rulers of Scotland from before the 17th century. James VI's various measures to exert control included the Statutes of Iona, an attempt to force clan leaders to become integrated into the rest of Scottish society. This started a slow process of change which, by the second half of the 18th century, saw clan chiefs start to think of themselves as commercial landlords, rather than as patriarchs of their people. To their tenants, initially this meant that monetary rents replaced those paid in kind. Later, rent increases became common. In the 1710s the Dukes of Argyll started putting leases of some of their land up for auction; by 1737 this was done across the Argyll property. This commercial attitude replaced the traditional principle of clanship, under which clan chiefs were obliged to provide land for clan members. The shift of this attitude slowly spread through the Highland elite (but not among their tenants). As clan chiefs became more integrated into Scottish and British society, many of them built up large debts. It became easier to borrow against the security of a Highland estate from the 1770s onwards. As the lenders became predominantly people and organisations outside the Highlands, there was a greater willingness to foreclose if the borrower defaulted. Combined with an astounding level of financial incompetence among the Highland elite, this ultimately forced the sale of the estates of many Highland landed families over the period 1770–1850. (The greatest number of sales of whole estates was toward the end of this period.)
The Jacobite rebellion of 1745 gave a final period of importance to the ability of Highland clans to raise bodies of fighting men at short notice. With the defeat at Culloden, any enthusiasm for continued warfare disappeared and clan leaders returned to their transition to being commercial landlords. This was arguably accelerated by some of the punitive laws enacted after the rebellion. These included the Heritable Jurisdictions Act of 1746, which removed judicial roles from clan chiefs and gave them to the Scottish law courts. T. M. Devine warns against seeing a clear cause and effect relationship between the post-Culloden legislation and the collapse of clanship. He questions the basic effectiveness of the measures, quoting W. A. Speck who ascribes the pacification of the area more to "a disinclination to rebel than to the government's repressive measures." Devine points out that social change in Gaeldom did not pick up until the 1760s and 1770s, as this coincided with the increased market pressures from the industrialising and urbanising Lowlands.
Forty-one properties belonging to rebels were forfeited to the Crown in the aftermath of the '45. The vast majority of these were sold by auction to pay creditors; thirteen were retained and managed on behalf of the government between 1752 and 1784.
The changes by the Dukes of Argyll in the 1730s displaced many of the tacksmen in the area. From the 1770s onwards, this became a matter of policy throughout the Highlands. The restriction on subletting by tacksmen meant that landlords received all the rent paid by the actual farming tenants – thereby increasing their income. By the early part of the 19th century, the tacksman had become a rare component of Highland society. T. M. Devine describes "the displacement of this class as one of the clearest demonstrations of the death of the old Gaelic society." Many emigrated, leading parties of their tenants to North America. These tenants were from the better off part of Highland peasant society, and, together with the tacksmen, they took their capital and entrepreneurial energy to the New World, unwilling to participate in economic changes imposed by their landlords which often involved a loss of status for the tenant.
Agricultural improvement was introduced across the Highlands over the relatively short period of 1760–1850. The evictions involved in this became known as the Highland clearances. There was regional variation. In the east and south of the Highlands, the old townships, which were farmed under the run rig system, were replaced by larger enclosed farms, with fewer people holding leases and proportionately more of the population working as employees on these larger farms. (This was broadly similar to the situation in the Lowlands.) In the north and west, including the Hebrides, as land was taken out of run rig, crofting communities were established. Much of this change involved establishing large pastoral sheep farms, with the old displaced tenants moving to new crofts in coastal areas or on poor quality land. Sheep farming was increasingly profitable at the end of the 18th century, so it could pay substantially higher rents than the previous tenants. Particularly in the Hebrides, some crofting communities were established to work in the kelp industry. Others were engaged in fishing. Croft sizes were kept small, so that the occupiers were forced to seek employment to supplement what they could grow. This increased the number of seasonal migrant workers travelling to the Lowlands. The resulting connection with the Lowlands was highly influential on all aspects of Highland life, touching on income levels, social attitudes and language. Migrant working gave an advantage in speaking English, which came to be considered "the language of work".
In 1846 the Highland potato famine struck the crofting communities of the North and West Highlands. By 1850 the charitable relief effort was wound up, despite the continuing crop failure, and landlords, charities and the government resorted to encouraging emigration. The overall result was that almost 11,000 people were provided with "assisted passages" by their landlords between 1846 and 1856, with the greatest number travelling in 1851. A further 5,000 emigrated to Australia through the Highland and Island Emigration Society. To this should be added an unknown but significant number who paid their own fares to emigrate, and a further unknown number assisted by the Colonial Land and Emigration Commission. This was out of a famine-affected population of about 200,000 people. Many of those who remained became even more involved in temporary migration for work in the Lowlands, both out of necessity during the famine and because they had become accustomed to working away by the time the famine ceased. Much longer periods were spent out of the Highlands – often for much of the year or more. One illustration of this migrant working was the estimated 30,000 men and women from the far west of the Gaelic-speaking area who travelled to the east coast fishing ports for the herring fishing season – providing labour in an industry that grew by 60% between 1854 and 1884.
The clearances were followed by a period of even greater emigration from the Highlands, which continued (with a brief lull for the First World War) up to the start of the Great Depression.
Historian Jonathan Israel argues that by 1750 Scotland's major cities had created an intellectual infrastructure of mutually supporting institutions, such as universities, reading societies, libraries, periodicals, museums and masonic lodges. The Scottish network was "predominantly liberal Calvinist, Newtonian, and 'design' oriented in character which played a major role in the further development of the transatlantic Enlightenment". In France Voltaire said "we look to Scotland for all our ideas of civilization," and the Scots in turn paid close attention to French ideas. Historian Bruce Lenman says their "central achievement was a new capacity to recognize and interpret social patterns." The first major philosopher of the Scottish Enlightenment was Francis Hutcheson, who held the Chair of Philosophy at the University of Glasgow from 1729 to 1746. A moral philosopher who produced alternatives to the ideas of Thomas Hobbes, one of his major contributions to world thought was the utilitarian and consequentialist principle that virtue is that which provides, in his words, "the greatest happiness for the greatest numbers". Much of what is incorporated in the scientific method (the nature of knowledge, evidence, experience, and causation) and some modern attitudes towards the relationship between science and religion were developed by his protégés David Hume and Adam Smith. Hume became a major figure in the skeptical philosophical and empiricist traditions of philosophy. He and other Scottish Enlightenment thinkers developed what he called a 'science of man', which was expressed historically in works by authors including James Burnett, Adam Ferguson, John Millar and William Robertson, all of whom merged a scientific study of how humans behave in ancient and primitive cultures with a strong awareness of the determining forces of modernity.
Modern sociology largely originated from this movement. Hume's philosophical concepts directly influenced James Madison (and thus the US Constitution) and, when popularised by Dugald Stewart, became the basis of classical liberalism. Adam Smith published "The Wealth of Nations", often considered the first work on modern economics. It had an immediate impact on British economic policy and in the 21st century still framed discussions on globalisation and tariffs. The focus of the Scottish Enlightenment ranged from intellectual and economic matters to the specifically scientific, as in the work of the physician and chemist William Cullen, the agriculturalist and economist James Anderson, chemist and physician Joseph Black, natural historian John Walker and James Hutton, the first modern geologist.
With tariffs with England now abolished, the potential for trade for Scottish merchants was considerable. However, Scotland in 1750 was still a poor rural, agricultural society with a population of 1.3 million. Some progress was visible: agriculture in the Lowlands was steadily upgraded after 1700 and standards remained high. There were the sales of linen and cattle to England, the cash flows from military service, and the tobacco trade that was dominated by Glasgow Tobacco Lords after 1740. Merchants who profited from the American trade began investing in leather, textiles, iron, coal, sugar, rope, sailcloth, glassworks, breweries, and soapworks, setting the foundations for the city's emergence as a leading industrial centre after 1815. The tobacco trade collapsed during the American Revolution (1776–83), when its sources were cut off by the British blockade of American ports. However, trade with the West Indies began to make up for the loss of the tobacco business, reflecting the British demand for sugar and the demand in the West Indies for herring and linen goods.
Linen was Scotland's premier industry in the 18th century and formed the basis for the later cotton, jute, and woollen industries. Scottish industrial policy was made by the Board of Trustees for Fisheries and Manufactures in Scotland, which sought to build an economy complementary, not competitive, with England. Since England had woollens, this meant linen. Encouraged and subsidised by the Board of Trustees so it could compete with German products, merchant entrepreneurs became dominant in all stages of linen manufacturing and built up the market share of Scottish linens, especially in the American colonial market. The British Linen Company, established in 1746, was the largest firm in the Scottish linen industry in the 18th century, exporting linen to England and America. As a joint-stock company, it had the right to raise funds through the issue of promissory notes or bonds. With its bonds functioning as bank notes, the company gradually moved into the business of lending and discounting to other linen manufacturers, and in the early 1770s banking became its main activity. It joined the established Scottish banks such as the Bank of Scotland (Edinburgh, 1695) and the Royal Bank of Scotland (Edinburgh, 1727). Glasgow would soon follow and Scotland had a flourishing financial system by the end of the century. There were over 400 branches, amounting to one office per 7,000 people, double the level in England, where banks were also more heavily regulated. Historians have emphasised that the flexibility and dynamism of the Scottish banking system contributed significantly to the rapid development of the economy in the 19th century.
German sociologist Max Weber mentioned Scottish Presbyterianism in "The Protestant Ethic and the Spirit of Capitalism" (1905), and many scholars in recent decades have argued that the "this-worldly asceticism" of Calvinism was integral to Scotland's rapid economic modernisation.
In the 1690s the Presbyterian establishment purged the land of Episcopalians and heretics, and made blasphemy a capital crime. Thomas Aitkenhead, the son of an Edinburgh surgeon, aged 18, was indicted for blasphemy by order of the Privy Council for calling the New Testament "The History of the Imposter Christ"; he was hanged in 1696. This extremism led to a reaction known as the "Moderate" cause that ultimately prevailed and opened the way for liberal thinking in the cities.
The early 18th century saw the beginnings of a fragmentation of the Church of Scotland. These fractures were prompted by issues of government and patronage, but reflected a wider division between the hard-line Evangelicals and the theologically more tolerant Moderate Party. The battle was over fears of fanaticism by the former and the promotion of Enlightenment ideas by the latter. The Patronage Act of 1712 was a major blow to the evangelicals, for it meant that local landlords could choose the minister, not the members of the congregation. Schisms erupted as the evangelicals left the main body, starting in 1733 with the First Secession headed by figures including Ebenezer Erskine. The second schism, in 1761, led to the foundation of the independent Relief Church. These churches gained strength in the Evangelical Revival of the later 18th century. A key result was that the main Presbyterian church was left in the hands of the Moderate faction, which provided critical support for the Enlightenment in the cities.
Long after the triumph of the Church of Scotland in the Lowlands, Highlanders and Islanders clung to an old-fashioned Christianity infused with animistic folk beliefs and practices. The remoteness of the region and the lack of a Gaelic-speaking clergy undermined the missionary efforts of the established church. The later 18th century saw some success, owing to the efforts of the SSPCK missionaries and to the disruption of traditional society. Catholicism had been reduced to the fringes of the country, particularly the Gaelic-speaking areas of the Highlands and Islands. Conditions also grew worse for Catholics after the Jacobite rebellions and Catholicism was reduced to little more than a poorly run mission. Also important was Episcopalianism, which had retained supporters through the civil wars and changes of regime in the 17th century. Since most Episcopalians had given their support to the Jacobite rebellions in the early 18th century, they also suffered a decline in fortunes.
Although Scotland increasingly adopted the English language and wider cultural norms, its literature developed a distinct national identity and began to enjoy an international reputation. Allan Ramsay (1686–1758) laid the foundations of a reawakening of interest in older Scottish literature, as well as leading the trend for pastoral poetry, helping to develop the Habbie stanza as a poetic form. James Macpherson was the first Scottish poet to gain an international reputation. Claiming to have found poetry written by Ossian, he published translations that acquired international popularity, being proclaimed as a Celtic equivalent of the Classical epics. "Fingal", written in 1762, was speedily translated into many European languages, and its deep appreciation of natural beauty and the melancholy tenderness of its treatment of the ancient legend did more than any single work to bring about the Romantic movement in European, and especially in German, literature, influencing Herder and Goethe. Eventually it became clear that the poems were not direct translations from the Gaelic, but flowery adaptations made to suit the aesthetic expectations of his audience. Both of the major literary figures of the following century, Robert Burns and Walter Scott, would be highly influenced by the Ossian cycle. Burns, an Ayrshire poet and lyricist, is widely regarded as the national poet of Scotland and a major figure in the Romantic movement. As well as making original compositions, Burns also collected folk songs from across Scotland, often revising or adapting them. His poem (and song) "Auld Lang Syne" is often sung at Hogmanay (the last day of the year), and "Scots Wha Hae" served for a long time as an unofficial national anthem of the country.
A legacy of the Reformation in Scotland was the aim of having a school in every parish, which was underlined by an act of the Scottish parliament in 1696 (reinforced in 1801). In rural communities this obliged local landowners (heritors) to provide a schoolhouse and pay a schoolmaster, while ministers and local presbyteries oversaw the quality of the education. The headmaster or "dominie" was often university educated and enjoyed high local prestige. The kirk schools were active in the rural lowlands but played a minor role in the Highlands, the islands, and in the fast-growing industrial towns and cities. The schools taught in English, not in Gaelic, because that language was seen as a leftover of Catholicism and was not an expression of Scottish nationalism. In cities such as Glasgow the Catholics operated their own schools, which directed their youth into clerical and middle class occupations, as well as religious vocations.
A "democratic myth" emerged in the 19th century to the effect that many a "lad of pairts" had been able to rise up through the system to take high office and that literacy was much more widespread in Scotland than in neighbouring states, particularly England. Historical research has largely undermined the myth. Kirk schools were not free, attendance was not compulsory and they generally imparted only basic literacy such as the ability to read the Bible. Poor children, starting at age 7, were done by age 8 or 9; the majority were finished by age 11 or 12. The result was widespread basic reading ability; since there was an extra fee for writing, half the people never learned to write. Scots were not significantly better educated than the English and other contemporary nations. A few talented poor boys did go to university, but usually they were helped by aristocratic or gentry sponsors. Most of them became poorly paid teachers or ministers, and none became important figures in the Scottish Enlightenment or the Industrial Revolution.
By the 18th century there were five universities in Scotland (Edinburgh, Glasgow, St Andrews, and King's and Marischal Colleges in Aberdeen), compared with only two in England. Originally oriented to clerical and legal training, after the religious and political upheavals of the 17th century they recovered with a lecture-based curriculum that was able to embrace economics and science, offering a high-quality liberal education to the sons of the nobility and gentry. This helped the universities to become major centres of medical education and put Scotland at the forefront of Enlightenment thinking.
Scotland's transformation into a rich leader of modern industry came suddenly and unexpectedly. The population grew steadily in the 19th century, from 1,608,000 in the census of 1801 to 2,889,000 in 1851 and 4,472,000 in 1901. The economy, long based on agriculture, began to industrialise after 1790. At first the leading industry, based in the west, was the spinning and weaving of cotton. In 1861, the American Civil War suddenly cut off the supplies of raw cotton and the industry never recovered. Thanks to its many entrepreneurs and engineers, and its large stock of easily mined coal, Scotland became a world centre for engineering, shipbuilding, and locomotive construction, with steel replacing iron after 1870.
The Scottish Reform Act 1832 increased the number of Scottish MPs and significantly widened the franchise to include more of the middle classes. From this point until the end of the century, the Whigs and (after 1859) their successors, the Liberal Party, managed to gain a majority of the Westminster parliamentary seats for Scotland, although these were often outnumbered by the much larger number of English and Welsh Conservatives. The English-educated Scottish peer Lord Aberdeen (1784–1860) led a coalition government from 1852 to 1855, but in general very few Scots held office in the government. From the mid-century there were increasing calls for Home Rule for Scotland, and when the Conservative Lord Salisbury became prime minister in 1885 he responded to the pressure by reviving the post of Secretary of State for Scotland, which had been in abeyance since 1746. He appointed the Duke of Richmond, a wealthy landowner who was both Chancellor of Aberdeen University and Lord Lieutenant of Banff. Towards the end of the century, Prime Ministers of Scottish descent included the Tory, Peelite and Liberal William Gladstone, who held the office four times between 1868 and 1894. The first Scottish Liberal to become prime minister was the Earl of Rosebery (1894–1895), like Aberdeen before him a product of the English education system. In the later 19th century the issue of Irish Home Rule led to a split among the Liberals, with a minority breaking away to form the Liberal Unionists in 1886. The growing importance of the working classes was marked by Keir Hardie's success in the 1888 Mid Lanarkshire by-election, leading to the foundation of the Scottish Labour Party, which was absorbed into the Independent Labour Party in 1895, with Hardie as its first leader.
From about 1790 textiles became the most important industry in the west of Scotland, especially the spinning and weaving of cotton, which flourished until the American Civil War cut off the supplies of raw cotton in 1861. The industry never recovered, but by that time Scotland had developed heavy industries based on its coal and iron resources. The invention of the hot blast for smelting iron (1828) revolutionised the Scottish iron industry. As a result, Scotland became a centre for engineering, shipbuilding and the production of locomotives. Toward the end of the 19th century, steel production largely replaced iron production. Coal mining continued to grow into the 20th century, producing the fuel to heat homes and factories and to drive steam engines, locomotives and steamships. By 1914, there were 1,000,000 coal miners in Scotland. The stereotype emerged early on of Scottish colliers as brutish, non-religious and socially isolated serfs; that was an exaggeration, for their way of life resembled that of miners everywhere, with a strong emphasis on masculinity, egalitarianism, group solidarity, and support for radical labour movements.
Britain was the world leader in the construction of railways and in their use to expand trade and distribute coal. The first successful locomotive-powered line in Scotland, between Monkland and Kirkintilloch, opened in 1831. Not only was good passenger service established by the late 1840s, but an excellent network of freight lines reduced the cost of shipping coal and made products manufactured in Scotland competitive throughout Britain. For example, railways opened the London market to Scottish beef and milk, and they enabled the Aberdeen Angus to become a cattle breed of worldwide reputation. By 1900, Scotland had 3,500 miles of railway; their main economic contribution was moving supplies in and products out for heavy industry, especially coal mining.
Scotland was already one of the most urbanised societies in Europe by 1800. The industrial belt ran across the country from southwest to northeast; by 1900 the four industrialised counties of Lanarkshire, Renfrewshire, Dunbartonshire, and Ayrshire contained 44 per cent of the population. Glasgow became one of the largest cities in the world, and was known as "the Second City of the Empire" after London. Shipbuilding on Clydeside (the banks of the River Clyde through Glasgow and beyond) began with the opening of the Scott family's small shipyard at Greenock in 1712. After 1860, the Clydeside shipyards specialised in steamships made of iron (and, after 1870, of steel), which rapidly replaced the wooden sailing vessels of both the merchant fleets and the battle fleets of the world. Clydeside became the world's pre-eminent shipbuilding centre; "Clydebuilt" became an industry benchmark of quality, and the river's shipyards were given contracts for warships.
The industrial developments, while they brought work and wealth, were so rapid that housing, town planning, and provision for public health did not keep pace with them, and for a time living conditions in some of the towns and cities were notoriously bad, with overcrowding, high infant mortality, and growing rates of tuberculosis. The companies attracted rural workers, as well as immigrants from Catholic Ireland, with inexpensive company housing that was a dramatic move upward from the inner-city slums. This paternalistic policy led many owners to endorse government-sponsored housing programmes as well as self-help projects among the respectable working class.
While the Scottish Enlightenment is traditionally considered to have concluded toward the end of the 18th century, disproportionately large Scottish contributions to British science and letters continued for another 50 years or more, thanks to such figures as the physicists James Clerk Maxwell and Lord Kelvin and the engineers and inventors James Watt and William Murdoch, whose work was critical to the technological developments of the Industrial Revolution throughout Britain.
In literature the most successful figure of the mid-nineteenth century was Walter Scott, who began as a poet and also collected and published Scottish ballads. His first prose work, "Waverley" (1814), is often called the first historical novel, and it launched a highly successful career that probably did more than any other to define and popularise Scottish cultural identity. In the late 19th century, a number of Scottish-born authors achieved international reputations. Robert Louis Stevenson's work included the urban Gothic novella "Strange Case of Dr Jekyll and Mr Hyde" (1886), and he played a major part in developing the historical adventure in books like "Kidnapped" and "Treasure Island". Arthur Conan Doyle's Sherlock Holmes stories helped found the tradition of detective fiction. The "kailyard tradition" at the end of the century brought elements of fantasy and folklore back into fashion, as can be seen in the work of figures like J. M. Barrie, most famous for his creation of Peter Pan, and George MacDonald, whose works, including "Phantastes", played a major part in the creation of the fantasy genre.
Scotland also played a major part in the development of art and architecture. The Glasgow School, which developed in the late 19th century and flourished in the early 20th century, produced a distinctive blend of influences, including the Celtic Revival, the Arts and Crafts Movement, and Japonisme, which found favour throughout the modern art world of continental Europe and helped define the Art Nouveau style. Among the most prominent members were the loose collective of The Four: acclaimed architect Charles Rennie Mackintosh, his wife the painter and glass artist Margaret MacDonald, her sister the artist Frances, and her husband, the artist and teacher Herbert MacNair.
This period saw a process of rehabilitation for highland culture. Tartan had already been adopted for highland regiments in the British army, which poor highlanders joined in large numbers until the end of the Napoleonic Wars in 1815, but by the 19th century it had largely been abandoned by the ordinary people. In the 1820s, as part of the Romantic revival, tartan and the kilt were adopted by members of the social elite, not just in Scotland but across Europe, prompted by the popularity of Macpherson's Ossian cycle and then Walter Scott's Waverley novels. The world paid attention to their literary redefinition of Scottishness, as they forged an image largely based on characteristics in polar opposition to those associated with England and modernity. This new identity made it possible for Scottish culture to become integrated into a wider European and North American context, and into the tourist trade, but it also locked in a sense of "otherness" which Scotland began to shed only in the late 20th century. Scott's "staging" of the royal visit of King George IV to Scotland in 1822, and the king's wearing of tartan, resulted in a massive upsurge in demand for kilts and tartans that could not be met by the Scottish woollen industry. The designation of individual clan tartans was largely defined in this period and became a major symbol of Scottish identity. The fashion for all things Scottish was maintained by Queen Victoria, who helped secure the identity of Scotland as a tourist resort, with Balmoral Castle in Aberdeenshire becoming a major royal residence from 1852.
Despite these changes the highlands remained very poor and traditional, with few connections to the uplift of the Scottish Enlightenment and little role in the Industrial Revolution. A handful of powerful families, typified by the dukes of Argyll, Atholl, Buccleuch, and Sutherland, owned large amounts of land and controlled local political, legal and economic affairs. Particularly after the end of the boom created by the Revolutionary and Napoleonic Wars (1790–1815), these landlords needed cash to maintain their position in London society, and had less need of soldiers. They turned to money rents, displaced farmers to raise sheep, and downplayed the traditional patriarchal relationship that had historically sustained the clans. Potato blight reached the Highlands in 1846, where 150,000 people faced disaster because their food supply was largely potatoes (with a little herring, oatmeal and milk). They were rescued by an effective emergency relief system that stands in dramatic contrast to the failures of relief in Ireland. As the famine continued, landlords, charities and government agencies provided "assisted passages" for destitute tenants to emigrate to Canada and Australia; in excess of 16,000 people emigrated, with most travelling in 1851.
The advent of refrigeration and imports of lamb, mutton and wool from overseas brought a collapse of sheep prices in the 1870s and an abrupt halt to the previous sheep-farming boom. Land prices subsequently plummeted too, accelerating the process of the so-called "Balmoralisation" of Scotland, an era in the second half of the 19th century that saw an increase in tourism and the establishment of large estates dedicated to field sports like deer stalking and grouse shooting, especially in the Scottish Highlands. The process was named after Balmoral Estate, purchased by Queen Victoria in 1848, which fuelled the romanticisation of upland Scotland and initiated an influx of the newly wealthy acquiring similar estates in the following decades. By the late 19th century just 118 people owned half of Scotland, with nearly 60 per cent of the whole country being part of shooting estates. While their relative importance has somewhat declined owing to changing recreational interests over the 20th century, deer stalking and grouse shooting remain of prime importance on many private estates in Scotland.
The unequal concentration of land ownership remained an emotional subject and eventually became a cornerstone of liberal radicalism. The politically powerless poor crofters embraced the popularly oriented, fervently evangelical Presbyterian revival after 1800, and the breakaway "Free Church" after 1843. This evangelical movement was led by lay preachers who themselves came from the lower strata, and whose preaching was implicitly critical of the established order. It energised the crofters and separated them from the landlords, preparing them for their successful and violent challenge to the landlords in the 1880s through the Highland Land League. Violence began on the Isle of Skye when Highland landlords cleared their lands for sheep and deer parks, and was quieted when the government stepped in, passing the Crofters' Holdings (Scotland) Act, 1886, to reduce rents, guarantee fixity of tenure, and break up large estates to provide crofts for the homeless. In 1885, three Independent Crofter candidates were elected to Parliament, leading to explicit security for the Scottish smallholders, the legal right to bequeath tenancies to descendants, and the creation of a Crofting Commission. The Crofters as a political movement faded away by 1892, and the Liberal Party gained most of their votes.
The population of Scotland grew steadily in the 19th century, from 1,608,000 in the census of 1801 to 2,889,000 in 1851 and 4,472,000 in 1901. Even with the development of industry there were insufficient good jobs; as a result, during the period 1841–1931, about 2 million Scots emigrated to North America and Australia, and another 750,000 relocated to England. Scotland lost a much higher proportion of its population than England and Wales, perhaps as much as 30.2 per cent of its natural increase from the 1850s onwards. This not only limited Scotland's population growth but meant that almost every family lost members to emigration and, because the emigrants were disproportionately young males, skewed the sex and age ratios of the country.
Scots-born emigrants who played a leading role in the foundation and development of the United States included cleric and revolutionary John Witherspoon, sailor John Paul Jones, industrialist and philanthropist Andrew Carnegie, and scientist and inventor Alexander Graham Bell. In Canada they included soldier and governor of Quebec James Murray, Prime Minister John A. Macdonald, and politician and social reformer Tommy Douglas. For Australia they included soldier and governor Lachlan Macquarie, governor and scientist Thomas Brisbane, and Prime Minister Andrew Fisher. For New Zealand they included politician Peter Fraser and outlaw James Mckenzie. By the 21st century, there would be about as many Scottish Canadians and Scottish Americans as the 5 million people remaining in Scotland.
After years of struggle, in 1834 the Evangelicals gained control of the General Assembly and passed the Veto Act, which allowed congregations to reject unwanted "intrusive" presentations to livings by patrons. The following "Ten Years' Conflict" of legal and political wrangling ended in defeat for the non-intrusionists in the civil courts. The result was a schism from the church by some of the non-intrusionists, led by Dr Thomas Chalmers, known as the Great Disruption of 1843. Roughly a third of the clergy, mainly from the North and the Highlands, formed the separate Free Church of Scotland. The evangelical Free Churches, which were more accepting of Gaelic language and culture, grew rapidly in the Highlands and Islands, appealing much more strongly there than did the established church. Chalmers's ideas shaped the breakaway group. He stressed a social vision that revived and preserved Scotland's communal traditions at a time of strain on the social fabric of the country. Chalmers idealised small egalitarian, kirk-based, self-contained communities that recognised the individuality of their members and the need for co-operation. That vision also affected the mainstream Presbyterian churches, and by the 1870s it had been assimilated by the established Church of Scotland. Chalmers's ideals demonstrated that the church was concerned with the problems of urban society, and they represented a real attempt to overcome the social fragmentation that took place in industrial towns and cities.
In the late 19th century the major debates were between fundamentalist Calvinists and theological liberals, who rejected a literal interpretation of the Bible. This resulted in a further split in the Free Church as the rigid Calvinists broke away to form the Free Presbyterian Church in 1893. There were, however, also moves towards reunion, beginning with the unification of some secessionist churches into the United Secession Church in 1820, which united with the Relief Church in 1847 to form the United Presbyterian Church, which in turn joined with the Free Church in 1900 to form the United Free Church of Scotland. The removal of legislation on lay patronage allowed the majority of the Free Church to rejoin the Church of Scotland in 1929. The schisms left small denominations, including the Free Presbyterians and a remnant that had not merged in 1900 as the Free Church.
Catholic Emancipation in 1829 and the influx of large numbers of Irish immigrants, particularly after the famine years of the late 1840s, principally to the growing lowland centres like Glasgow, led to a transformation in the fortunes of Catholicism. In 1878, despite opposition, a Roman Catholic ecclesiastical hierarchy was restored to the country, and Catholicism became a significant denomination within Scotland. Episcopalianism also revived in the 19th century as the issue of succession receded, becoming established as the Episcopal Church in Scotland in 1804, as an autonomous organisation in communion with the Church of England. Baptist, Congregationalist and Methodist churches had appeared in Scotland in the 18th century, but did not begin significant growth until the 19th century, partly because more radical and evangelical traditions already existed within the Church of Scotland and the free churches. From 1879 they were joined by the evangelical revivalism of the Salvation Army, which attempted to make major inroads in the growing urban centres.
Industrialisation, urbanisation and the Disruption of 1843 all undermined the tradition of parish schools. From 1830 the state began to fund buildings with grants, then from 1846 it was funding schools by direct sponsorship, and in 1872 Scotland moved to a system like that in England of state-sponsored largely free schools, run by local school boards. Overall administration was in the hands of the Scotch (later Scottish) Education Department in London. Education was now compulsory from five to thirteen and many new board schools were built. Larger urban school boards established "higher grade" (secondary) schools as a cheaper alternative to the burgh schools. The Scottish Education Department introduced a Leaving Certificate Examination in 1888 to set national standards for secondary education and in 1890 school fees were abolished, creating a state-funded national system of free basic education and common examinations.
At the beginning of the 19th century, Scottish universities had no entrance exam; students typically entered at age 15 or 16, attended for as little as two years, chose which lectures to attend and could leave without qualifications. After two commissions of enquiry in 1826 and 1876 and reforming acts of parliament in 1858 and 1889, the curriculum and system of graduation were reformed to meet the needs of the emerging middle classes and the professions. Entrance examinations equivalent to the School Leaving Certificate were introduced, and the average age of entry rose to 17 or 18. Standard patterns of graduation in the arts curriculum offered three-year ordinary and four-year honours degrees, and separate science faculties were able to move away from the compulsory Latin, Greek and philosophy of the old MA curriculum. The historic University of Glasgow became a leader in British higher education by providing for the educational needs of youth from the urban and commercial classes, as well as the upper class. It prepared students for non-commercial careers in government, the law, medicine, education, and the ministry, and a smaller group for careers in science and engineering. St Andrews pioneered the admission of women to Scottish universities, creating the Lady Licentiate in Arts (LLA), which proved highly popular. From 1892 Scottish universities could admit and graduate women, and the numbers of women at Scottish universities steadily increased until the early 20th century.
The years before the First World War were the golden age of the inshore fisheries. Landings reached new heights, and Scottish catches dominated Europe's herring trade, accounting for a third of the British catch. The high productivity came about thanks to the transition to more productive steam-powered boats, while the rest of Europe's fishing fleets remained slower, still powered by sail.
In the Khaki Election of 1900, nationalist concern with the Boer War meant that the Conservatives and their Liberal Unionist allies gained a majority of Scottish seats for the first time, although the Liberals regained their ascendancy in the next election. The Unionists and Conservatives merged in 1912; usually known as the Conservatives in England and Wales, the merged party adopted the name Unionist Party in Scotland. Scots played a major part in the leadership of the UK political parties, producing a Conservative Prime Minister in Arthur Balfour (1902–05) and a Liberal one in Henry Campbell-Bannerman (1905–08). Various organisations, including the Independent Labour Party, joined to form the British Labour Party in 1906, with Keir Hardie as its first chairman.
Scotland played a major role in the British effort in the First World War, providing manpower, ships, machinery, food (particularly fish) and money, and engaging with the conflict with some enthusiasm. Scotland's industries were directed at the war effort. For example, the Singer sewing machine factory at Clydebank received over 5,000 government contracts and made 303 million artillery shells, shell components, fuses, and aeroplane parts, as well as grenades, rifle parts, and 361,000 horseshoes. Its labour force of 14,000 was about 70 per cent female at war's end.
With a population of 4.8 million in 1911, Scotland sent 690,000 men to the war, of whom 74,000 died in combat or from disease and 150,000 were seriously wounded. Scottish urban centres, with their poverty and unemployment, were favourite recruiting grounds of the regular British army, and Dundee, where the female-dominated jute industry limited male employment, had one of the highest proportions of reservists and serving soldiers of almost any British city. Concern for their families' standard of living made men hesitate to enlist; voluntary enlistment rates went up after the government guaranteed a weekly stipend for life to the survivors of men who were killed or disabled. After the introduction of conscription in January 1916 every part of the country was affected. Occasionally Scottish troops made up large proportions of the active combatants, and suffered corresponding losses, as at the Battle of Loos, where there were three full Scots divisions and other Scottish units. Thus, although Scots were only 10 per cent of the British population, they made up 15 per cent of the national armed forces and eventually accounted for 20 per cent of the dead. Some areas, like the thinly populated island of Lewis and Harris, suffered some of the highest proportional losses of any part of Britain. Clydeside shipyards and the nearby engineering shops were the major centres of war industry in Scotland. In Glasgow, radical agitation led to industrial and political unrest that continued after the war ended. After the end of the war, in June 1919, the German fleet interned at Scapa Flow was scuttled by its crews to avoid its ships being taken over by the victorious Allies.
A boom was created by the First World War, with the shipbuilding industry expanding by a third, but a serious depression had hit the economy by 1922. The most skilled craftsmen were especially hard hit, because there were few alternative uses for their specialised skills. The main social indicators, such as poor health, bad housing, and long-term mass unemployment, pointed at best to terminal social and economic stagnation, and at worst to a downward spiral. The heavy dependence on obsolescent heavy industry and mining was a central problem, and no one offered workable solutions. The despair reflected what Finlay (1994) describes as a widespread sense of hopelessness that prepared local business and political leaders to accept a new orthodoxy of centralised government economic planning when it arrived during the Second World War.
A few industries did grow, such as chemicals and whisky, which developed a global market for premium "Scotch". In general, however, the Scottish economy stagnated, leading to growing unemployment and political agitation among industrial workers.
After World War I the Liberal Party began to disintegrate and Labour emerged as the party of progressive politics in Scotland, gaining a solid following among the working classes of the urban lowlands. As a result, the Unionists were able to gain most of the votes of the middle classes, who now feared Bolshevik revolution, setting the social and geographical electoral pattern in Scotland that would last until the late 20th century. The fear of the left had been fuelled by the emergence of a radical movement led by militant trade unionists. John MacLean emerged as a key political figure in what became known as Red Clydeside, and in January 1919 the British Government, fearful of a revolutionary uprising, deployed tanks and soldiers in central Glasgow. Formerly a Liberal stronghold, the industrial districts switched to Labour by 1922, with a base in the Irish Catholic working-class districts. Women were especially active in building neighbourhood solidarity on housing and rent issues. However, the "Reds" operated within the Labour Party and had little influence in Parliament; in the face of heavy unemployment the workers' mood changed to passive despair by the late 1920s. The Scottish-educated Bonar Law led a Conservative government from 1922 to 1923, and another Scot, Ramsay MacDonald, became the Labour Party's first Prime Minister in 1924 and held the office again from 1929 to 1935.
With all the main parties committed to the Union, new nationalist and independent political groupings began to emerge, including the National Party of Scotland in 1928 and the Scottish Party in 1930. They joined to form the Scottish National Party (SNP) in 1934, with the goal of creating an independent Scotland, but it enjoyed little electoral success in the Westminster system.
As in World War I, Scapa Flow in Orkney served as an important Royal Navy base. Attacks on Scapa Flow and Rosyth gave RAF fighters their first successes, downing bombers over the Firth of Forth and East Lothian. The shipyards and heavy engineering factories in Glasgow and Clydeside played a key part in the war effort and suffered attacks from the Luftwaffe, enduring great destruction and loss of life. As transatlantic voyages involved negotiating north-west Britain, Scotland played a key part in the Battle of the Atlantic. Shetland's relative proximity to occupied Norway resulted in the Shetland Bus, by which fishing boats helped Norwegians flee the Nazis and carried expeditions across the North Sea to assist the resistance. Significant individual contributions to the war effort by Scots included the invention of radar by Robert Watson-Watt, which was invaluable in the Battle of Britain, as was the leadership at RAF Fighter Command of Air Chief Marshal Hugh Dowding.
In World War II, Prime Minister Winston Churchill appointed the Labour politician Tom Johnston as Secretary of State for Scotland in February 1941; he controlled Scottish affairs until the war ended. Johnston launched numerous initiatives to promote Scotland, attracting businesses and new jobs through his new Scottish Council of Industry. He set up 32 committees to deal with social and economic problems, ranging from juvenile delinquency to sheep farming. He regulated rents and set up a prototype national health service, using new hospitals built in the expectation of large numbers of casualties from German bombing. His most successful venture was setting up a system of hydro-electricity using water power in the Highlands. A long-standing supporter of the Home Rule movement, Johnston persuaded Churchill of the need to counter the nationalist threat north of the border and created a Scottish Council of State and a Council of Industry as institutions to devolve some power away from Whitehall.
In World War II, despite extensive bombing by the Luftwaffe, Scottish industry came out of the depression slump through a dramatic expansion of its industrial activity, absorbing unemployed men and many women as well. The shipyards were the centre of much of this activity, but many smaller industries produced the machinery needed by British bombers, tanks and warships. Agriculture prospered, as did all sectors except coal mining, which was operating mines near exhaustion. Real wages, adjusted for inflation, rose 25 per cent, and unemployment temporarily vanished. Increased income, and the more equal distribution of food obtained through a tight rationing system, dramatically improved health and nutrition; the average height of 13-year-olds in Glasgow increased measurably.
While emigration began to tail off in England and Wales after the First World War, it continued apace in Scotland, with 400,000 Scots, ten per cent of the population, estimated to have left the country between 1921 and 1931. Economic stagnation was only one factor; other push factors included a zest for travel and adventure, while the pull factors included better job opportunities abroad, personal networks to link into, and the basic cultural similarity of the United States, Canada, and Australia. Government subsidies for travel and relocation facilitated the decision to emigrate. Personal networks of family and friends who had gone ahead and written back, or sent money, prompted emigrants to retrace their paths. When the Great Depression hit in the 1930s there were no easily available jobs in the US and Canada, and the numbers leaving fell to less than 50,000 a year, bringing to an end the period of mass emigration that had opened in the mid-18th century.
In the early 20th century there was a new surge of activity in Scottish literature, influenced by modernism and resurgent nationalism, known as the Scottish Renaissance. The leading figure in the movement was Hugh MacDiarmid (the pseudonym of Christopher Murray Grieve). MacDiarmid attempted to revive the Scots language as a medium for serious literature in poetic works including "A Drunk Man Looks at the Thistle" (1926), developing a form of Synthetic Scots that combined different regional dialects and archaic terms. Other writers who emerged in this period, and are often treated as part of the movement, include the poets Edwin Muir and William Soutar, the novelists Neil Gunn, George Blake, Nan Shepherd, A. J. Cronin, Naomi Mitchison, Eric Linklater and Lewis Grassic Gibbon, and the playwright James Bridie. All were born within a fifteen-year period (1887–1901) and, although they cannot be described as members of a single school, they all pursued an exploration of identity, rejecting nostalgia and parochialism and engaging with social and political issues.
In the 20th century, the centre of the education system became more focused on Scotland, with the ministry of education partly moving north in 1918 and then finally having its headquarters relocated to Edinburgh in 1939. The school leaving age was raised to 14 in 1901, but despite attempts to raise it to 15 this was only made law in 1939 and then postponed because of the outbreak of war. In 1918, Roman Catholic schools were brought into the state system, but retained their distinct religious character, access to schools by priests and the requirement that school staff be acceptable to the Church.
The first half of the 20th century saw Scottish universities fall behind those in England and Europe in terms of participation and investment. The decline of traditional industries between the wars undermined recruitment. English universities increased the numbers of students registered between 1924 and 1927 by 19 per cent, but in Scotland the numbers fell, particularly among women. In the same period, while expenditure in English universities rose by 90 per cent, in Scotland the increase was less than a third of that figure.
Scotland's Scapa Flow was the main base for the Royal Navy in the 20th century. As the Cold War intensified in 1961, the United States deployed Polaris ballistic missile submarines in the Firth of Clyde's Holy Loch. Public protests from CND campaigners proved futile. The Royal Navy successfully convinced the government to allow the base because it wanted its own Polaris submarines, which it obtained in 1963. The RN's nuclear submarine base opened with four Polaris submarines at the expanded Faslane Naval Base on the Gare Loch. The first patrol of a Trident-armed submarine occurred in 1994, although the US base was closed at the end of the Cold War.
After World War II, Scotland's economic situation became progressively worse due to overseas competition, inefficient industry, and industrial disputes. This only began to change in the 1970s, partly due to the discovery and development of North Sea oil and gas and partly as Scotland moved towards a more service-based economy. This period saw the emergence of the Scottish National Party and movements for both Scottish independence and, more popularly, devolution. However, a referendum on devolution in 1979 was unsuccessful, as it did not achieve the support of 40 per cent of the electorate (despite a small majority of those who voted supporting the proposal).
A national referendum to decide on Scottish independence was held on 18 September 2014. Voters were asked to answer either "Yes" or "No" to the question: "Should Scotland be an independent country?" 55.3% of voters answered "No" and 44.7% answered "Yes", with a voter turnout of 84.5%.
In the second half of the 20th century the Labour Party usually won most Scottish seats in the Westminster parliament, losing this dominance briefly to the Unionists in the 1950s. Support in Scotland was critical to Labour's overall electoral fortunes as without Scottish MPs it would have gained only two UK electoral victories in the 20th century (1945 and 1966). The number of Scottish seats represented by Unionists (known as Conservatives from 1965 onwards) went into steady decline from 1959 onwards, until it fell to zero in 1997. Politicians with Scottish connections continued to play a prominent part in UK political life, with Prime Ministers including the Conservatives Harold Macmillan (whose father was Scottish) from 1957 to 1963 and Alec Douglas-Home from 1963 to 1964.
The Scottish National Party gained its first seat at Westminster in 1945 and became a party of national prominence during the 1970s, achieving 11 MPs in 1974. However, a referendum on devolution in 1979 was unsuccessful, as it did not achieve the necessary support of 40 per cent of the electorate (despite a small majority of those who voted supporting the proposal), and the SNP went into electoral decline during the 1980s. The introduction in 1989 by the Thatcher-led Conservative government of the Community Charge (widely known as the Poll Tax), one year before the rest of the United Kingdom, contributed to a growing movement for a return to direct Scottish control over domestic affairs. The electoral success of New Labour in 1997, under two Prime Ministers with Scottish connections, Tony Blair (who was brought up in Scotland), in office from 1997 to 2007, and Gordon Brown, from 2007 to 2010, opened the way for constitutional change. On 11 September 1997, the 700th anniversary of the Battle of Stirling Bridge, the Blair-led Labour government again held a referendum on the issue of devolution. A positive outcome led to the establishment of a devolved Scottish Parliament in 1999. A coalition government, which would last until 2007, was formed between Labour and the Liberal Democrats, with Donald Dewar as First Minister. The new Scottish Parliament Building, adjacent to the Palace of Holyroodhouse in Edinburgh, opened in 2004. Although not initially reaching its 1970s peak in Westminster elections, the SNP had more success in the Scottish Parliamentary elections, with their system of mixed-member proportional representation. It became the official opposition in 1999, a minority government in 2007 and a majority government from 2011. In 2014, the independence referendum saw voters reject independence, choosing instead to remain in the United Kingdom. In the 2015 Westminster election, the SNP won 56 out of 59 Scottish seats, making it the third largest party in Westminster.
After World War II, Scotland's economic situation became progressively worse due to overseas competition, inefficient industry, and industrial disputes. This only began to change in the 1970s, partly due to the discovery and development of North Sea oil and gas and partly as Scotland moved towards a more service-based economy. The discovery of the giant Forties oilfield in October 1970 signalled that Scotland was about to become a major oil producing nation, a view confirmed when Shell Expro discovered the giant Brent oilfield in the northern North Sea east of Shetland in 1971. Oil production started from the Argyll field (now Ardmore) in June 1975, followed by Forties in November of that year. Deindustrialisation took place rapidly in the 1970s and 1980s, as most of the traditional industries drastically shrank or were completely closed down. A new service-oriented economy emerged to replace traditional heavy industries. This included a resurgent financial services industry and the electronics manufacturing of Silicon Glen.
In the 20th century existing Christian denominations were joined by other organisations, including the Brethren and Pentecostal churches. Although some denominations thrived, after World War II there was a steady overall decline in church attendance and resulting church closures for most denominations. Talks began in the 1950s aiming at a grand merger of the main Presbyterian, Episcopal and Methodist bodies in Scotland. The talks were ended in 2003, when the General Assembly of the Church of Scotland rejected the proposals. In the 2011 census, 53.8% of the Scottish population identified as Christian (declining from 65.1% in 2001). The Church of Scotland is the largest religious grouping in Scotland, with 32.4% of the population. The Roman Catholic Church accounted for 15.9% of the population and is especially important in West Central Scotland and the Highlands. In recent years other religions have established a presence in Scotland, mainly through immigration and higher birth rates among ethnic minorities, with a small number of converts. Those with the most adherents in the 2011 census are Islam (1.4%, mainly among immigrants from South Asia), Hinduism (0.3%), Buddhism (0.2%) and Sikhism (0.2%). Other minority faiths include the Bahá'í Faith and small Neopagan groups. There are also various organisations which actively promote humanism and secularism, included within the 43.6% who either indicated no religion or did not state a religion in the 2011 census.
Although plans to raise the school leaving age to 15 in the 1940s were never ratified, increasing numbers stayed on beyond elementary education and it was eventually raised to 16 in 1973. As a result, secondary education was the major area of growth in the second half of the 20th century. New qualifications were developed to cope with changing aspirations and economics, with the Leaving Certificate being replaced by the Scottish Certificate of Education Ordinary Grade ('O-Grade') and Higher Grade ('Higher') qualifications in 1962, which became the basic entry qualification for university study. The higher education sector expanded in the second half of the 20th century, with four institutions being given university status in the 1960s (Dundee, Heriot-Watt, Stirling and Strathclyde) and five in the 1990s (Abertay, Glasgow Caledonian, Napier, Paisley and Robert Gordon). After devolution, in 1999 the new Scottish Executive set up an Education Department and an Enterprise, Transport and Lifelong Learning Department. One of the major divergences from practice in England, made possible by devolution, was the abolition of student tuition fees in 1999, instead retaining a system of means-tested student grants.
Some writers who emerged after the Second World War followed Hugh MacDiarmid by writing in Scots, including Robert Garioch and Sydney Goodsir Smith. Others demonstrated a greater interest in English-language poetry, among them Norman MacCaig, George Bruce and Maurice Lindsay. George Mackay Brown from Orkney, and Iain Crichton Smith from Lewis, wrote both poetry and prose fiction shaped by their distinctive island backgrounds. The Glaswegian poet Edwin Morgan became known for translations of works from a wide range of European languages. He was also the first Scots Makar (the official national poet), appointed by the inaugural Scottish government in 2004. Many major Scottish post-war novelists, such as Muriel Spark, with "The Prime of Miss Jean Brodie" (1961), spent much or most of their lives outside Scotland, but often dealt with Scottish themes. Successful mass-market works included the action novels of Alistair MacLean and the historical fiction of Dorothy Dunnett. A younger generation of novelists that emerged in the 1960s and 1970s included Shena Mackay, Alan Spence, Allan Massie and William McIlvanney. From the 1980s Scottish literature enjoyed another major revival, particularly associated with a group of Glasgow writers focused around the critic, poet and teacher Philip Hobsbaum and the editor Peter Kravitz. In the 1990s, major prize-winning Scottish novels that emerged from this movement, often overtly political, included Irvine Welsh's "Trainspotting" (1993), Alan Warner's "Morvern Callar" (1995), Alasdair Gray's "Poor Things" (1992) and James Kelman's "How Late It Was, How Late" (1994). Scottish crime fiction has been a major area of growth, particularly with the success of Edinburgh's Ian Rankin and his Inspector Rebus novels.
This period also saw the emergence of a new generation of Scottish poets who became leading figures on the UK stage, including Carol Ann Duffy, who was named Poet Laureate in May 2009, the first woman, the first Scot and the first openly gay poet to take the post.
Hadrian
Hadrian (; ; 24 January 76 – 10 July 138) was Roman emperor from 117 to 138. He was born Publius Aelius Hadrianus in Italica, Hispania Baetica, into a Roman Italo-Hispanic family that settled in Spain from the Italian city of Atri in Picenum. His father was of senatorial rank and was a first cousin of Emperor Trajan. He married Trajan's grand-niece Vibia Sabina early in his career, before Trajan became emperor and possibly at the behest of Trajan's wife Pompeia Plotina. Plotina and Trajan's close friend and adviser Lucius Licinius Sura were well disposed towards Hadrian. When Trajan died, his widow claimed that he had nominated Hadrian as emperor immediately before his death.
Rome's military and Senate approved Hadrian's succession, but four leading senators were unlawfully put to death soon after. They had opposed Hadrian or seemed to threaten his succession, and the Senate held him responsible and never forgave him. He earned further disapproval among the elite by abandoning Trajan's expansionist policies and territorial gains in Mesopotamia, Assyria, Armenia, and parts of Dacia. Hadrian preferred to invest in the development of stable, defensible borders and the unification of the empire's disparate peoples. He is known for building Hadrian's Wall, which marked the northern limit of Britannia.
Hadrian energetically pursued his own Imperial ideals and personal interests. He visited almost every province of the Empire, accompanied by an Imperial retinue of specialists and administrators. He encouraged military preparedness and discipline, and he fostered, designed, or personally subsidised various civil and religious institutions and building projects. In Rome itself, he rebuilt the Pantheon and constructed the vast Temple of Venus and Roma. In Egypt, he may have rebuilt the Serapeum of Alexandria. He was an ardent admirer of Greece and sought to make Athens the cultural capital of the Empire, so he ordered the construction of many opulent temples there. His intense relationship with the Greek youth Antinous, and the latter's untimely death, led Hadrian to establish a widespread cult late in his reign. He suppressed the Bar Kokhba revolt in Judaea, but his reign was otherwise peaceful.
Hadrian's last years were marred by chronic illness. He saw the Bar Kokhba revolt as the failure of his panhellenic ideal. He executed two more senators for their alleged plots against him, and this provoked further resentment. His marriage to Vibia Sabina had been unhappy and childless; he adopted Antoninus Pius in 138 and nominated him as a successor, on the condition that Antoninus adopt Marcus Aurelius and Lucius Verus as his own heirs. Hadrian died the same year at Baiae, and Antoninus had him deified, despite opposition from the Senate. Edward Gibbon includes him among the Empire's "Five Good Emperors", a "benevolent dictator"; Hadrian's own senate found him remote and authoritarian. He has been described as enigmatic and contradictory, with a capacity for both great personal generosity and extreme cruelty and driven by insatiable curiosity, self-conceit, and ambition.
Hadrian was born on 24 January 76, probably in Italica (near modern Seville) in the Roman province of Hispania Baetica; one Roman biographer claims he was born at Rome. He was named Publius Aelius Hadrianus. His father was Publius Aelius Hadrianus Afer, a senator of praetorian rank, born and raised in Italica but paternally linked, through many generations over several centuries, to a family from Hadria (modern Atri), an ancient town in Picenum. The family had settled in Italica soon after its founding by Scipio Africanus. Hadrian's mother was Domitia Paulina, daughter of a distinguished Hispano-Roman senatorial family from Gades (Cádiz). His only sibling was an elder sister, Aelia Domitia Paulina. Hadrian's great-nephew, Gnaeus Pedanius Fuscus Salinator, from Barcino (Barcelona), would become Hadrian's colleague as co-consul in 118. As a senator, Hadrian's father would have spent much of his time in Rome. In terms of his later career, Hadrian's most significant family connection was to Trajan, his father's first cousin, who was also of senatorial stock and had been born and raised in Italica. Hadrian and Trajan were both considered to be, in the words of Aurelius Victor, "aliens", people "from the outside" ("advenae").
Hadrian's parents died in 86, when he was ten years old. He and his sister became wards of Trajan and Publius Acilius Attianus (who later became Trajan's Praetorian prefect). Hadrian was physically active, and enjoyed hunting; when he was 14, Trajan called him to Rome and arranged his further education in subjects appropriate to a young Roman aristocrat. Hadrian's enthusiasm for Greek literature and culture earned him the nickname "Graeculus" ("Greekling").
Hadrian's first official post in Rome was as a member of the "decemviri stlitibus judicandis", one among many vigintivirate offices at the lowest level of the cursus honorum ("course of honours") that could lead to higher office and a senatorial career. He then served as a military tribune, first with the Legio II "Adiutrix" in 95, then with the Legio V Macedonica. During Hadrian's second stint as tribune, the frail and aged reigning emperor Nerva adopted Trajan as his heir; Hadrian was dispatched to give Trajan the news, or most probably was one of many emissaries charged with this same commission. Then Hadrian was transferred to Legio XXII Primigenia and a third tribunate. Hadrian's three tribunates gave him some career advantage. Most scions of the older senatorial families might serve one, or at most two, military tribunates as a prerequisite to higher office. When Nerva died in 98, Hadrian is said to have hastened to Trajan, to inform him ahead of the official envoy sent by the governor, Hadrian's brother-in-law and rival Lucius Julius Ursus Servianus.
In 101, Hadrian was back in Rome; he was elected quaestor, then "quaestor imperatoris Traiani", liaison officer between Emperor and the assembled Senate, to whom he read the Emperor's communiqués and speeches – which he possibly composed on the emperor's behalf. In his role as imperial ghostwriter, Hadrian took the place of the recently deceased Licinius Sura, Trajan's all-powerful friend and kingmaker. His next post was as "ab actis senatus", keeping the Senate's records. During the First Dacian War, Hadrian took the field as a member of Trajan's personal entourage, but was excused from his military post to take office in Rome as Tribune of the Plebs, in 105. After the war, he was probably elected praetor. During the Second Dacian War, Hadrian was in Trajan's personal service again, but was released to serve as legate of Legio I Minervia, then as governor of Lower Pannonia in 107, tasked with "holding back the Sarmatians".
Now in his mid-thirties, Hadrian travelled to Greece; he was granted Athenian citizenship and was appointed eponymous archon of Athens for a brief time (in 112). The Athenians awarded him a statue with an inscription in the Theater of Dionysus (IG II² 3286) offering a detailed account of his "cursus honorum" thus far. Thereafter no more is heard of him until Trajan's Parthian War. It is possible that he remained in Greece until his recall to the imperial retinue, when he joined Trajan's expedition against Parthia as a legate. When the governor of Syria was sent to deal with renewed troubles in Dacia, Hadrian was appointed his replacement, with independent command. Trajan became seriously ill, and took ship for Rome, while Hadrian remained in Syria, "de facto" general commander of the Eastern Roman army. Trajan got as far as the coastal city of Selinus, in Cilicia, and died there on 8 August; he would be remembered as one of Rome's most admired and popular emperors.
Around the time of his quaestorship, in 100 or 101, Hadrian had married Trajan's seventeen or eighteen-year-old grandniece, Vibia Sabina. Trajan himself seems to have been less than enthusiastic about the marriage, and with good reason, as the couple's relationship would prove to be scandalously poor. The marriage might have been arranged by Trajan's empress, Plotina. This highly cultured, influential woman shared many of Hadrian's values and interests, including the idea of the Roman Empire as a commonwealth with an underlying Hellenic culture. If Hadrian were to be appointed Trajan's successor, Plotina and her extended family could retain their social profile and political influence after Trajan's death. Hadrian could also count on the support of his mother-in-law, Salonina Matidia, who was daughter of Trajan's beloved sister Ulpia Marciana. When Ulpia Marciana died, in 112, Trajan had her deified, and made Salonina Matidia an Augusta.
Hadrian's personal relationship with Trajan was complex, and may have been difficult. Hadrian seems to have sought influence over Trajan, or Trajan's decisions, through cultivation of the latter's boy favourites; this gave rise to some unexplained quarrel, around the time of Hadrian's marriage to Sabina. Late in Trajan's reign, Hadrian failed to achieve a senior consulship, being only suffect consul for 108; this gave him parity of status with other members of the senatorial nobility, but no particular distinction befitting an heir designate. Had Trajan wished it, he could have promoted his protege to patrician rank and its privileges, which included opportunities for a fast track to consulship without prior experience as tribune; he chose not to. While Hadrian seems to have been granted the office of Tribune of the Plebs a year or so younger than was customary, he had to leave Dacia, and Trajan, to take up the appointment; Trajan might simply have wanted him out of the way. The "Historia Augusta" describes Trajan's gift to Hadrian of a diamond ring that Trajan himself had received from Nerva, which "encouraged [Hadrian's] hopes of succeeding to the throne". While Trajan actively promoted Hadrian's advancement, he did so with caution.
Failure to nominate an heir could invite chaotic, destructive wresting of power by a succession of competing claimants – a civil war. Too early a nomination could be seen as an abdication, and reduce the chance for an orderly transmission of power. As Trajan lay dying, nursed by his wife, Plotina, and closely watched by Prefect Attianus, he could have lawfully adopted Hadrian as heir, by means of a simple deathbed wish, expressed before witnesses; but when an adoption document was eventually presented, it was signed not by Trajan but by Plotina, and was dated the day after Trajan's death. That Hadrian was still in Syria was a further irregularity, as Roman adoption law required the presence of both parties at the adoption ceremony. Rumours, doubts, and speculation attended Hadrian's adoption and succession. It has been suggested that Trajan's young manservant Phaedimus, who died very soon after Trajan, was killed (or killed himself) rather than face awkward questions. Ancient sources are divided on the legitimacy of Hadrian's adoption: Dio Cassius saw it as bogus and the "Historia Augusta" writer as genuine. An aureus minted early in Hadrian's reign represents the official position; it presents Hadrian as Trajan's "Caesar" (Trajan's heir designate).
According to the "Historia Augusta", Hadrian informed the Senate of his accession in a letter as a "fait accompli", explaining that "the unseemly haste of the troops in acclaiming him emperor was due to the belief that the state could not be without an emperor". The new emperor rewarded the legions' loyalty with the customary bonus, and the Senate endorsed the acclamation. Various public ceremonies were organised on Hadrian's behalf, celebrating his "divine election" by all the gods, whose community now included Trajan, deified at Hadrian's request.
Hadrian remained in the east for a while, suppressing the Jewish revolt that had broken out under Trajan. He relieved Judea's governor, the outstanding Moorish general Lusius Quietus, of his personal guard of Moorish auxiliaries;
then he moved on to quell disturbances along the Danube frontier. In Rome, Hadrian's former guardian and current Praetorian Prefect, Attianus, claimed to have uncovered a conspiracy involving Lusius Quietus and three other leading senators, Lucius Publilius Celsus, Aulus Cornelius Palma Frontonianus and Gaius Avidius Nigrinus. There was no public trial for the four – they were tried "in absentia", hunted down and killed. Hadrian claimed that Attianus had acted on his own initiative, and rewarded him with senatorial status and consular rank; then pensioned him off, no later than 120. Hadrian assured the Senate that henceforth their ancient right to prosecute and judge their own would be respected.
The reasons for these four executions remain obscure. Official recognition of Hadrian as legitimate heir may have come too late to dissuade other potential claimants. Hadrian's greatest rivals were Trajan's closest friends, the most experienced and senior members of the imperial council; any of them might have been a legitimate competitor for the imperial office ("capaces imperii"); and any of them might have supported Trajan's expansionist policies, which Hadrian intended to change. One of their number was Aulus Cornelius Palma, who, as a former conqueror of Arabia Nabatea, would have retained a stake in the East. The "Historia Augusta" describes Palma and a third executed senator, Lucius Publilius Celsus (consul for the second time in 113), as Hadrian's personal enemies, who had spoken in public against him. The fourth was Gaius Avidius Nigrinus, an ex-consul, intellectual, friend of Pliny the Younger and (briefly) Governor of Dacia at the start of Hadrian's reign. He was probably Hadrian's chief rival for the throne; a senator of the highest rank, breeding, and connections; according to the "Historia Augusta", Hadrian had considered making Nigrinus his heir apparent, before deciding to get rid of him.
Soon after, in 125, Hadrian appointed Marcius Turbo as his Praetorian Prefect. Turbo was his close friend, a leading figure of the equestrian order, a senior court judge and a procurator. As Hadrian also forbade equestrians to try cases against senators, the Senate retained full legal authority over its members; it also remained the highest court of appeal, and formal appeals to the emperor regarding its decisions were forbidden. If this was an attempt to repair the damage done by Attianus, with or without Hadrian's full knowledge, it was not enough; Hadrian's reputation and relationship with his Senate were irredeemably soured for the rest of his reign. Some sources describe Hadrian's occasional recourse to a network of informers, the "frumentarii", to discreetly investigate persons of high social standing, including senators and his close friends.
Hadrian was to spend more than half his reign outside Italy. Whereas previous emperors had, for the most part, relied on the reports of their imperial representatives around the Empire, Hadrian wished to see things for himself. Previous emperors had often left Rome for long periods, but mostly to go to war, returning once the conflict was settled. Hadrian's near-incessant travels may represent a calculated break with traditions and attitudes in which the empire was a purely Roman hegemony. Hadrian sought to include provincials in a commonwealth of civilised peoples and a common Hellenic culture under Roman supervision. He supported the creation of provincial towns (municipia), semi-autonomous urban communities with their own customs and laws, rather than the imposition of new Roman colonies with Roman constitutions.
A cosmopolitan, ecumenical intent is evident in coin issues of Hadrian's later reign, showing the emperor "raising up" the personifications of various provinces. Aelius Aristides would later write that Hadrian "extended over his subjects a protecting hand, raising them as one helps fallen men on their feet". All this did not go down well with Roman traditionalists. The self-indulgent emperor Nero had enjoyed a prolonged and peaceful tour of Greece, and had been criticised by the Roman elite for abandoning his fundamental responsibilities as emperor. In the eastern provinces, and to some extent in the west, Nero had enjoyed popular support; claims of his imminent return or rebirth emerged almost immediately after his death. Hadrian may have consciously exploited these positive, popular connections during his own travels. In the "Historia Augusta", Hadrian is described as "a little too much Greek", too cosmopolitan for a Roman emperor.
Prior to Hadrian's arrival in Britannia, the province had suffered a major rebellion, from 119 to 121. Inscriptions tell of an "expeditio Britannica" that involved major troop movements, including the dispatch of a detachment (vexillatio) comprising some 3,000 soldiers. Fronto writes about military losses in Britannia at the time. Coin legends of 119–120 attest that Quintus Pompeius Falco was sent to restore order. In 122 Hadrian initiated the construction of a wall, "to separate Romans from barbarians". The idea that the wall was built to deal with an actual threat or its resurgence, however, is plausible but conjectural. A general desire to cease the Empire's extension may have been the determining motive. Reduction of defence costs may also have played a role, as the Wall deterred attacks on Roman territory at a lower cost than a massed border army, and controlled cross-border trade and immigration. A shrine was erected in York to Britannia as the divine personification of Britain; coins were struck bearing her image, identified as BRITANNIA. By the end of 122, Hadrian had concluded his visit to Britannia. He never saw the finished wall that bears his name.
Hadrian appears to have continued through southern Gaul. At Nemausus, he may have overseen the building of a basilica dedicated to his patroness Plotina, who had recently died in Rome and had been deified at Hadrian's request. At around this time, Hadrian dismissed his secretary "ab epistulis", the biographer Suetonius, for "excessive familiarity" towards the empress. Marcius Turbo's colleague as Praetorian Prefect, Gaius Septicius Clarus, was dismissed for the same alleged reason, perhaps a pretext to remove him from office. Hadrian spent the winter of 122/123 at Tarraco, in Spain, where he restored the Temple of Augustus.
In 123, Hadrian crossed the Mediterranean to Mauretania, where he personally led a minor campaign against local rebels. The visit was cut short by reports of war preparations by Parthia; Hadrian quickly headed eastwards. At some point, he visited Cyrene, where he personally funded the training of young men from well-bred families for the Roman military. Cyrene had benefited earlier (in 119) from his restoration of public buildings destroyed during the earlier Jewish revolt.
When Hadrian arrived on the Euphrates, he personally negotiated a settlement with the Parthian King Osroes I, inspected the Roman defences, then set off westwards, along the Black Sea coast. He probably wintered in Nicomedia, the main city of Bithynia. Nicomedia had been hit by an earthquake only shortly before his stay; Hadrian provided funds for its rebuilding, and was acclaimed as restorer of the province.
It is possible that Hadrian visited Claudiopolis and saw the beautiful Antinous, a young man of humble birth who became Hadrian's beloved. Literary and epigraphic sources say nothing of when or where they met; depictions of Antinous show him aged 20 or so, shortly before his death in 130. In 123 he would most likely have been a youth of 13 or 14. It is also possible that Antinous was sent to Rome to be trained as a page to serve the emperor and only gradually rose to the status of imperial favourite. The actual history of their relationship is mostly unknown.
With or without Antinous, Hadrian travelled through Anatolia. Various traditions suggest his presence at particular locations, and allege his foundation of a city within Mysia, Hadrianutherae, after a successful boar hunt. At about this time, plans to complete the Temple of Zeus in Cyzicus, begun by the kings of Pergamon, were put into practice. The temple received a colossal statue of Hadrian. Cyzicus, Pergamon, Smyrna, Ephesus and Sardes were promoted as regional centres for the Imperial cult ("neocoros").
Hadrian arrived in Greece during the autumn of 124, and participated in the Eleusinian Mysteries. He had a particular commitment to Athens, which had previously granted him citizenship and an archonate; at the Athenians' request, he revised their constitution – among other things, he added a new phyle (tribe), which was named after him. Hadrian combined active, hands-on interventions with cautious restraint. He refused to intervene in a local dispute between producers of olive oil and the Athenian Assembly and Council, who had imposed production quotas on oil producers; yet he granted an imperial subsidy for the Athenian grain supply. Hadrian created two foundations, to fund Athens' public games, festivals and competitions if no citizen proved wealthy or willing enough to sponsor them as a Gymnasiarch or Agonothetes. Generally Hadrian preferred that Greek notables, including priests of the Imperial cult, focus on more durable provisions, such as aqueducts and public fountains ("nymphaea"). Athens was given two such fountains; another was given to Argos.
During the winter he toured the Peloponnese. His exact route is uncertain, but it took in Epidaurus; Pausanias describes temples built there by Hadrian, and his statue – in heroic nudity – erected by its citizens in thanks to their "restorer". Antinous and Hadrian may have already been lovers at this time; Hadrian showed particular generosity to Mantinea, which shared ancient, mythic, politically useful links with Antinous' home at Bithynia. He restored Mantinea's Temple of Poseidon Hippios, and according to Pausanias, restored the city's original, classical name. It had been renamed Antigoneia since Hellenistic times, after the Macedonian King Antigonus III Doson. Hadrian also rebuilt the ancient shrines of Abae and Megara, and the Heraion of Argos.
During his tour of the Peloponnese, Hadrian persuaded the Spartan grandee Eurycles Herculanus – leader of the Euryclid family that had ruled Sparta since Augustus' day – to enter the Senate, alongside the Athenian grandee Herodes Atticus the Elder. The two aristocrats would be the first from "Old Greece" to enter the Roman Senate, as representatives of the two "great powers" of the Classical Age. This was an important step in overcoming Greek notables' reluctance to take part in Roman political life. In March 125, Hadrian presided at the Athenian festival of Dionysia, wearing Athenian dress. The Temple of Olympian Zeus had been under construction for more than five centuries; Hadrian committed the vast resources at his command to ensure that the job would be finished. He also organised the planning and construction of a particularly challenging and ambitious aqueduct to bring water to the Athenian Agora.
On his return to Italy, Hadrian made a detour to Sicily. Coins celebrate him as the restorer of the island. Back in Rome, he saw the rebuilt Pantheon, and his completed villa at nearby Tibur, among the Sabine Hills. In early March 127 Hadrian set off on a tour of Italy; his route has been reconstructed through the evidence of his gifts and donations. He restored the shrine of Cupra in Cupra Maritima, and improved the drainage of the Fucine lake. Less welcome than such largesse was his decision in 127 to divide Italy into four regions under imperial legates with consular rank, acting as governors. They were given jurisdiction over all of Italy, excluding Rome itself, thereby diverting Italian cases from the courts of Rome. The effective reduction of Italy to the status of a group of mere provinces did not go down well with the Roman Senate, and the innovation did not long outlive Hadrian's reign.
Hadrian fell ill around this time; whatever the nature of his illness, it did not stop him from setting off in the spring of 128 to visit Africa. His arrival coincided with the good omen of rain, which ended a drought. Along with his usual role as benefactor and restorer, he found time to inspect the troops; his speech to them survives. Hadrian returned to Italy in the summer of 128 but his stay was brief, as he set off on another tour that would last three years.
In September 128, Hadrian attended the Eleusinian mysteries again. This time his visit to Greece seems to have concentrated on Athens and Sparta – the two ancient rivals for dominance of Greece. Hadrian had played with the idea of focusing his Greek revival around the Amphictyonic League based in Delphi, but by now he had decided on something far grander. His new Panhellenion was going to be a council that would bring Greek cities together. Having set in motion the preparations – deciding whose claim to be a Greek city was genuine would take time – Hadrian set off for Ephesus. From Greece, Hadrian proceeded by way of Asia to Egypt, probably conveyed across the Aegean with his entourage by an Ephesian merchant, Lucius Erastus. Hadrian later sent a letter to the Council of Ephesus, supporting Erastus as a worthy candidate for town councillor and offering to pay the requisite fee.
Hadrian arrived in Egypt before the Egyptian New Year on 29 August 130. He opened his stay in Egypt by restoring Pompey the Great's tomb at Pelusium, offering sacrifice to him as a hero and composing an epigraph for the tomb. As Pompey was universally acknowledged as responsible for establishing Rome's power in the east, this restoration was probably linked to a need to reaffirm Roman Eastern hegemony, following social unrest there during Trajan's late reign. Hadrian and Antinous held a lion hunt in the Libyan desert; a poem on the subject by the Greek Pankrates is the earliest evidence that they travelled together.
While Hadrian and his entourage were sailing on the Nile, Antinous drowned. The exact circumstances surrounding his death are unknown, and accident, suicide, murder and religious sacrifice have all been postulated. "Historia Augusta" offers the following account:
Hadrian founded the city of Antinoöpolis in Antinous' honour on 30 October 130. He then continued down the Nile to Thebes, where his visit to the Colossi of Memnon on 20 and 21 November was commemorated by four epigrams inscribed by Julia Balbilla, which still survive. After that, he headed north, reaching the Fayyum at the beginning of December.
Hadrian's movements after his journey down the Nile are uncertain. Whether or not he returned to Rome, he travelled in the East during 130/131, to organise and inaugurate his new Panhellenion, which was to be focused on the Athenian Temple to Olympian Zeus. As local conflicts had led to the failure of the previous scheme for a Hellenic association centred on Delphi, Hadrian decided instead on a grand league of all Greek cities. Successful applications for membership involved mythologised or fabricated claims to Greek origins, and affirmations of loyalty to Imperial Rome, to satisfy Hadrian's personal, idealised notions of Hellenism. Hadrian saw himself as protector of Greek culture and the "liberties" of Greece – in this case, urban self-government. It allowed Hadrian to appear as the fictive heir to Pericles, who supposedly had convened a previous Panhellenic Congress – such a Congress is mentioned only in Pericles' biography by Plutarch, who respected Rome's Imperial order.
Epigraphical evidence suggests that the prospect of applying to the Panhellenion held little attraction for the wealthier, Hellenised cities of Asia Minor, which were jealous of Athenian and European Greek preeminence within Hadrian's scheme. Hadrian's notion of Hellenism was narrow and deliberately archaising; he defined "Greekness" in terms of classical roots, rather than a broader, Hellenistic culture. Some cities with a dubious claim to Greekness, however – such as Side – were acknowledged as fully Hellenic. The German sociologist Georg Simmel remarked that the Panhellenion was based on "games, commemorations, preservation of an ideal, an entirely non-political Hellenism".
Hadrian bestowed honorific titles on many regional centres. Palmyra received a state visit and was given the civic name Hadriana Palmyra. Hadrian also bestowed honours on various Palmyrene magnates, among them one Soados, who had done much to protect Palmyrene trade between the Roman Empire and Parthia.
Hadrian had spent the winter of 131–32 in Athens, where he dedicated the now-completed Temple of Olympian Zeus. At some time in 132, he headed east, to Judaea.
In Roman Judaea Hadrian visited Jerusalem, which was still in ruins after the First Roman–Jewish War of 66–73. He may have planned to rebuild Jerusalem as a Roman colony – as Vespasian had done with Caesarea Maritima – with various honorific and fiscal privileges. The non-Roman population would have had no obligation to participate in Roman religious rituals, but would have been expected to support the Roman imperial order; this is attested in Caesarea, where some Jews served in the Roman army during both the 66 and 132 rebellions. It has been speculated that Hadrian intended to assimilate the Jewish Temple to the traditional Roman civic-religious Imperial cult; such assimilations had long been commonplace practice in Greece and in other provinces, and on the whole, had been successful. The neighbouring Samaritans had already integrated their religious rites with Hellenistic ones. Strict Jewish monotheism proved more resistant to Imperial cajoling, and then to Imperial demands. A massive anti-Hellenistic and anti-Roman Jewish uprising broke out, led by Simon bar Kokhba. The Roman governor Tineius (Tynius) Rufus asked for an army to crush the resistance; bar Kokhba punished any Jew who refused to join his ranks. According to Justin Martyr and Eusebius, these punishments fell mostly on Christian converts, who opposed bar Kokhba's messianic claims.
A tradition based on the "Historia Augusta" suggests that the revolt was spurred by Hadrian's abolition of circumcision ("brit milah"), which as a Hellenist he viewed as mutilation. The scholar Peter Schäfer maintains that there is no evidence for this claim, given the notoriously problematical nature of the "Historia Augusta" as a source, the "tomfoolery" shown by the writer in the relevant passage, and the fact that contemporary Roman legislation on "genital mutilation" seems to address the general issue of castration of slaves by their masters. Other issues could have contributed to the outbreak: a heavy-handed, culturally insensitive Roman administration; tensions between the landless poor and incoming Roman colonists privileged with land-grants; and a strong undercurrent of messianism, predicated on Jeremiah's prophecy that the Temple would be rebuilt seventy years after its destruction, as the First Temple had been after the Babylonian exile.
Given the fragmentary nature of the existing evidence, it is impossible to ascertain an exact date for the beginning of the uprising, but it probably began between the summer and fall of 132. The Romans were overwhelmed by the organised ferocity of the uprising. Hadrian called his general Sextus Julius Severus from Britain, and brought troops in from as far as the Danube. Roman losses were heavy: an entire legion, or its numeric equivalent of around 4,000 men. Hadrian's report on the war to the Roman Senate omitted the customary salutation, "If you and your children are in health, it is well; I and the legions are in health." The rebellion was quashed by 135. According to Cassius Dio, Roman war operations in Judea left some 580,000 Jews dead, and 50 fortified towns and 985 villages razed. An unknown proportion of the population was enslaved. Beitar, a fortified city southwest of Jerusalem, fell after a three-and-a-half-year siege. The extent of punitive measures against the Jewish population remains a matter of debate.
Hadrian erased the province's name from the Roman map, renaming it Syria Palaestina. He renamed Jerusalem Aelia Capitolina after himself and Jupiter Capitolinus, and had it rebuilt in Greek style. According to Epiphanius, Hadrian appointed Aquila from Sinope in Pontus as "overseer of the work of building the city", since he was related to him by marriage. Hadrian is said to have placed the city's main Forum at the junction of the main Cardo and Decumanus Maximus, now the location for the (smaller) Muristan. After the suppression of the Jewish revolt, Hadrian provided the Samaritans with a temple, dedicated to Zeus Hypsistos ("Highest Zeus") on Mount Gerizim. The bloody repression of the revolt ended Jewish political independence from the Roman Imperial order.
Inscriptions make it clear that in 133 Hadrian took to the field with his armies against the rebels. He then returned to Rome, probably in that year and almost certainly – judging from inscriptions – via Illyricum.
Hadrian spent the final years of his life at Rome. In 134, he took an Imperial salutation for the end of the Second Jewish War (which was not actually concluded until the following year). Commemorations and achievement awards were kept to a minimum, as Hadrian came to see the war "as a cruel and sudden disappointment to his aspirations" towards a cosmopolitan empire.
The Empress Sabina died, probably in 136, after an unhappy marriage with which Hadrian had coped as a political necessity. The "Historia Augusta" biography states that Hadrian himself declared that his wife's "ill-temper and irritability" would be reason enough for a divorce, were he a private citizen. That gave credence, after Sabina's death, to the common belief that Hadrian had her poisoned. In keeping with well-established Imperial propriety, Sabina – who had been made an "Augusta" sometime around 128 – was deified not long after her death.
Hadrian's marriage to Sabina had been childless. Suffering from poor health, Hadrian turned to the problem of the succession. In 136 he adopted one of the ordinary consuls of that year, Lucius Ceionius Commodus, who as an emperor-in-waiting took the name Lucius Aelius Caesar. He was the son-in-law of Gaius Avidius Nigrinus, one of the "four consulars" executed in 118, but was himself in delicate health, apparently with a reputation more "of a voluptuous, well educated great lord than that of a leader". Various modern attempts have been made to explain Hadrian's choice: Jerome Carcopino proposes that Aelius was Hadrian's natural son. It has also been speculated that his adoption was Hadrian's belated attempt to reconcile with one of the most important of the four senatorial families whose leading members had been executed soon after Hadrian's succession. Aelius acquitted himself honourably as joint governor of Pannonia Superior and Pannonia Inferior; he held a further consulship in 137, but died on 1 January 138.
Hadrian next adopted Titus Aurelius Fulvus Boionius Arrius Antoninus (the future emperor Antoninus Pius), who had served Hadrian as one of the five imperial legates of Italy, and as proconsul of Asia. In the interests of dynastic stability, Hadrian required that Antoninus adopt both Lucius Ceionius Commodus (son of the deceased Aelius Caesar) and Marcus Annius Verus (grandson of an influential senator of the same name who had been Hadrian's close friend); Annius was already betrothed to Aelius Caesar's daughter Ceionia Fabia. It may not have been Hadrian, but rather Antoninus Pius – Annius Verus's uncle – who supported Annius Verus' advancement; the latter's divorce of Ceionia Fabia and subsequent marriage to Antoninus' daughter Annia Faustina points in the same direction. When he eventually became Emperor, Marcus Aurelius would co-opt Ceionius Commodus as his co-Emperor, under the name of Lucius Verus, on his own initiative.
Hadrian's last few years were marked by conflict and unhappiness. His adoption of Aelius Caesar proved unpopular, not least with Hadrian's brother-in-law Lucius Julius Ursus Servianus and Servianus's grandson Gnaeus Pedanius Fuscus Salinator. Servianus, though now far too old, had stood in the line of succession at the beginning of Hadrian's reign; Fuscus is said to have had designs on the imperial power for himself. In 137 he may have attempted a coup in which his grandfather was implicated; Hadrian ordered that both be put to death. Servianus is reported to have prayed before his execution that Hadrian would "long for death but be unable to die". During his final, protracted illness, Hadrian was prevented from suicide on several occasions.
Hadrian died on 10 July 138, in his villa at Baiae, at the age of 62. Cassius Dio and the "Historia Augusta" record details of his failing health. He had reigned for 21 years, the longest since Tiberius, and the fourth longest in the Principate, after Augustus, Hadrian's successor Antoninus Pius, and Tiberius.
He was buried first at Puteoli, near Baiae, on an estate that had once belonged to Cicero. Soon after, his remains were transferred to Rome and buried in the Gardens of Domitia, close by the almost-complete mausoleum. Upon completion of the Tomb of Hadrian in Rome in 139 by his successor Antoninus Pius, his body was cremated, and his ashes were placed there together with those of his wife Vibia Sabina and his first adopted son, Lucius Aelius, who also died in 138. The Senate had been reluctant to grant Hadrian divine honours; but Antoninus persuaded them by threatening to refuse the position of Emperor. Hadrian was given a temple on the Campus Martius, ornamented with reliefs representing the provinces. The Senate awarded Antoninus the title of "Pius", in recognition of his filial piety in pressing for the deification of his adoptive father. At the same time, perhaps in reflection of the senate's ill will towards Hadrian, commemorative coinage honouring his consecration was kept to a minimum.
Most of Hadrian's military activities were consistent with his ideology of Empire as a community of mutual interest and support. He focused on protection from external and internal threats; on "raising up" existing provinces, rather than the aggressive acquisition of wealth and territory through subjugation of "foreign" peoples that had characterised the early Empire. Hadrian's policy shift was part of a broader trend towards slowing the empire's expansion; expansion did not cease altogether after him (the Empire reached its greatest extent only under the Severan dynasty), but his reign marked a significant step in that direction, given how overstretched the empire had become. While the empire as a whole benefited from this, military careerists resented the loss of opportunities.
The 4th-century historian Aurelius Victor saw Hadrian's withdrawal from Trajan's territorial gains in Mesopotamia as a jealous belittlement of Trajan's achievements ("Traiani gloriae invidens"). More likely, an expansionist policy was no longer sustainable; the Empire had lost two legions, the Legio XXII Deiotariana and the "lost legion" IX Hispana, possibly destroyed in a late Trajanic uprising by the Brigantes in Britain. Trajan himself may have thought his gains in Mesopotamia indefensible, and abandoned them shortly before his death. Hadrian granted parts of Dacia to the Roxolani Sarmatians; their king Rasparaganus received Roman citizenship, client king status, and possibly an increased subsidy. Hadrian's presence on the Dacian front at this time is mere conjecture, but Dacia was included in his coin series with allegories of the provinces. A controlled, partial withdrawal of troops from the Dacian plains would have been less costly than maintaining several Roman cavalry units and a supporting network of fortifications.
Hadrian retained control over Osroene through the client king Parthamaspates, who had once served as Trajan's client king of Parthia; and around 121, Hadrian negotiated a peace treaty with the now-independent Parthia. Late in his reign (135), the Alani attacked Roman Cappadocia with the covert support of Pharasmanes, king of Caucasian Iberia. The attack was repulsed by Hadrian's governor, the historian Arrian, who subsequently installed a Roman "adviser" in Iberia. Arrian kept Hadrian well-informed on matters related to the Black Sea and the Caucasus. Between 131 and 132 he sent Hadrian a lengthy letter ("Periplus of the Euxine") on a maritime trip around the Black Sea, intended to offer relevant information in case a Roman intervention was needed.
Hadrian also developed permanent fortifications and military posts along the empire's borders ("limites", sg. "limes") to support his policy of stability, peace and preparedness. This helped keep the military usefully occupied in times of peace; his Wall across Britannia was built by ordinary troops. A series of mostly wooden fortifications, forts, outposts and watchtowers strengthened the Danube and Rhine borders. Troops practised intensive, regular drill routines. Although his coins showed military images almost as often as peaceful ones, Hadrian's policy was peace through strength, even threat, with an emphasis on "disciplina" (discipline), which was the subject of two monetary series. Cassius Dio praised Hadrian's emphasis on "spit and polish" as cause for the generally peaceful character of his reign. Fronto expressed other opinions on the subject. In his view, Hadrian preferred war games to actual war, and enjoyed "giving eloquent speeches to the armies" – like the inscribed series of addresses he made while on an inspection tour, during 128, at the new headquarters of Legio III Augusta in Lambaesis.
Faced with a shortage of legionary recruits from Italy and other Romanised provinces, Hadrian systematised the use of less costly "numeri" – ethnic non-citizen troops with special weapons, such as Eastern mounted archers – in low-intensity, mobile defensive tasks such as dealing with border infiltrators and skirmishers. Hadrian is also credited with introducing units of heavy cavalry (cataphracts) into the Roman army. Fronto later blamed Hadrian for declining standards in the Roman army of his own time.
Hadrian enacted, through the jurist Salvius Julianus, the first attempt to codify Roman law. This was the Perpetual Edict, according to which the legal actions of praetors became fixed statutes, and as such could no longer be subjected to personal interpretation or change by any magistrate other than the Emperor. At the same time, following a procedure initiated by Domitian, Hadrian made the Emperor's legal advisory board, the "consilium principis" ("council of the princeps"), into a permanent body, staffed by salaried legal aides. Its members were mostly drawn from the equestrian class, replacing the earlier freedmen of the Imperial household. This innovation marked the superseding of surviving Republican institutions by an openly autocratic political system. The reformed bureaucracy was supposed to exercise administrative functions independently of traditional magistracies; in itself, this did not detract from the Senate's position. The new civil servants were free men and as such supposed to act on behalf of the interests of the "Crown", not of the Emperor as an individual. However, the Senate never accepted the loss of its prestige caused by the emergence of a new aristocracy alongside it, placing more strain on the already troubled relationship between the Senate and the Emperor.
Hadrian codified the customary legal privileges of the wealthiest, most influential or highest status citizens (described as "splendidiores personae" or "honestiores"), who held a traditional right to pay fines when found guilty of relatively minor, non-treasonous offences. Low ranking persons – "alii" ("the others"), including low-ranking citizens – were "humiliores" who for the same offences could be subject to extreme physical punishments, including forced labour in the mines or in public works, as a form of fixed-term servitude. While Republican citizenship had carried at least notional equality under law, and the right to justice, offences in Imperial courts were judged and punished according to the relative prestige, rank, reputation and moral worth of both parties; senatorial courts were apt to be lenient when trying one of their peers, and to deal very harshly with offences committed against one of their number by low ranking citizens or non-citizens. For treason (maiestas) beheading was the worst punishment that the law could inflict on "honestiores"; the "humiliores" might suffer crucifixion, burning, or condemnation to the beasts in the arena.
A great number of Roman citizens maintained a precarious social and economic advantage at the lower end of the hierarchy. Hadrian found it necessary to clarify that decurions, the usually middle-class, elected local officials responsible for running the ordinary, everyday official business of the provinces, counted as "honestiores"; so did soldiers, veterans and their families, as far as civil law was concerned; by implication, all others, including freedmen and slaves, counted as "humiliores". Like most Romans, Hadrian seems to have accepted slavery as morally correct, an expression of the same natural order that rewarded "the best men" with wealth, power and respect. When confronted by a crowd demanding the freeing of a popular slave charioteer, Hadrian replied that he could not free a slave belonging to another person. However, he limited the punishments that slaves could suffer; they could be lawfully tortured to provide evidence, but they could not be lawfully killed unless guilty of a capital offence. Masters were also forbidden to sell slaves to a gladiator trainer (lanista) or to a procurer, except as legally justified punishment. Hadrian also forbade torture of free defendants and witnesses. He abolished ergastula, private prisons for slaves in which kidnapped free men had sometimes been illegally detained.
Hadrian issued a general rescript, imposing a ban on castration, whether performed on freedman or slave, voluntarily or not, on pain of death for both the performer and the patient. Under the "Lex Cornelia de Sicariis et Veneficis", castration was placed on a par with conspiracy to murder, and punished accordingly. Notwithstanding his philhellenism, Hadrian was also a traditionalist. He enforced dress-standards among the "honestiores"; senators and knights were expected to wear the toga when in public. He imposed strict separation between the sexes in theatres and public baths; to discourage idleness, the latter were not allowed to open until 2.00 in the afternoon, "except for medical reasons".
One of Hadrian's immediate duties on accession was to seek senatorial consent for the apotheosis of his predecessor, Trajan, and any members of Trajan's family to whom he owed a debt of gratitude. Matidia Augusta, Hadrian's mother-in-law, died in December 119, and was duly deified. Hadrian may have stopped at Nemausus during his return from Britannia, to oversee the completion or foundation of a basilica dedicated to his patroness Plotina. She had recently died in Rome and had been deified at Hadrian's request.
As Emperor, Hadrian was also Rome's pontifex maximus, responsible for all religious affairs and the proper functioning of official religious institutions throughout the empire. His Hispano-Roman origins and marked pro-Hellenism shifted the focus of the official imperial cult, from Rome to the Provinces. While his standard coin issues still identified him with the traditional "genius populi Romani", other issues stressed his personal identification with "Hercules Gaditanus" (Hercules of Gades), and Rome's imperial protection of Greek civilisation. He promoted Sagalassos in Greek Pisidia as the Empire's leading Imperial cult centre; his exclusively Greek "Panhellenion" extolled Athens as the spiritual centre of Greek culture.
Hadrian added several Imperial cult centres to the existing roster, particularly in Greece, where traditional intercity rivalries were commonplace. Cities promoted as Imperial cult centres drew Imperial sponsorship of festivals and sacred games, attracted tourism, trade and private investment. Local worthies and sponsors were encouraged to seek self-publicity as cult officials under the aegis of Roman rule, and to foster reverence for Imperial authority. Hadrian's rebuilding of long-established religious centres would have further underlined his respect for the glories of classical Greece – something well in line with contemporary antiquarian tastes. During Hadrian's third and last trip to the Greek East, there seems to have been an upwelling of religious fervour, focused on Hadrian himself. He was given personal cult as a deity, monuments and civic homage, according to the religious syncretism at the time. He may have had the great Serapeum of Alexandria rebuilt, following damage sustained in 116, during the Kitos War.
In 136, just two years before his death, Hadrian dedicated his Temple of Venus and Roma. It was built on land he had set aside for the purpose in 121, formerly the site of Nero's Golden House. The temple was the largest in Rome, and was built in an Hellenising style, more Greek than Roman. The temple's dedication and statuary associated the worship of the traditional Roman goddess Venus, divine ancestress and protector of the Roman people, with the worship of the goddess Roma – herself a Greek invention, hitherto worshiped only in the provinces – to emphasise the universal nature of the empire.
Hadrian had Antinous deified as Osiris-Antinous by an Egyptian priest at the ancient Temple of Ramesses II, very near the place of his death. Hadrian dedicated a new temple-city complex there, built in a Graeco-Roman style, and named it Antinoöpolis. It was a proper Greek polis; it was granted an Imperially subsidised alimentary scheme similar to Trajan's alimenta, and its citizens were allowed intermarriage with members of the native population, without loss of citizen-status. Hadrian thus identified an existing native cult (to Osiris) with Roman rule. The cult of Antinous was to become very popular in the Greek-speaking world, and also found support in the West. In Hadrian's villa, statues of the Tyrannicides, with a bearded Aristogeiton and a clean-shaven Harmodios, linked his favourite to the classical tradition of Greek love. In the west, Antinous was identified with the Celtic sun-god Belenos.
Hadrian was criticised for the open intensity of his grief at Antinous's death, particularly as he had delayed the apotheosis of his own sister Paulina after her death. Nevertheless, his recreation of the deceased youth as a cult-figure found little opposition. Though not a subject of the state-sponsored, official Roman imperial cult, Antinous offered a common focus for the emperor and his subjects, emphasising their sense of community. Medals were struck with his effigy, and statues erected to him in all parts of the empire, in all kinds of garb, including Egyptian dress. Temples were built for his worship in Bithynia and Mantineia in Arcadia. In Athens, festivals were celebrated in his honour and oracles delivered in his name. As an "international" cult figure, Antinous had an enduring fame, far outlasting Hadrian's reign. Local coins with his effigy were still being struck during Caracalla's reign, and he was invoked in a poem to celebrate the accession of Diocletian.
Hadrian continued Trajan's policy on Christians; they should not be sought out, and should only be prosecuted for specific offences, such as refusal to swear oaths. In a rescript addressed to the proconsul of Asia, Gaius Minicius Fundanus, and preserved by Justin Martyr, Hadrian laid down that accusers of Christians had to bear the burden of proof for their denunciations or be punished for "calumnia" (defamation).
Hadrian had an abiding and enthusiastic interest in art, architecture and public works. Rome's Pantheon (temple "to all the gods"), originally built by Agrippa and destroyed by fire in 80, was partly restored under Trajan and completed under Hadrian in the domed form it retains to this day. Hadrian's Villa at Tibur (Tivoli) provides the greatest Roman equivalent of an Alexandrian garden, complete with domed Serapeum, recreating a sacred landscape. An anecdote from Cassius Dio's history suggests Hadrian had a high opinion of his own architectural tastes and talents, and took their rejection as a personal offence: at some time before his reign, his predecessor Trajan was discussing an architectural problem with Apollodorus of Damascus – architect and designer of Trajan's Forum, the Column commemorating his Dacian conquest, and his bridge across the Danube – when Hadrian interrupted to offer his advice. Apollodorus gave him a scathing response: "Be off, and draw your gourds [a sarcastic reference to the domes which Hadrian apparently liked to draw]. You don't understand any of these matters." Dio claims that once Hadrian became emperor, he showed Apollodorus drawings of the gigantic Temple of Venus and Roma, implying that great buildings could be created without his help. When Apollodorus pointed out the building's various insoluble problems and faults, Hadrian was enraged, sent him into exile and later put him to death on trumped up charges.
Hadrian wrote poetry in both Latin and Greek; one of the few surviving examples is a Latin poem he reportedly composed on his deathbed (see below). Some of his Greek productions found their way into the Palatine Anthology. He also wrote an autobiography, which "Historia Augusta" says was published under the name of Hadrian's freedman Phlegon of Tralles. It was not, apparently, a work of great length or revelation, but designed to scotch various rumours or explain Hadrian's most controversial actions. It is possible that this autobiography had the form of a series of open letters to Antoninus Pius.
Hadrian was a passionate hunter from a young age. In northwest Asia, he founded and dedicated a city to commemorate a she-bear he killed. It is documented that in Egypt he and his beloved Antinous killed a lion. In Rome, eight reliefs featuring Hadrian in different stages of hunting decorate a building that began as a monument celebrating a kill.
Hadrian's philhellenism may have been one reason for his adoption, like Nero before him, of the beard as suited to Roman imperial dignity; Dio of Prusa had equated the growth of the beard with the Hellenic ethos. Hadrian's beard may also have served to conceal his natural facial blemishes. All emperors before him (except Nero) had been clean-shaven; emperors who came after him until Constantine the Great were bearded and this imperial fashion was revived again by Phocas at the beginning of the 7th century.
Hadrian was familiar with the rival philosophers Epictetus and Favorinus, and with their works, and held an interest in Roman philosophy. During his first stay in Greece, before he became emperor, he attended lectures by Epictetus at Nicopolis. Shortly before the death of Plotina, Hadrian had granted her wish that the leadership of the Epicurean School in Athens be open to a non-Roman candidate.
During Hadrian's time as Tribune of the Plebs, omens and portents supposedly announced his future imperial condition. According to the "Historia Augusta", Hadrian had a great interest in astrology and divination and had been told of his future accession to the Empire by a grand-uncle who was himself a skilled astrologer.
According to the "Historia Augusta", Hadrian composed the following poem shortly before his death:
The poem, as transmitted in the "Historia Augusta", runs: "Animula vagula blandula / hospes comesque corporis / quae nunc abibis in loca / pallidula rigida nudula / nec ut soles dabis iocos" ("Little soul, wandering, gentle, guest and companion of the body, to what places will you now depart, pale, stiff, naked, no longer making jokes as you used to?").
The poem has enjoyed remarkable popularity, but uneven critical acclaim. According to Aelius Spartianus, the alleged author of Hadrian's biography in the "Historia Augusta", Hadrian "wrote also similar poems in Greek, not much better than this one". T. S. Eliot's poem "Animula" may have been inspired by Hadrian's, though the relationship is not unambiguous.
Hadrian has been described as the most versatile of all Roman emperors, who "adroitly concealed a mind envious, melancholy, hedonistic, and excessive with respect to his own ostentation; he simulated restraint, affability, clemency, and conversely disguised the ardor for fame with which he burned." His successor Marcus Aurelius, in his "Meditations", lists those to whom he owes a debt of gratitude; Hadrian is conspicuously absent. Hadrian's tense, authoritarian relationship with his senate was acknowledged a generation after his death by Fronto, himself a senator, who wrote in one of his letters to Marcus Aurelius that "I praised the deified Hadrian, your grandfather, in the senate on a number of occasions with great enthusiasm, and I did this willingly, too [...] But, if it can be said – respectfully acknowledging your devotion towards your grandfather – I wanted to appease and assuage Hadrian as I would Mars Gradivus or Dis Pater, rather than to love him." Fronto adds, in another letter, that he kept some friendships, during Hadrian's reign, "under the risk of my life" ("cum periculo capitis"). Hadrian underscored the autocratic character of his reign by counting his "dies imperii" from the day of his acclamation by the armies, rather than the senate, and legislating by frequent use of imperial decrees to bypass the Senate's approval. The veiled antagonism between Hadrian and the Senate never grew to overt confrontation as had happened during the reigns of overtly "bad" emperors, because Hadrian knew how to remain aloof and avoid an open clash. That Hadrian spent half of his reign away from Rome in constant travel probably helped to mitigate the worst of this permanently strained relationship.
In 1503, Niccolò Machiavelli, though an avowed republican, esteemed Hadrian as an ideal "princeps", one of Rome's Five Good Emperors. Friedrich Schiller called Hadrian "the Empire's first servant". Edward Gibbon admired his "vast and active genius" and his "equity and moderation", and considered Hadrian's era as part of the "happiest era of human history". In Ronald Syme's view, Hadrian "was a Führer, a Duce, a Caudillo". According to Syme, Tacitus' description of the rise and accession of Tiberius is a disguised account of Hadrian's authoritarian Principate. Again according to Syme, Tacitus' "Annals" were a work of contemporary history, written "during Hadrian's reign and hating it".
While the balance of ancient literary opinion almost invariably compares Hadrian unfavourably to his predecessor, modern historians have sought to examine his motives, purposes and the consequences of his actions and policies. For M.A. Levi, a summing-up of Hadrian's policies should stress the ecumenical character of the Empire, his development of an alternate bureaucracy disconnected from the Senate and adapted to the needs of an "enlightened" autocracy, and his overall defensive strategy; this would qualify him as a grand Roman political reformer, creator of an openly absolute monarchy to replace a sham senatorial republic. Robin Lane Fox credits Hadrian as creator of a unified Greco-Roman cultural tradition, and as the end of this same tradition; Hadrian's attempted "restoration" of Classical culture within a non-democratic Empire drained it of substantive meaning, or, in Fox's words, "kill[ed] it with kindness".
In Hadrian's time, there was already a well-established convention that one could not write a contemporary Roman imperial history for fear of contradicting what the emperors wanted to say, read or hear about themselves. As an earlier Latin source, Fronto's correspondence and works attest to Hadrian's character and the internal politics of his rule. Greek authors such as Philostratus and Pausanias wrote shortly after Hadrian's reign, but confined their scope to the general historical framework that shaped Hadrian's decisions, especially those relating to the Greek-speaking world, Greek cities and notables. Pausanias wrote at length in praise of Hadrian's benefactions to Greece in general and Athens in particular. Political histories of Hadrian's reign come mostly from later sources, some of them written centuries after the reign itself. The early 3rd-century "Roman History" by Cassius Dio, written in Greek, gave a general account of Hadrian's reign, but the original is lost, and what survives, aside from some fragments, is a brief, Byzantine-era abridgment by the 11th-century monk Xiphilinus, who focused mostly on Hadrian's religious interests, the Bar Kokhba war, Hadrian's moral qualities, and his fraught relationship with the Senate, and on little else. The principal source for Hadrian's life and reign is therefore in Latin: one of several late 4th-century imperial biographies, collectively known as the "Historia Augusta". The collection as a whole is notorious for its unreliability ("a mish mash of actual fact, cloak and dagger, sword and sandal, with a sprinkling of "Ubu Roi""), but most modern historians consider its account of Hadrian to be relatively free of outright fictions, and probably based on sound historical sources, principally one of a lost series of imperial biographies by the prominent 3rd-century senator Marius Maximus, who covered the reigns of Nerva through to Elagabalus.
The first modern historian to produce a chronological account of Hadrian's life, supplementing the written sources with other epigraphical, numismatic, and archaeological evidence, was the German 19th-century medievalist Ferdinand Gregorovius. A 1907 biography by Weber, a German nationalist and later Nazi Party supporter, incorporates the same archaeological evidence to produce an account of Hadrian, and especially his Bar Kokhba war, that has been described as ideologically loaded. Epigraphical studies in the post-war period helped support alternative views of Hadrian. Anthony Birley's 1997 biography of Hadrian sums up and reflects these developments in Hadrian historiography.
Herman Melville
Herman Melville (born Melvill; August 1, 1819 – September 28, 1891) was an American novelist, short story writer and poet of the American Renaissance period. Among his best-known works are "Moby-Dick" (1851), "Typee" (1846), a romanticized account of his experiences in Polynesia, and "Billy Budd, Sailor", a posthumously published novella. Although his reputation was not high at the time of his death, the centennial of his birth in 1919 was the starting point of a Melville revival and "Moby-Dick" grew to be considered one of the great American novels.
Melville was born in New York City, the third child of a prosperous merchant whose death in 1832 left the family in financial straits. He took to sea in 1839 as a common sailor on a merchant ship, and then on the whaler "Acushnet", but he jumped ship in the Marquesas Islands. "Typee", his first book, and its sequel, "Omoo" (1847), were travel-adventures based on his encounters with the peoples of the islands. Their success gave him the financial security to marry Elizabeth Shaw, the daughter of a prominent Boston family. "Mardi" (1849), a romance-adventure and his first book not based on his own experience, was not well received. "Redburn" (1849) and "White-Jacket" (1850), both tales based on his experience as a well-born young man at sea, were given respectable reviews but did not sell well enough to support his expanding family.
Melville's growing literary ambition showed in "Moby-Dick" (1851), which took nearly a year and a half to write. But it did not find an audience, and critics scorned his psychological novel "Pierre" (1852). From 1853 to 1856, Melville published short fiction in magazines, including "Benito Cereno" and "Bartleby, the Scrivener". In 1857, he traveled to England, toured the Near East, and published his last work of prose, "The Confidence-Man" (1857). He moved to New York in 1863 to take a position as Customs Inspector.
From that point, Melville focused his creative powers on poetry. "Battle-Pieces and Aspects of the War" (1866) was his poetic reflection on the moral questions of the American Civil War. In 1867, his eldest child Malcolm died at home from a self-inflicted gunshot. Melville's metaphysical epic "Clarel: A Poem and Pilgrimage in the Holy Land" was published in 1876. In 1886, his other son Stanwix died of apparent tuberculosis, and Melville retired. During his last years, he privately published two volumes of poetry, and left one volume unpublished. The novella "Billy Budd" was left unfinished at his death but was published posthumously in 1924. Melville died from cardiovascular disease in 1891.
Herman Melville was born in New York City on August 1, 1819, to Allan Melvill (1782–1832) and Maria (Gansevoort) Melvill (1791–1872). Herman was the third of eight children in a family of Dutch background. His siblings, who played important roles in his career as well as in his emotional life, were Gansevoort (1815–1846); Helen Maria (1817–1888); Augusta (1821–1876); Allan (1823–1872); Catherine (1825–1905); Frances Priscilla (1827–1885); and Thomas (1830–1884), who eventually became a governor of Sailors Snug Harbor. Part of a well-established and colorful Boston family, Allan Melvill spent much time out of New York and in Europe as a commission merchant and an importer of French dry goods.
Both of Melville's grandfathers were heroes of the Revolutionary War, and Melville found satisfaction in his "double revolutionary descent". Major Thomas Melvill (1751–1832) had taken part in the Boston Tea Party, and his maternal grandfather, General Peter Gansevoort (1749–1812), was famous for having commanded the defense of Fort Stanwix in New York in 1777. Major Melvill sent his son Allan (Herman's father) to France instead of college at the turn of the nineteenth century, where he spent two years in Paris and learned to speak and write French fluently. In 1814, Allan, who subscribed to his father's Unitarianism, married Maria Gansevoort, who was committed to the more strict and biblically oriented Dutch Reformed version of the Calvinist creed of her family. This more severe Protestantism of the Gansevoorts' tradition ensured she was well versed in the Bible, both in English as well as in Dutch, the language she had grown up speaking with her parents.
On August 19, almost three weeks after his birth, Herman Melville was baptized at home by a minister of the South Reformed Dutch Church. During the 1820s, Melville lived a privileged, opulent life in a household with three or more servants at a time. At four-year intervals, the family would move to more spacious and elegant quarters, finally settling on Broadway in 1828. Allan Melvill lived beyond his means and on large sums he borrowed from both his father and his wife's widowed mother. Although his wife's opinion of his financial conduct is unknown, biographer Hershel Parker suggests Maria "thought her mother's money was infinite and that she was entitled to much of her portion" while her children were young. How well the parents managed to hide the truth from their children is "impossible to know", according to biographer Andrew Delbanco.
In 1830, Maria's family finally lost patience and their support came to a halt, at which point Allan's total debt to both families exceeded $20,000, showing his lack of financial responsibility. The relative happiness and comfort of Melville's early childhood, biographer Newton Arvin writes, depended not so much on Allan's wealth, or his lack of fiscal prudence, as on the "exceptionally tender and affectionate spirit in all the family relationships, especially in the immediate circle". Arvin describes Allan as "a man of real sensibility and a particularly warm and loving father," while Maria was "warmly maternal, simple, robust, and affectionately devoted to her husband and her brood".
Herman's education began in 1824 when he was five, around the time the Melvills moved to a newly built house at 33 Bleecker Street in Manhattan. Herman and his older brother, Gansevoort, were sent to the New York Male High School. In 1826, the same year that Herman contracted scarlet fever, Allan Melvill described him as "very backwards in speech & somewhat slow in comprehension" at first, but he soon made rapid progress, and Allan was surprised "that Herman proved the best Speaker in the introductory Department". In 1829, both Gansevoort and Herman were transferred to Columbia Grammar and Preparatory School, and Herman enrolled in the English Department on September 28. "Herman I think is making more progress than formerly," Allan wrote in May 1830 to Major Melvill, "and without being a bright Scholar, he maintains a respectable standing, and would proceed further, if he could only be induced to study more—being a most amiable and innocent child, I cannot find it in my heart to coerce him".
Emotionally unstable and behind on paying the rent for the house on Broadway, Herman's father tried to recover from his setbacks by moving his family to Albany, New York, in 1830 and going into the fur business. Herman attended the Albany Academy from October 1830 to October 1831, where he took the standard preparatory course, studying reading and spelling; penmanship; arithmetic; English grammar; geography; natural history; universal, Greek, Roman and English history; classical biography; and Jewish antiquities. "The ubiquitous classical references in Melville's published writings," as Melville scholar Merton Sealts observed, "suggest that his study of ancient history, biography, and literature during his school days left a lasting impression on both his thought and his art, as did his almost encyclopedic knowledge of both the Old and the New Testaments". Parker speculates that he left the Academy in October 1831 because "even the tiny tuition fee seemed too much to pay". His brothers Gansevoort and Allan continued their attendance a few months longer, Gansevoort until March the next year.
In December, Herman's father returned from New York City by steamboat, but ice forced him to travel the last seventy miles over two days and two nights in an open horse carriage, causing him to become ill. In early January, he began to show "signs of delirium," and his situation grew worse until his wife felt his suffering deprived him of his intellect. Two months before reaching fifty, Allan Melvill died on January 28, 1832. As Herman was no longer attending school, he likely witnessed these scenes. Twenty years later he described a similar death in "Pierre".
The death of Allan caused many major shifts in the family's material and spiritual circumstances. One result was the greater influence of his mother's religious beliefs. Maria sought consolation in her faith and in April was admitted as a member of the First Reformed Dutch Church. Herman's saturation in orthodox Calvinism was surely the most decisive intellectual and spiritual influence of his early life.
Two months after his father's death, Gansevoort entered the cap and fur business. Uncle Peter Gansevoort, a director of the New York State Bank, got Herman a job as clerk for $150 a year. Biographers cite a passage from "Redburn" when trying to answer what Herman must have felt then: "I had learned to think much and bitterly before my time," the narrator remarks, adding, "I must not think of those delightful days, before my father became a bankrupt ... and we removed from the city; for when I think of those days, something rises up in my throat and almost strangles me". With Melville, Arvin argues, one has to reckon with "psychology, the tormented psychology, of the decayed patrician".
When Melville's paternal grandfather died on September 16, 1832, Maria and her children discovered that Allan had, somewhat unscrupulously, borrowed more than his share of his inheritance, meaning Maria received only $20. His paternal grandmother died almost exactly seven months later. Melville did his job well at the bank; though he was only fourteen in 1834, the bank considered him competent enough to be sent to Schenectady, New York, on an errand. Not much else is known from this period, except that he was very fond of drawing. The visual arts became a lifelong interest. Around May 1834, the Melvilles moved to another house in Albany, a three-story brick house. That same month a fire destroyed Gansevoort's skin-preparing factory, which left him with personnel he could neither employ nor afford. Instead he pulled Melville out of the bank to man the cap and fur store.
In 1835, while still working in the store, Melville enrolled in Albany Classical School, perhaps using Maria's part of the proceeds from the sale of the estate of his maternal grandmother in March 1835. In September of the following year Herman was back in Albany Academy in the Latin course. He also participated in debating societies, in an apparent effort to make up as much as he could for his missed years of schooling. In this period he read Shakespeare—at least "Macbeth", whose witch scenes gave him the chance to teasingly scare his sisters. In March 1837, he was again withdrawn from Albany Academy.
Gansevoort served as a role model and support for Melville in many ways throughout his life, at this time particularly in forming a self-directed educational plan. In early 1834 Gansevoort had become a member of Albany's Young Men's Association for Mutual Improvement, and in January 1835 Melville joined him there. Gansevoort also had copies of John Todd's "Index Rerum", a blank register for indexing remarkable passages from books one had read for easy retrieval. Among the sample entries which Gansevoort made showing his academic scrupulousness was "Pequot, beautiful description of the war with," with a short title reference to the place in Benjamin Trumbull's "A Complete History of Connecticut" (Volume I in 1797, and Volume II in 1818) where the description could be found. The two surviving volumes of Gansevoort's are the best evidence for Melville's reading in this period. Gansevoort's entries include books Melville used for "Moby-Dick" and "Clarel", such as "Parsees—of India—an excellent description of their character, and religion and an account of their descent—East India Sketch Book p. 21". Other entries cover the panther, the pirate's cabin, and the storm at sea from James Fenimore Cooper's "The Red Rover", as well as Saint-Saba.
The Panic of 1837 forced Gansevoort to file for bankruptcy in April. In June, Maria told the younger children they must leave Albany for somewhere cheaper. Gansevoort began studying law in New York City while Herman managed the farm before getting a teaching position at Sikes District School near Lenox, Massachusetts. He taught about 30 students of various ages, including some his own age.
The semester over, he returned to his mother in 1838. In February he was elected president of the Philo Logos Society, which Peter Gansevoort invited to move into Stanwix Hall for no rent. In the "Albany Microscope" in March, Melville published two polemical letters about issues in vogue in the debating societies. Historians Leon Howard and Hershel Parker suggest the motive behind the letters was a youthful desire to have his rhetorical skills publicly recognized. In May, the Melvilles moved to a rented house in Lansingburgh, almost 12 miles north of Albany. Nothing is known about what Melville did or where he went for several months after he finished teaching at Sikes. On November 12, five days after arriving in Lansingburgh, Melville paid for a term at Lansingburgh Academy to study surveying and engineering. In an April 1839 letter recommending Herman for a job in the Engineer Department of the Erie Canal, Peter Gansevoort says his nephew "possesses the ambition to make himself useful in a business which he desires to make his profession," but no job resulted.
Just weeks after this failure, Melville's first known published essay appeared. Using the initials "L.A.V"., Herman contributed "Fragments from a Writing Desk" to the weekly newspaper "Democratic Press and Lansingburgh Advertiser", which printed it in two installments, the first on May 4. According to Merton Sealts, his use of heavy-handed allusions reveals familiarity with the work of William Shakespeare, John Milton, Walter Scott, Richard Brinsley Sheridan, Edmund Burke, Samuel Taylor Coleridge, Lord Byron, and Thomas Moore. Parker calls the piece "characteristic Melvillean mood-stuff" and considers its style "excessive enough [...] to indulge his extravagances and just enough overdone to allow him to deny that he was taking his style seriously". For Delbanco, the style is "overheated in the manner of Poe, with sexually charged echoes of Byron and "The Arabian Nights"".
On May 31, 1839, Gansevoort, then living in New York City, wrote that he was sure Herman could get a job on a whaler or merchant vessel. The next day, Herman signed aboard the merchant ship "St. Lawrence" as a "boy" (a green hand), which cruised from New York to Liverpool. "Redburn: His First Voyage" (1849) draws on his experiences in this journey; at least two of the nine guide-books listed in chapter 30 of the book had been part of Allan Melvill's library. He arrived back in New York on October 1, 1839, and resumed teaching, now at Greenbush, New York, but left after one term because he had not been paid. In the summer of 1840 he and his friend James Murdock Fly went to Galena, Illinois, to see if his Uncle Thomas could help them find work. Unsuccessful, he and his friend returned home in autumn, likely by way of St. Louis and up the Ohio River.
Spurred by contemporary popular reading, including Richard Henry Dana Jr.'s new book "Two Years Before the Mast" and Jeremiah N. Reynolds's account, in the May 1839 issue of "The Knickerbocker" magazine, of the hunt for a great white sperm whale named Mocha Dick, Melville and Gansevoort traveled to New Bedford, where Melville signed up for a whaling voyage aboard a new ship, the "Acushnet". Built in 1840, the ship measured some 104 feet in length, almost 28 feet in breadth, and almost 14 feet in depth. She measured slightly less than 360 tons, had two decks and three masts, but no quarter galleries. Melville signed a contract on Christmas Day with the ship's agent as a "green hand" for 1/175th of whatever profits the voyage would yield. On Sunday the 27th the brothers heard the Reverend Enoch Mudge preach at the Seamen's Bethel on Johnny-Cake Hill, where white marble cenotaphs on the walls memorialized local sailors who had died at sea, often in battle with whales. When he signed the crew list the next day he was advanced $84.
On January 3, 1841, the "Acushnet" set sail. Melville slept with some twenty others in the forecastle; Captain Valentine Pease, the mates, and the skilled men slept aft. Whales were found near The Bahamas, and in March 150 barrels of oil were sent home from Rio de Janeiro. Cutting in and trying-out (boiling) a single whale took about three days, and a whale yielded approximately one barrel of oil per foot of length and per ton of weight (the average whale weighed 40 to 60 tons). The oil was kept on deck for a day to cool off, and was then stowed down; scrubbing the deck completed the labor. An average voyage meant that some forty whales were killed to yield some 1600 barrels of oil.
On April 15, the "Acushnet" sailed around Cape Horn and traveled to the South Pacific, where the crew sighted whales without catching any. The ship then went up the coast of Chile to the region of Selkirk Island, and on May 7, near the Juan Fernández Islands, she had 160 barrels. On June 23 the ship anchored for the first time since Rio, in Santa Harbor. The cruising grounds the "Acushnet" was sailing attracted much traffic, and Captain Pease not only paused to visit other whalers, but at times hunted in company with them. From July 23 into August the "Acushnet" regularly gammed with the "Lima" from Nantucket, and Melville met William Henry Chase, the son of Owen Chase, who gave him a copy of his father's account of his adventures aboard the "Essex". Ten years later Melville wrote in his other copy of the book: "The reading of this wondrous story upon the landless sea, & close to the very latitude of the shipwreck had a surprising effect upon me".
On September 25 the ship reported 600 barrels of oil to another whaler, and in October 700 barrels. On October 24 the "Acushnet" crossed the equator to the north, and six or seven days later arrived at the Galápagos Islands. This short visit would be the basis for "The Encantadas". On November 2, the "Acushnet" and three other American whalers were hunting together near the Galápagos Islands; Melville later exaggerated that number in Sketch Fourth of "The Encantadas". From November 19 to 25 the ship anchored at Chatham's Isle, and on December 2 reached the coast of Peru and anchored at Tombez near Paita, with 570 barrels of oil on board. On December 27 the "Acushnet" sighted Cape Blanco, off Ecuador, Point St. Elena was sighted the next day, and on January 6, 1842, the ship approached the Galápagos Islands from the southeast. From February 13 to May 7, seven sightings of sperm whales were recorded but none killed. From early May to early June, the "Acushnet" hunted several times in company with the "Columbus" of New Bedford, which also took letters from Melville's ship; the two ships were in the same area just south of the Equator. On June 16 she carried 750 barrels, and sent home 200 on the "Herald the Second". On June 23, the "Acushnet" reached the Marquesas Islands, and anchored at Nuku Hiva.
A time of some emotional turbulence for Melville ensued over the next summer months. On July 9, 1842, Melville and his shipmate Richard Tobias Greene jumped ship at Nuku Hiva Bay and ventured into the mountains to avoid capture. While Melville's first book, "Typee" (1846), is loosely based on his stay in or near the Taipi Valley, scholarly research has increasingly shown that much if not all of this account was either taken from Melville's readings or exaggerated to dramatize a contrast between idyllic native culture and Western civilization. On August 9, Melville boarded the Australian whaler "Lucy Ann", bound for Tahiti, where on arrival he took part in a mutiny and was briefly jailed in the native "Calabooza Beretanee". In October, he and crew mate John B. Troy escaped Tahiti for Eimeo. He then spent a month as beachcomber and island rover ("omoo" in Tahitian), eventually crossing over to Moorea. He drew on these experiences for "Omoo", the sequel to "Typee". In November, he contracted to be a seaman on the Nantucket whaler "Charles & Henry" for a six-month cruise (November 1842 − April 1843), and was discharged at Lahaina, Maui in the Hawaiian Islands in May 1843.
After four months of working several jobs, including as a clerk, he joined the US Navy on August 20 as an ordinary seaman in the crew of the frigate "United States". During the next year, the homeward bound ship visited the Marquesas Islands, Tahiti, and Valparaiso, and then, from summer to fall 1844, Mazatlan, Lima, and Rio de Janeiro, before reaching Boston on October 3. Melville was discharged on October 14. This Navy experience is used in "White-Jacket" (1850), Melville's fifth book. Melville's wander-years created what biographer Arvin calls "a settled hatred of external authority, a lust for personal freedom" and a "growing and intensifying sense of his own exceptionalness as a person," along with "the resentful sense that circumstance and mankind together had already imposed their will upon him in a series of injurious ways". Scholar Robert Milder believes the encounter with the wide ocean, where he was seemingly abandoned by God, led Melville to experience a "metaphysical estrangement" and influenced his social views in two ways: first, that he belonged to the genteel classes but sympathized with the "disinherited commons" he had been placed among; and second that experiencing the cultures of Polynesia let him view the West from an outsider's perspective.
Upon his return, Melville regaled his family and friends with his adventurous tales and romantic experiences, and they urged him to put them into writing. Melville completed "Typee", his first book, in the summer of 1845 while living in Troy, New York. His brother Gansevoort found a publisher for it in London, where it was published in February 1846 by John Murray in his travel adventure series. It became an overnight bestseller in England, then in New York, when it was published on March 17 by Wiley & Putnam.
Melville extended the period his narrator spent on the island by three months, made it appear he understood the native language, and incorporated material from source books he had assembled. Milder calls "Typee" "an appealing mixture of adventure, anecdote, ethnography, and social criticism presented with a genial latitudinarianism that gave novelty to a South Sea idyll at once erotically suggestive and romantically chaste".
An unsigned review in the "Salem Advertiser" written by Nathaniel Hawthorne called the book a "skilfully managed" narrative by an author with "that freedom of view ... which renders him tolerant of codes of morals that may be little in accordance with our own". Hawthorne stated: This book is lightly but vigorously written; and we are acquainted with no work that gives a freer and more effective picture of barbarian life, in that unadulterated state of which there are now so few specimens remaining. The gentleness of disposition that seems akin to the delicious climate, is shown in contrast with the traits of savage fierceness ... He has that freedom of view—it would be too harsh to call it laxity of principle—which renders him tolerant of codes of morals that may be little in accordance with our own, a spirit proper enough to a young and adventurous sailor, and which makes his book the more wholesome to our staid landsmen. The depictions of the "native girls are voluptuously colored, yet not more so than the exigencies of the subject appear to require". Some years later, prior to being introduced to Hawthorne, Melville wrote a review of Hawthorne's "Mosses from an Old Manse" in which he described Hawthorne's "power of blackness": Whether Hawthorne has simply availed himself of this mystical blackness as a means to the wondrous effects he makes it to produce in his lights and shades; or whether there really lurks in him, perhaps unknown to himself, a touch of Puritanic gloom—this, I cannot altogether tell. Certain it is, however, that this power of blackness in him derives its force from its appeals to that Calvinistic sense of Innate Depravity and Original Sin, from whose visitations, in some shape or another, no deeply thinking mind is always and wholly freed.
Pleased but not overwhelmed by the adulation of his new public, years later Melville expressed concern that he would "go down to posterity ... as a 'man who lived among the cannibals'!" The writing of "Typee" brought Melville back into contact with his friend Greene—Toby in the book—who wrote confirming Melville's account in newspapers. The two corresponded until 1863, and in his final years Melville "traced and successfully located his old friend" for a further meeting. In March 1847, "Omoo", a sequel to "Typee", was published by Murray in London, and in May by Harper in New York. "Omoo" is "a slighter but more professional book," according to Milder. "Typee" and "Omoo" gave Melville overnight renown as a writer and adventurer, and he often entertained by telling stories to his admirers. As the writer and editor Nathaniel Parker Willis wrote, "With his cigar and his Spanish eyes, he "talks" Typee and Omoo, just as you find the flow of his delightful mind on paper". In 1847 Melville tried unsuccessfully to find a "government job" in Washington.
In June 1847, Melville and Elizabeth "Lizzie" Knapp Shaw were engaged, after knowing each other for approximately three months. Melville had first asked her father, Lemuel Shaw, for her hand in March, but was turned down at the time. Shaw, Chief Justice of Massachusetts, had been a close friend of Melville's father, and his marriage with Melville's aunt Nancy was prevented only by her death. His warmth and financial support for the family continued after Allan's death. Melville dedicated his first book, "Typee", to him. Lizzie was raised by her grandmother and an Irish nurse. Arvin suggests that Melville's interest in Lizzie may have been stimulated by "his need of Judge Shaw's paternal presence". They were married on August 4, 1847. Lizzie described their marriage as "very unexpected, and scarcely thought of until about two months before it actually took place". She wanted to be married in church, but they had a private wedding ceremony at home to avoid possible crowds hoping to see the celebrity. The couple honeymooned in the then-British Province of Canada, and traveled to Montreal. They settled in a house on Fourth Avenue in New York City (now called Park Avenue).
According to scholars Joyce Deveau Kennedy and Frederick James Kennedy, Lizzie brought to their marriage a sense of religious obligation, an intent to make a home with Melville regardless of place, a willingness to please her husband by performing such "tasks of drudgery" as mending stockings, an ability to hide her agitation, and a desire "to shield Melville from unpleasantness". The Kennedys conclude their assessment with:
Biographer Robertson-Lorant cites "Lizzie's adventurous spirit and abundant energy," and she suggests that "her pluck and good humor might have been what attracted Melville to her, and vice versa". An example of such good humor appears in a letter in which Lizzie described not yet being used to married life: "It seems sometimes exactly as if I were here for a "visit". The illusion is quite dispelled however when Herman stalks into my room without even the ceremony of knocking, bringing me perhaps a button to sew on, or some equally romantic occupation". On February 16, 1849, the Melvilles' first child, Malcolm, was born.
In March 1849, "Mardi" was published by Richard Bentley in London, and in April by Harper in New York. Nathaniel Hawthorne thought it a rich book "with depths here and there that compel a man to swim for his life". According to Milder, the book began as another South Sea story but, as he wrote, Melville left that genre behind, first in favor of "a romance of the narrator Taji and the lost maiden Yillah," and then "to an allegorical voyage of the philosopher Babbalanja and his companions through the imaginary archipelago of Mardi".
In October 1849, "Redburn" was published by Bentley in London, and in November by Harper in New York. The bankruptcy and death of Allan Melvill and Melville's own youthful humiliations surface in this "story of outward adaptation and inner impairment". Biographer Robertson-Lorant regards the work as a deliberate attempt at popular appeal: "Melville modeled each episode almost systematically on every genre that was popular with some group of antebellum readers," combining elements of "the picaresque novel, the travelogue, the nautical adventure, the sentimental novel, the sensational French romance, the gothic thriller, temperance tracts, urban reform literature, and the English pastoral". His next novel, "White-Jacket", was published by Bentley in London in January 1850, and in March by Harper in New York.
The earliest surviving mention of "Moby-Dick" is from early May 1850, when Melville told fellow sea author Richard Henry Dana Jr. it was "half way" written. In June, he described the book to his English publisher as "a romance of adventure, founded upon certain wild legends in the Southern Sperm Whale Fisheries," and promised it would be done by the fall. The original manuscript has not survived, but over the next several months Melville radically transformed his initial plan, conceiving what Delbanco described in 2005 as "the most ambitious book ever conceived by an American writer".
From August 4 to 12, 1850, the Melvilles, Sarah Morewood, Evert Duyckinck, Oliver Wendell Holmes, and other literary figures from New York and Boston came to Pittsfield to enjoy a period of parties, picnics, dinners, and the like. Nathaniel Hawthorne and his publisher James T. Fields joined the group while Hawthorne's wife stayed at home to look after the children. Hawthorne and Melville had a deep, private conversation about Hawthorne's short story collection "Mosses from an Old Manse". Hawthorne invited Melville to stay for a few days, which was unusual because Hawthorne typically felt overnight guests prevented him from working. In the following days, Melville wrote the essay "Hawthorne and His Mosses," a review of Hawthorne's "Mosses from an Old Manse" that appeared in two installments, on August 17 and 24, in "The Literary World". Melville wrote that these stories revealed a dark side to Hawthorne, "shrouded in blackness, ten times black". Later that summer, Duyckinck sent Hawthorne copies of Melville's three latest books. Hawthorne read them, as he wrote to Duyckinck on August 29, "with a progressive appreciation of their author". He thought Melville in "Redburn" and "White-Jacket" put the reality "more unflinchingly" before his reader than any writer, and he thought "Mardi" was "a rich book, with depths here and there that compel a man to swim for his life. It is so good that one scarcely pardons the writer for not having brooded long over it, so as to make it a great deal better".
In September 1850, Melville borrowed three thousand dollars from his father-in-law Lemuel Shaw to buy a 160-acre farm in Pittsfield, Massachusetts. Melville called his new home Arrowhead because of the arrowheads that were dug up around the property during planting season. That winter, Melville paid Hawthorne an unexpected visit, only to discover he was working and "not in the mood for company". Hawthorne's wife Sophia gave him copies of "Twice-Told Tales" and, for Malcolm, "The Grandfather's Chair". Melville invited them to visit Arrowhead soon, hoping to "[discuss] the Universe with a bottle of brandy & cigars" with Hawthorne, but Hawthorne would not stop working on his new book for more than one day and they did not come. After a second visit from Melville, Hawthorne surprised him by arriving at Arrowhead with his daughter Una. According to Robertson-Lorant, "The handsome Hawthorne made quite an impression on the Melville women, especially Augusta, who was a great fan of his books". They spent the day mostly "smoking and talking metaphysics".
In Robertson-Lorant's assessment of the friendship, Melville was "infatuated with Hawthorne's intellect, captivated by his artistry, and charmed by his elusive personality," and though the two writers were "drawn together in an undeniable sympathy of soul and intellect, the friendship meant something different to each of them," with Hawthorne offering Melville "the kind of intellectual stimulation he needed". They may have been "natural allies and friends," yet they were also "fifteen years apart in age and temperamentally quite different" and Hawthorne "found Melville's manic intensity exhausting at times". Melville wrote ten letters to Hawthorne; one scholar identifies "sexual excitement ... in all the letters". Melville was inspired and encouraged by his new relationship with Hawthorne during the period that he was writing "Moby-Dick." Melville dedicated the work to Hawthorne: "In token of my admiration for his genius, this book is inscribed to Nathaniel Hawthorne".
On October 18, 1851, "The Whale" was published in Britain in three volumes, and on November 14 "Moby-Dick" appeared in the United States as a single volume. In between these dates, on October 22, 1851, the Melvilles' second child, Stanwix, was born. In December, Hawthorne told Duyckinck, "What a book Melville has written! It gives me an idea of much greater power than his preceding ones." Unlike other contemporaneous reviewers of Melville, Hawthorne had seen the uniqueness of Melville's new novel and acknowledged it. In early December 1852, Melville visited the Hawthornes in Concord and discussed the idea of the "Agatha" story he had pitched to Hawthorne. This was the last known contact between the two writers before Melville visited Hawthorne in Liverpool four years later when Hawthorne had relocated to England.
Melville had high hopes that his next book would please the public and restore his finances. In April 1851 he told his British publisher, Richard Bentley, that his new book had "unquestionable novelty" and was calculated to have wide appeal with elements of romance and mystery. In fact, "Pierre" was heavily psychological, though drawing on the conventions of the romance, and difficult in style. It was not well received. The New York "Day Book" published a venomous attack on September 8, 1852, headlined "HERMAN MELVILLE CRAZY". The item, offered as a news story, reported,
On May 22, 1853, Melville's third child and first daughter Elizabeth (Bessie) was born, and on or about that day Herman finished work on the Agatha story, "Isle of the Cross". Melville traveled to New York to discuss a book, presumably "Isle of the Cross", with his publisher, but later wrote that Harper & Brothers was "prevented" from publishing his manuscript; the manuscript has since been lost.
After the commercial and critical failure of "Pierre", Melville had difficulty finding a publisher for his follow-up novel, "Israel Potter". Instead, this narrative of a Revolutionary War veteran was serialized in "Putnam's Monthly Magazine" from 1854 to 1855, and published in book form in 1855. From November 1853 to 1856, Melville published fourteen tales and sketches in "Putnam's" and "Harper's" magazines. In December 1855 he proposed to Dix & Edwards, the new owners of "Putnam's", that they publish a selective collection of the short fiction. The collection, "The Piazza Tales", was named after a new introductory story Melville wrote for it, "The Piazza". It also contained five previously published stories, including "Bartleby, the Scrivener" and "Benito Cereno". On March 2, 1855, the Melvilles' fourth child, Frances (Fanny), was born.
The writing of "The Confidence-Man" put great strain on Melville, leading Sam Shaw, a nephew of Lizzie, to write to his uncle Lemuel Shaw, "Herman I hope has had no more of those ugly attacks"—a reference to what Robertson-Lorant calls "the bouts of rheumatism and sciatica that plagued Melville". Melville's father-in-law apparently shared his daughter's "great anxiety about him" when he wrote a letter to a cousin, in which he described Melville's working habits: "When he is deeply engaged in one of his literary works, he confines him[self] to hard study many hours in the day, with little or no exercise, and this specially in winter for a great many days together. He probably thus overworks himself and brings on severe nervous affections". Shaw advanced Melville $1,500 from Lizzie's inheritance to travel four or five months in Europe and the Holy Land.
From October 11, 1856, to May 20, 1857, Melville made a Grand Tour of Europe and the Mediterranean. While in England, in November 1856, he reunited for three days with Hawthorne, who had taken the position of United States Consul at Liverpool, at that time the hub of Britain's Atlantic trade. At the nearby coast resort of Southport, amid the sand dunes where they had stopped to smoke cigars, they had a conversation which Hawthorne later described in his journal: "Melville, as he always does, began to reason of Providence and futurity, and of everything that lies beyond human ken, and informed me that he 'pretty much made up his mind to be annihilated' [...] If he were a religious man, he would be one of the most truly religious and reverential; he has a very high and noble nature, and better worth immortality than most of us."
The Mediterranean part of the tour took in the Holy Land, which inspired his epic poem "Clarel." On April 1, 1857, Melville published his last full-length novel, "The Confidence-Man". This novel, subtitled "His Masquerade", has won general acclaim in modern times as a complex and mysterious exploration of issues of fraud and honesty, identity and masquerade. But, when it was published, it received reviews ranging from the bewildered to the denunciatory.
To repair his faltering finances, Melville took up public lecturing from late 1857 to 1860. He embarked upon three lecture tours and spoke at lyceums, chiefly on Roman statuary and sightseeing in Rome. Melville's lectures, which mocked the pseudo-intellectualism of lyceum culture, were panned by contemporary audiences. On May 30, 1860, Melville boarded the clipper "Meteor" for California, with his brother Thomas at the helm. After a shaky trip around Cape Horn, Melville returned to New York alone via Panama in November. Later that year, he submitted a poetry collection to a publisher but it was not accepted, and is now lost. In 1863, he bought his brother's house at 104 East 26th Street in New York City and moved there.
In 1864, Melville visited the Virginia battlefields of the American Civil War. After the war, he published "Battle Pieces and Aspects of the War" (1866), a collection of 72 poems that has been described as "a polyphonic verse journal of the conflict". The work did not do well commercially—of the print run of 1,260 copies, 300 were sent as review copies, and 551 copies were sold—and reviewers did not realize that Melville had purposely avoided the ostentatious diction and fine writing that were in fashion, choosing to be concise and spare.
In 1866, Melville became a customs inspector for New York City. He held the post for 19 years and had a reputation for honesty in a notoriously corrupt institution. Unbeknownst to him, his position was sometimes protected by Chester A. Arthur, at that time a customs official who admired Melville's writing but never spoke to him. During this time, Melville was short-tempered because of nervous exhaustion, physical pain, and drinking. He would sometimes mistreat his family and servants in his unpredictable mood swings. Robertson-Lorant compared Melville's behavior to the "tyrannical captains he had portrayed in his novels".
In 1867, his oldest son, Malcolm, died at home at the age of 18 from a self-inflicted gunshot wound. Historians and psychologists disagree on whether it was intentional or accidental. In May 1867, Lizzie's brother plotted to help her leave Melville without suffering the consequences divorce carried at the time, particularly the loss of all claim to her children. His plan was for Lizzie to visit Boston, where friends would inform Melville that she would not return. To obtain a divorce, she would then have to bring charges against Melville, asserting her husband to be insane, but she ultimately decided against pursuing one.
Though Melville's professional writing career had ended, he remained dedicated to his writing. He spent years on what Milder called "his autumnal masterpiece" "Clarel: A Poem and a Pilgrimage", an 18,000-line epic poem inspired by his 1856 trip to the Holy Land. It is among the longest single poems in American literature. The title character is a young American student of divinity who travels to Jerusalem to renew his faith. One of the central characters, Rolfe, is similar to Melville in his younger days, a seeker and adventurer, while the reclusive Vine is loosely based on Hawthorne, who had died twelve years before. Publication of 350 copies was funded with a bequest from his uncle in 1876, but sales failed miserably and the unsold copies were burned when Melville was unable to buy them at cost. Critic Lewis Mumford found an unread copy in the New York Public Library in 1925 "with its pages uncut".
Although Melville's own finances remained limited, in 1884, Lizzie received a legacy that enabled him to buy a steady stream of books and prints each month. Melville retired on December 31, 1885, after several of his wife's relatives further supported the couple with supplementary legacies and inheritances. On February 22, 1886, Stanwix Melville died in San Francisco at age 36, apparently from tuberculosis. In 1889 Melville became a member of the New York Society Library.
Melville had a modest revival of popularity in England when readers rediscovered his novels in the late nineteenth century. A series of poems inspired by his early experiences at sea, with prose head notes, was published in two collections for his relatives and friends, each with a print run of 25 copies. The first, "John Marr and Other Sailors", was published in 1888, followed by "Timoleon" in 1891.
He died the morning of September 28, 1891. His death certificate shows "cardiac dilation" as the cause. He was interred in the Woodlawn Cemetery in the Bronx, New York City. "The New York Times" obituary mistakenly called his masterpiece "Mobie Dick", a slip suggesting how little he and his books were appreciated at the time of his death. A later article was published on October 6 in the same paper, referring to him as "the late Hiram Melville", but this appears to have been a typesetting error.
Melville left a volume of poetry, "Weeds and Wildings", and a sketch, "Daniel Orme", unpublished at the time of his death. His wife also found pages for an unfinished novella, "Billy Budd". Melville had revised and rearranged the manuscript in several stages, leaving the pages in disarray. Lizzie could not determine her husband's intentions (or even read his handwriting in some places) and abandoned attempts to edit the manuscript for publication. The pages were stored in a family breadbox until 1919, when Melville's granddaughter gave them to Raymond Weaver. Weaver, who initially dismissed the work's importance, published a quick transcription in 1924. This version, however, contained many misreadings, some of which affected interpretation. The book was an immediate critical success in England, then in the United States. In 1962, the Melville scholars Harrison Hayford and Merton M. Sealts published a critical reading text that was widely accepted. The novella was adapted as a stage play on Broadway in 1951, then an opera, and in 1961 as a film.
Melville's writing style shows both consistencies and enormous changes throughout the years. His development "had been abnormally postponed, and when it came, it came with a rush and a force that had the menace of quick exhaustion in it". As early as "Fragments from a Writing Desk", written when Melville was 20, scholar Sealts sees "a number of elements that anticipate Melville's later writing, especially his characteristic habit of abundant literary allusion". "Typee" and "Omoo" were documentary adventures that called for a division of the narrative in short chapters. Such compact organization bears the risk of fragmentation when applied to a lengthy work such as "Mardi", but with "Redburn" and "White Jacket," Melville turned the short chapter into a concentrated narrative.
Some chapters of "Moby-Dick" are no more than two pages in standard editions, and an extreme example is Chapter 122, consisting of a single paragraph of 36 words. The skillful handling of chapters in "Moby-Dick" is one of the most fully developed Melvillean signatures, and is a measure of his masterly writing style. Individual chapters have become "a touchstone for appreciation of Melville's art and for explanation" of his themes. In contrast, the chapters in "Pierre", called Books, are divided into short numbered sections, seemingly an "odd formal compromise" between Melville's natural length and his purpose to write a regular romance that called for longer chapters. As satirical elements were introduced, the chapter arrangement restores "some degree of organization and pace from the chaos". The usual chapter unit then reappears for "Israel Potter", "The Confidence-Man" and even "Clarel", but only becomes "a vital part in the whole creative achievement" again in the juxtaposition of accents and of topics in "Billy Budd".
Newton Arvin points out that only superficially do the books after "Mardi" seem to return to the vein of Melville's first two books. In reality, his movement "was not a retrograde but a spiral one", and while "Redburn" and "White-Jacket" may lack the spontaneous, youthful charm of his first two books, they are "denser in substance, richer in feeling, tauter, more complex, more connotative in texture and imagery". The rhythm of the prose in "Omoo" "achieves little more than easiness; the language is almost neutral and without idiosyncrasy", while "Redburn" shows an improved ability in narrative which fuses imagery and emotion.
Melville's early works were "increasingly baroque" in style, and with "Moby-Dick" Melville's vocabulary had grown superabundant. Bezanson calls it an "immensely varied style". According to critic Warner Berthoff, three characteristic uses of language can be recognized. First, the exaggerated repetition of words, as in the series "pitiable," "pity," "pitied," and "piteous" (Ch. 81, "The Pequod Meets the Virgin"). A second typical device is the use of unusual adjective-noun combinations, as in "concentrating brow" and "immaculate manliness" (Ch. 26, "Knights and Squires"). A third characteristic is the presence of a participial modifier to emphasize and to reinforce the already established expectations of the reader, as the words "preluding" and "foreshadowing" ("so still and subdued and yet somehow preluding was all the scene ..." "In this foreshadowing interval ...").
After his use of hyphenated compounds in "Pierre", Melville's writing gives Berthoff the impression of becoming less exploratory and less provocative in his choices of words and phrases. Instead of providing a lead "into possible meanings and openings-out of the material in hand," the vocabulary now served "to crystallize governing impressions," the diction no longer attracting attention to itself, except as an effort at exact definition. The language, Berthoff continues, reflects a "controlling intelligence, of right judgment and completed understanding". The sense of free inquiry and exploration which infused his earlier writing and accounted for its "rare force and expansiveness" tended to give way to "static enumeration". By comparison with the verbal music and kinetic energy of "Moby-Dick", Melville's subsequent writings seem "relatively muted, even withheld".
Melville's paragraphing in his best work Berthoff considers to be the virtuous result of "compactness of form and free assembling of unanticipated further data", such as when the mysterious sperm whale is compared with the invisibility of God's face in Exodus in the final paragraph of Chapter 86 ("The Tail"). Over time Melville's paragraphs became shorter as his sentences grew longer, until he arrived at the "one-sentence paragraphing characteristic of his later prose". Berthoff points to the opening chapter of "The Confidence-Man" for an example, as it counts fifteen paragraphs, seven of which consist of only one elaborate sentence and four of which have only two sentences. The use of a similar technique in "Billy Budd" contributes in large part, Berthoff says, to its "remarkable narrative economy".
In Nathalia Wright's view, Melville's sentences generally have a looseness of structure, easy to use for such devices as catalogue and allusion, parallel and refrain, proverb and allegory. The length of his clauses may vary greatly, but the style of "Pierre" and "The Confidence-Man" is there to convey feeling, not thought. Unlike Henry James, who was an innovator of sentence ordering to render the subtlest nuances in thought, Melville made few such innovations. His domain is the mainstream of English prose, with its rhythm and simplicity influenced by the King James Bible. Another important characteristic of Melville's writing style is in its echoes and overtones, which result from his imitation of certain distinct styles. His three most important sources, in order, are the Bible, Shakespeare, and Milton. Direct quotation from any of the sources is slight; only one sixth of his Biblical allusions can be qualified as such, because Melville adapts Biblical usage to the requirements of his own narrative, clarifying his plot.
In terms of Biblical influence, Melville's style can be divided into three categories. First, Melville's use of Biblical allusion operates at the level of the narrative itself, weaving allusions into his own writing rather than formally identifying them as quotation. Several of his preferred Biblical allusions recur throughout his body of work, taking on the nature of refrains. Examples of this idiom are the injunctions to be 'as wise as serpents and as harmless as doves,' 'death on a pale horse,' 'the man of sorrows,' the 'many mansions of heaven'; proverbs such as 'as the hairs on our heads are numbered,' 'pride goes before a fall,' 'the wages of sin is death'; adverbs and pronouns such as 'verily,' 'whoso,' 'forasmuch as'; phrases such as 'come to pass,' 'children's children,' 'the fat of the land,' 'vanity of vanities,' 'outer darkness,' 'the apple of his eye,' 'Ancient of Days,' 'the rose of Sharon.' Second, there are paraphrases of individual and combined verses. Redburn's "Thou shalt not lay stripes upon these Roman citizens" makes use of the language of the Ten Commandments in Exodus 20, and Pierre's inquiry of Lucy, "Loveth she me with the love past all understanding?", combines John 21:15–17 and Philippians 4:7. Third, certain Hebraisms are used, such as a succession of genitives ("all the waves of the billows of the seas of the boisterous mob"), the cognate accusative ("I dreamed a dream," "Liverpool was created with the Creation"), and the parallel ("Closer home does it go than a rammer; and fighting with steel is a play without ever an interlude"). A passage from "Redburn" shows how these different ways of alluding interlock and result in a fabric of Biblical language, though there is very little direct quotation:
In addition, Melville successfully imitates three Biblical strains: the apocalyptic, the prophetic, and that of the Psalms. Melville sustains the apocalyptic tone of anxiety and foreboding for a whole chapter of "Mardi". The prophetic strain is expressed in "Moby-Dick", most notably in Father Mapple's sermon. The tradition of the Psalms is imitated at length in "The Confidence-Man".
In 1849, Melville acquired an edition of Shakespeare's works printed in a font large enough for his tired eyes, which led to a deeper study of Shakespeare that greatly influenced the style of his next book, "Moby-Dick" (1851). The critic F. O. Matthiessen found that the language of Shakespeare far surpasses other influences upon the book, in that it inspired Melville to discover his own full strength. On almost every page, debts to Shakespeare can be discovered. The "mere sounds, full of Leviathanism, but signifying nothing" at the end of "Cetology" (Ch. 32) echo the famous phrase in "Macbeth": "Told by an idiot, full of sound and fury/ Signifying nothing". Ahab's first extended speech to the crew, in the "Quarter-Deck" (Ch. 36), is practically blank verse, and so is Ahab's soliloquy at the beginning of "Sunset" (Ch. 37): 'I leave a white and turbid wake;/ Pale waters, paler cheeks, where'er I sail./ The envious billows sidelong swell to whelm/ My track; let them; but first I pass.' Through Shakespeare, Melville infused "Moby-Dick" with a power of expression he had not previously achieved. Reading Shakespeare had been "a catalytic agent" for Melville, one that transformed his writing from merely reporting to "the expression of profound natural forces". The extent to which Melville assimilated Shakespeare is evident in the description of Ahab, Matthiessen continues, which ends in language that seems Shakespearean yet is no imitation: 'Oh, Ahab! what shall be grand in thee, it must needs be plucked from the skies and dived for in the deep, and featured in the unbodied air!' The imaginative richness of the final phrase seems particularly Shakespearean, "but its two key words appear only once each in the plays ... and to neither of these usages is Melville indebted for his fresh combination". Melville's diction depended upon no source, and his prose is not based on anybody else's verse but on an awareness of "speech rhythm".
Melville's mastering of Shakespeare, Matthiessen finds, supplied him with verbal resources that enabled him to create dramatic language through three essential techniques. First, the use of verbs of action creates a sense of movement and meaning. The effective tension caused by the contrast of "thou launchest navies of full-freighted worlds" and "there's that in here that still remains indifferent" in "The Candles" (Ch. 119) makes the last clause lead to a "compulsion to strike the breast," which suggests "how thoroughly the drama has come to inhere in the words". Second, Melville took advantage of the Shakespearean energy of verbal compounds, as in "full-freighted". Third, Melville employed the device of making one part of speech act as another, for example, 'earthquake' as an adjective, or turning an adjective into a noun, as in "placeless".
Melville's style, in Nathalia Wright's analysis, seamlessly flows over into theme, because all these borrowings have an artistic purpose, which is to suggest an appearance "larger and more significant than life" for characters and themes that are in fact unremarkable. The allusions suggest that beyond the world of appearances another world exists, one that influences this world, and where ultimate truth can be found. Moreover, the ancient background thus suggested for Melville's narratives (ancient allusions being next in number to the Biblical ones) invests them with a sense of timelessness.
Melville was not financially successful as a writer; over his entire lifetime Melville's writings earned him just over $10,000. Melville's travelogues based on voyages to the South Seas and stories based on his time in the merchant marine and navy led to some initial success, but his popularity declined dramatically afterwards. By 1876, all of his books were out of print. He was viewed as a minor figure in American literature in the later years of his life and during the years immediately after his death.
Melville did not publish poetry until "Battle-Pieces" (1866), when he was in his late forties, and did not receive recognition as a poet until well into the 20th century. But he wrote predominantly poetry for about 25 years, twice as long as his prose career. The three novels of the 1850s that Melville worked on most seriously to present his philosophical explorations, "Moby-Dick", "Pierre", and "The Confidence-Man", seem to make the step to philosophical poetry a natural one rather than simply a consequence of commercial failure. Since he turned to poetry as a meditative practice, his poetic style was marked less by linguistic play or melodic considerations than that of most Victorian poets.
Early critics were not sympathetic. Henry Chapin, in his Introduction to one of the earliest selections of Melville's poetry, "John Marr and Other Poems" (1922), said Melville's verse is "of an amateurish and uneven quality" but in it "that loveable freshness of personality, which his philosophical dejection never quenched, is everywhere in evidence," in "the voice of a true poet". The poet and novelist Robert Penn Warren became a champion of Melville as a great American poet and issued a selection of Melville's poetry in 1971 prefaced by an admiring critical essay. In the 1990s, critic Lawrence Buell argued that Melville "is justly said to be nineteenth-century America's leading poet after Whitman and Dickinson," and Helen Vendler remarked of "Clarel": "What it cost Melville to write this poem makes us pause, reading it. Alone, it is enough to win him, as a poet, what he called 'the belated funeral flower of fame'." Some critics now place him as the first modernist poet in the United States while others assert that his work more strongly suggests what today would be a postmodern view.
The centennial of Melville's birth in 1919 coincided with a renewed interest in his writings known as the Melville revival, during which his work underwent a significant critical reassessment. The renewed appreciation began in 1917 with Carl Van Doren's article on Melville in a standard history of American literature. Van Doren also encouraged Raymond Weaver, who wrote the author's first full-length biography, "Herman Melville: Mariner and Mystic" (1921). Discovering the unfinished manuscript of "Billy Budd" among papers shown to him by Melville's granddaughter, Weaver edited it and published it in a new collected edition of Melville's works. Other works that stimulated interest in Melville were Carl Van Doren's "The American Novel" (1921), D. H. Lawrence's "Studies in Classic American Literature" (1923), Carl Van Vechten's essay in "The Double Dealer" (1922), and Lewis Mumford's biography, "Herman Melville" (1929).
Starting in the mid-1930s, the Yale University scholar Stanley Thomas Williams supervised more than a dozen dissertations on Melville that were eventually published as books. Where the first wave of Melville scholars focused on psychology, Williams' students were prominent in establishing Melville Studies as an academic field concerned with texts and manuscripts, tracing Melville's influences and borrowings (even plagiarism), and exploring archives and local publications. To provide historical evidence, the independent scholar Jay Leyda searched libraries, family papers, local archives and newspapers across New England and New York to document Melville's life day by day for his two-volume "The Melville Log" (1951). Sparked by Leyda and post-war scholars, the second phase of the Melville Revival emphasized research into the biography of Melville rather than accepting Melville's early books as reliable accounts.
In 1945, The Melville Society was founded, a non-profit organisation dedicated to the study of Melville's life and works. Between 1969 and 2003 it published 125 issues of "Melville Society Extracts", which are now freely available on the society's website. Since 1999 it has published "Leviathan: A Journal of Melville Studies", currently three issues a year, published by Johns Hopkins University Press.
The postwar scholars tended to think that Weaver, Harvard psychologist Henry Murray, and Mumford favored Freudian interpretations which read Melville's fiction too literally as autobiography; exaggerated his suffering in the family; and inferred a homosexual attachment to Hawthorne. They saw a different arc to Melville's writing career. The first biographers saw a tragic withdrawal after the cold critical reception for his prose works and largely dismissed his poetry. A new view emerged of Melville's turn to poetry as a conscious choice that placed him among the most important American poets. Other post-war studies, however, continued the broad imaginative and interpretive style; Charles Olson's "Call Me Ishmael" (1947) presented Ahab as a Shakespearean tragic hero, and Newton Arvin's critical biography, "Herman Melville" (1950), won the National Book Award for non-fiction in 1951.
In the 1960s, Harrison Hayford organized an alliance between Northwestern University Press and the Newberry Library, with backing from the Modern Language Association and funding from the National Endowment for the Humanities, to edit and publish reliable critical texts of Melville's complete works, including unpublished poems, journals, and correspondence. The first volume of the Northwestern-Newberry Edition of the Writings of Herman Melville was published in 1968 and the last in the fall of 2017. The aim of the editors was to present a text "as close as possible to the author's intention as surviving evidence permits". The volumes have extensive appendices, including textual variants from each of the editions published in Melville's lifetime, an historical note on the publishing history and critical reception, and related documents. Because the texts were prepared with financial support from the United States Department of Education, no royalties are charged, and they have been widely reprinted. Hershel Parker published his two-volume "Herman Melville: A Biography", in 1996 and 2002, based on extensive original research and his involvement as editor of the Northwestern-Newberry Melville edition.
Melville's writings did not attract the attention of women's studies scholars of the 1970s and 1980s, though his preference for sea-going tales involving almost exclusively males has been of interest to scholars in men's studies and especially gay and queer studies. Melville was remarkably open in his exploration of sexuality of all sorts. For example, Alvin Sandberg claimed that the short story "The Paradise of Bachelors and the Tartarus of Maids" offers "an exploration of impotency, a portrayal of a man retreating to an all-male childhood to avoid confrontation with sexual manhood," from which the narrator engages in "congenial" digressions in heterogeneity. In line with this view, Warren Rosenberg argues the homosocial "Paradise of Bachelors" is shown to be "superficial and sterile".
David Harley Serlin observes that in the second half of Melville's diptych, "The Tartarus of Maids", the narrator gives voice to the oppressed women he observes:
In the end Serlin says that the narrator is never fully able to come to terms with the contrasting masculine and feminine modalities.
Issues of sexuality have been observed in other works as well. Rosenberg notes Taji, in "Mardi", and the protagonist in "Pierre" "think they are saving young 'maidens in distress' (Yillah and Isabel) out of the purest of reasons but both are also conscious of a lurking sexual motive". When Taji kills the old priest holding Yillah captive, he says,
In "Pierre," the motive of the protagonist's sacrifice for Isabel is admitted: "womanly beauty and not womanly ugliness invited him to champion the right". Rosenberg argues,
Rosenberg says that Melville fully explores the theme of sexuality in his major epic poem, "Clarel". When the narrator is separated from Ruth, with whom he has fallen in love, he is free to explore other sexual (and religious) possibilities before deciding at the end of the poem to participate in the ritualistic order represented by marriage. In the course of the poem, "he considers every form of sexual orientation – celibacy, homosexuality, hedonism, and heterosexuality – raising the same kinds of questions as when he considers Islam or Democracy".
Some passages and sections of Melville's works demonstrate his willingness to address all forms of sexuality, including the homoerotic. Commonly noted examples from "Moby-Dick" are the "marriage bed" episode involving Ishmael and Queequeg in Chapter 10, "A Bosom Friend", which is interpreted as male bonding; and the "Squeeze of the Hand" chapter, describing the camaraderie of sailors extracting spermaceti from a dead whale. Rosenberg notes that critics say that "Ahab's pursuit of the whale, which they suggest can be associated with the feminine in its shape, mystery, and in its naturalness, represents the ultimate fusion of the epistemological and sexual quest". In addition, he notes that Billy Budd's physical attractiveness is described in quasi-feminine terms: "As the Handsome Sailor, Billy Budd's position aboard the seventy-four was something analogous to that of a rustic beauty transplanted from the provinces and brought into competition with the highborn dames of the court".
Since the late 20th century, "Billy Budd" has become a central text in the field of legal scholarship known as law and literature. In the novel, Billy, a handsome and popular young sailor, is impressed from the merchant vessel "Rights of Man" to serve aboard H.M.S. "Bellipotent" in the late 1790s, during the war between Revolutionary France and Great Britain. He excites the enmity and hatred of the ship's master-at-arms, John Claggart. Claggart brings phony charges against Billy, accusing him of mutiny and other crimes, and the Captain, the Honorable Edward Fairfax Vere, brings them together for an informal inquiry. At this encounter, Billy is frustrated by his stammer, which prevents him from speaking, and strikes Claggart. The blow catches Claggart squarely on the forehead and, after a gasp or two, the master-at-arms dies. This death sets up the narrative climax of the novel; Vere immediately convenes a court-martial at which he urges the court to convict and sentence Billy to death.
The climactic trial has been the focus of scholarly inquiry regarding the motives of Vere and the legal necessity of Billy's condemnation. Vere states that, given the circumstances of Claggart's slaying, condemning Billy to death would be unjust. While some critics have viewed Vere as a character caught between the pressures of unbending legalism and malleable moral principles, others have argued that Vere represents a ressentient protagonist whose disdain for Lord Admiral Nelson he takes out on Billy, in whom Vere sees the traits of Nelson that he resents. Weisberg argues that Vere manipulated and misrepresented the applicable laws in order to condemn Billy, showing that the laws of the time did not require a sentence of death and that legally any such sentence required review before being carried out. While this argument has been criticized for drawing on information outside the novel, Weisberg also shows that sufficient liberties existed in the laws Melville describes to avoid a capital sentence.
Melville's work often touched on themes of communicative expression and the pursuit of the absolute among illusions. As early as 1839, in the juvenile sketch "Fragments from a Writing Desk," Melville explores a problem which would reappear in the short stories "Bartleby" (1853) and "Benito Cereno" (1855): the impossibility of finding common ground for mutual communication. The sketch centers on the protagonist and a mute lady, leading scholar Sealts to observe: "Melville's deep concern with expression and communication evidently began early in his career".
According to scholar Nathalia Wright, Melville's characters are all preoccupied by the same intense, superhuman and eternal quest for "the absolute amidst its relative manifestations," an enterprise central to the Melville canon: "All Melville's plots describe this pursuit, and all his themes represent the delicate and shifting relationship between its truth and its illusion". It is not clear, however, what the moral and metaphysical implications of this quest are, because Melville did not distinguish between these two aspects. Throughout his life Melville struggled with and gave shape to the same set of epistemological doubts and the metaphysical issues these doubts engendered. An obsession with the limits of knowledge led to the question of God's existence and nature, the indifference of the universe, and the problem of evil.
In 1982, the Library of America (LOA) began publication. In honor of Melville's central place in American culture, the very first volume contained "Typee", "Omoo", and "Mardi". The first volumes published in 1983 and 1985 also contained Melville's work, in 1983 "Redburn", "White-Jacket", and "Moby-Dick" and in 1985 "Pierre", "Israel Potter", "The Confidence Man", "Tales", and "Billy Budd". LOA did not publish his complete poetry until 2019.
On August 1, 1984, as part of the Literary Arts Series of stamps, the United States Postal Service issued a 20-cent commemorative stamp to honor Melville. The setting for the first day of issue was the Whaling Museum in New Bedford, Massachusetts.
In 1985, the New York City Herman Melville Society gathered at 104 East 26th Street to dedicate the intersection of Park Avenue South and 26th Street as Herman Melville Square. This is the street where Melville lived from 1863 to 1891 and where, among other works, he wrote "Billy Budd". Melville's house in Lansingburgh, New York, houses the Lansingburgh Historical Society.
In 2010, a species of extinct giant sperm whale, "Livyatan melvillei", was named in honor of Melville. The paleontologists who discovered the fossil were fans of "Moby-Dick" and dedicated their discovery to the author.
High fidelity
High fidelity (often shortened to hi-fi or hifi) is a term used by listeners, audiophiles and home audio enthusiasts to refer to high-quality reproduction of sound. This is in contrast to the lower quality sound produced by inexpensive audio equipment, AM radio, or the inferior quality of sound reproduction that can be heard in recordings made until the late 1940s.
Ideally, high-fidelity equipment has inaudible noise and distortion, and a flat (neutral, uncolored) frequency response within the human hearing range.
Bell Laboratories began experimenting with a range of recording techniques in the early 1930s. Performances by Leopold Stokowski and the Philadelphia Orchestra were recorded in 1931 and 1932 using telephone lines between the Academy of Music in Philadelphia and the Bell labs in New Jersey. Some multitrack recordings were made on optical sound film, which led to new advances used primarily by MGM (as early as 1937) and Twentieth Century Fox Film Corporation (as early as 1941). RCA Victor began recording performances by several orchestras using optical sound around 1941, resulting in higher-fidelity masters for 78-rpm discs. During the 1930s, Avery Fisher, an amateur violinist, began experimenting with audio design and acoustics. He wanted to make a radio that would sound like he was listening to a live orchestra—that would achieve high fidelity to the original sound. After World War II, Harry F. Olson conducted an experiment whereby test subjects listened to a live orchestra through a hidden variable acoustic filter. The results proved that listeners preferred high-fidelity reproduction, once the noise and distortion introduced by early sound equipment was removed.
Beginning in 1948, several innovations created the conditions that made major improvements of home-audio quality possible:
In the 1950s, audio manufacturers employed the phrase "high fidelity" as a marketing term to describe records and equipment intended to provide faithful sound reproduction. While some consumers simply interpreted "high fidelity" as fancy and expensive equipment, many found the difference in quality compared to the then-standard AM radios and 78-rpm records readily apparent and bought high-fidelity phonographs and 33⅓ LPs such as RCA's New Orthophonics and London's ffrr (Full Frequency Range Recording, a UK Decca system). Audiophiles paid attention to technical characteristics and bought individual components, such as separate turntables, radio tuners, preamplifiers, power amplifiers and loudspeakers. Some enthusiasts even assembled their own loudspeaker systems. In the 1950s, "hi-fi" became a generic term for home sound equipment, to some extent displacing "phonograph" and "record player".
In the late 1950s and early 1960s, the development of stereophonic equipment and recordings led to the next wave of home-audio improvement, and in common parlance "stereo" displaced "hi-fi". Records were now played on "a stereo". In the world of the audiophile, however, the concept of "high fidelity" continued to refer to the goal of highly accurate sound reproduction and to the technological resources available for approaching that goal. This period is regarded as the "Golden Age of Hi-Fi", when vacuum tube equipment manufacturers of the time produced many models considered endearing by modern audiophiles, and just before solid state (transistorized) equipment was introduced to the market, subsequently replacing tube equipment as the mainstream technology.
The metal-oxide-semiconductor field-effect transistor (MOSFET) was adapted into a power MOSFET for audio by Jun-ichi Nishizawa at Tohoku University in 1974. Power MOSFETs were soon manufactured by Yamaha for their hi-fi audio amplifiers. JVC, Pioneer Corporation, Sony and Toshiba also began manufacturing amplifiers with power MOSFETs in 1974. In 1977, Hitachi introduced the LDMOS (lateral diffused MOS), a type of power MOSFET. Hitachi was the only LDMOS manufacturer between 1977 and 1983, during which time LDMOS was used in audio power amplifiers from manufacturers such as HH Electronics (V-series) and Ashly Audio, and were used for music and public address systems. Class-D amplifiers became successful in the mid-1980s when low-cost, fast-switching MOSFETs were made available. Many transistor amps use MOSFET devices in their power sections, because their distortion curve is more tube-like.
A popular type of system for reproducing music beginning in the 1970s was the integrated music centre—which combined a phonograph turntable, AM-FM radio tuner, tape player, preamplifier, and power amplifier in one package, often sold with its own separate, detachable or integrated speakers. These systems advertised their simplicity. The consumer did not have to select and assemble individual components or be familiar with impedance and power ratings. Purists generally avoid referring to these systems as high fidelity, though some are capable of very good quality sound reproduction.
Audiophiles in the 1970s and 1980s preferred to buy each component separately. That way, they could choose models of each component with the specifications that they desired. In the 1980s, a number of audiophile magazines became available, offering reviews of components and articles on how to choose and test speakers, amplifiers and other components.
Listening tests are used by hi-fi manufacturers, audiophile magazines and audio engineering researchers and scientists. If a listening test is done in such a way that the listener who is assessing the sound quality of a component or recording can see the components that are being used for the test (e.g., the same musical piece listened to through a tube power amplifier and a solid-state amplifier), then it is possible that the listener's pre-existing biases towards or against certain components or brands could affect their judgment. To respond to this issue, researchers began to use blind tests, in which listeners cannot see the components being tested. A commonly used variant of this test is the ABX test. A subject is presented with two known samples (sample "A", the reference, and sample "B", an alternative), and one unknown sample "X," for three samples total. "X" is randomly selected from "A" and "B", and the subject identifies "X" as being either "A" or "B". Although there is no way to prove that a certain methodology is transparent, a properly conducted double-blind test can prove that a method is "not" transparent.
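The ABX procedure described above lends itself to a simple statistical check: if the listener cannot actually distinguish the samples, each identification of "X" is a coin flip, so correct answers follow a binomial distribution with p = 0.5. The sketch below is an illustrative Python model, not taken from the source; the `listener` callables are hypothetical stand-ins for a human subject, and the hidden sample is reduced to its label for simplicity.

```python
import random
from math import comb

def binomial_p_value(correct, trials):
    """One-sided p-value: probability of scoring at least `correct`
    out of `trials` by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def run_abx_trials(listener, trials=16, seed=0):
    """Run `trials` ABX presentations. `listener(x)` must answer 'A' or 'B'
    for the hidden sample X; X is randomly drawn from A and B each trial."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        x = rng.choice("AB")
        if listener(x) == x:
            correct += 1
    return correct

# A listener who hears no difference is effectively guessing at random.
guesser = lambda x: random.Random().choice("AB")
# A listener who reliably hears the difference always identifies X.
golden_ears = lambda x: x

c = run_abx_trials(golden_ears)
print(c, round(binomial_p_value(c, 16), 5))  # 16/16 correct → p ≈ 0.00002
```

A guessing listener lands near 8 of 16, where the p-value stays far above any significance threshold, which is why a well-run ABX session needs many trials before a claimed audible difference can be accepted.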
Blind tests are sometimes used as part of attempts to ascertain whether certain audio components (such as expensive, exotic cables) have any subjectively perceivable effect on sound quality. Data gleaned from these blind tests is not accepted by some audiophile magazines such as "Stereophile" and "The Absolute Sound" in their evaluations of audio equipment. John Atkinson, current editor of "Stereophile", stated that he once purchased a solid-state amplifier, the Quad 405, in 1978 after seeing the results from blind tests, but came to realize months later that "the magic was gone" until he replaced it with a tube amp. Robert Harley of "The Absolute Sound" wrote, in 2008, that: "...blind listening tests fundamentally distort the listening process and are worthless in determining the audibility of a certain phenomenon."
Doug Schneider, editor of the online Soundstage network, refuted this position with two editorials in 2009. He stated: "Blind tests are at the core of the decades' worth of research into loudspeaker design done at Canada's National Research Council (NRC). The NRC researchers knew that for their result to be credible within the scientific community and to have the most meaningful results, they had to eliminate bias, and blind testing was the only way to do so." Many Canadian companies such as Axiom, Energy, Mirage, Paradigm, PSB and Revel use blind testing extensively in designing their loudspeakers. Audio professional Dr. Sean Olive of Harman International shares this view.
Stereophonic sound provided a partial solution to the problem of creating the illusion of live orchestral performers by creating a phantom middle channel when the listener sits exactly in the middle of the two front loudspeakers. When the listener moves slightly to the side, however, this phantom channel disappears or is greatly reduced. An attempt to reproduce the reverberation of a concert hall was made in the 1970s through quadraphonic sound but, again, the technology at that time was insufficient for the task. Consumers did not want to pay the additional costs and space required for the marginal improvements in realism. With the rise in popularity of home theater, however, multi-channel playback systems became affordable, and many consumers were willing to tolerate the six to eight channels required in a home theater. The advances made in signal processors to synthesize an approximation of a good concert hall can now provide a somewhat more realistic illusion of listening in a concert hall.
In addition to spatial realism, the playback of music must be subjectively free from noise, such as hiss or hum, to achieve realism. The compact disc (CD) provides about 90 decibels of dynamic range, which exceeds the 80 dB dynamic range of music as normally perceived in a concert hall. Audio equipment must be able to reproduce frequencies high enough and low enough to be realistic. The human hearing range, for healthy young persons, is 20 Hz to 20,000 Hz.
Most adults cannot hear frequencies higher than about 15 kHz. CDs are capable of reproducing frequencies as low as 0 Hz and as high as 22.05 kHz, making them adequate for reproducing the frequency range that most humans can hear. The equipment must also provide no noticeable distortion of the signal, and no emphasis or de-emphasis of any frequency in this range.
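The figures in the last two paragraphs follow from two standard formulas: the Nyquist limit (the highest representable frequency is half the sample rate) and the dynamic range of linear PCM (roughly 6 dB per bit of resolution). The short Python check below is illustrative, not from the source; note that it yields the theoretical 16-bit figure of about 96 dB, somewhat above the practical ~90 dB quoted above.

```python
from math import log10

def dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM with `bits` of resolution:
    ratio of full scale to one quantization step, expressed in decibels."""
    return 20 * log10(2 ** bits)

def nyquist_limit(sample_rate_hz):
    """Highest frequency representable at a given sample rate."""
    return sample_rate_hz / 2

print(round(dynamic_range_db(16), 1))  # ~96.3 dB for CD's 16-bit samples
print(nyquist_limit(44_100))           # 22050.0 Hz, the 22.05 kHz CD ceiling
```

The same arithmetic explains why 24-bit studio formats advertise roughly 144 dB of theoretical range, well beyond what any playback chain or listening room can use.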
"Integrated", "mini", or "lifestyle" systems (also known by the older terms "music centre" or "midi system") contain one or more sources such as a CD player, a tuner, or a cassette deck together with a preamplifier and a power amplifier in one box. Although some high-end manufacturers do produce integrated systems, such products are generally disparaged by audiophiles, who prefer to build a system from "separates" (or "components"), often with each item from a different manufacturer specialising in a particular component. This provides the most flexibility for piece-by-piece upgrades and repairs.
For slightly less flexibility in upgrades, a preamplifier and a power amplifier in one box is called an "integrated amplifier"; with a tuner, it is a "receiver". A monophonic power amplifier, which is called a "monoblock", is often used for powering a subwoofer. Other modules in the system may include components like cartridges, tonearms, hi-fi turntables, digital media players, digital audio players, DVD players that play a wide variety of discs including CDs, CD recorders, MiniDisc recorders, hi-fi videocassette recorders (VCRs) and reel-to-reel tape recorders. Signal modification equipment can include equalizers and signal processors.
This modularity allows the enthusiast to spend as little or as much as they want on a component that suits their specific needs. In a system built from separates, sometimes a failure on one component still allows partial use of the rest of the system. A repair of an integrated system, though, means complete lack of use of the system. Another advantage of modularity is the ability to spend money on only a few core components at first and then later add additional components to the system. Some of the disadvantages of this approach are increased cost, complexity, and space required for the components.
In the 2000s, modern hi-fi equipment can include signal sources such as digital audio tape (DAT), digital audio broadcasting (DAB) or HD Radio tuners. Some modern hi-fi equipment can be digitally connected using fibre optic TOSLINK cables, universal serial bus (USB) ports (including one to play digital audio files), or Wi-Fi support. Another modern component is the "music server" consisting of one or more computer hard drives that hold music in the form of computer files. When the music is stored in a lossless audio file format such as FLAC, Monkey's Audio or WMA Lossless, the computer playback of recorded audio can serve as an audiophile-quality source for a hi-fi system. There is now a push from certain streaming services to offer hi-fi tiers, although streaming services typically have a modified dynamic range and possibly bit rates lower than audiophiles would be happy with. Tidal has launched a Hi-Fi tier which includes access to FLAC and Master Quality Authenticated studio masters for many tracks through the desktop version of the player. This integration is also available for high-end audio systems.
Holden
Holden, formerly known as General Motors-Holden, is an Australian automobile marque and former automobile manufacturer, which manufactured cars in Australia before switching to importing cars under the Holden brand. It is headquartered in Port Melbourne.
The company was founded in 1856 as a saddlery manufacturer in South Australia. In 1908, it moved into the automotive field before later becoming a subsidiary of the United States-based General Motors (GM) in 1931, when the company was renamed General Motors-Holden's Ltd. It was renamed Holden Ltd in 1998, adopting the name GM Holden Ltd in 2005.
In the past, Holden has offered badge-engineered models due to sharing arrangements with Chevrolet, Isuzu, Nissan, Opel, Suzuki, Toyota, and Vauxhall Motors. In previous years, the vehicle lineup consisted of models from GM Korea, GM Thailand, GM North America, and self-developed models like the Holden Commodore, Holden Caprice, and the Holden Ute. Holden also distributed the European Opel brand in Australia in 2012 until its Australian demise in mid-2013.
Holden briefly owned assembly plants in New Zealand during the early 1990s. The plants had belonged to General Motors from 1926 until 1990 in an earlier and quite separate operation from GM's Holden investment in Australia. From 1994 to 2017, all Australian-built Holden vehicles were manufactured in Elizabeth, South Australia, and engines were produced at the Fishermans Bend plant in Melbourne. Historically, production or assembly plants were operated in all mainland states of Australia. The consolidation of final assembly at Elizabeth was completed in 1988, but some assembly operations continued at Dandenong until 1994.
Although Holden's involvement in exports has fluctuated since the 1950s, the declining sales of large cars in Australia led the company to look to international markets to increase profitability. From 2010, Holden incurred losses due to the strong Australian dollar, and reductions of government grants and subsidies. This led to the announcement, on 11 December 2013, that Holden would cease vehicle and engine production by the end of 2017. On 20 October 2017, the last existing vehicle plant, located in Elizabeth, was closed as the production of the Holden Commodore ended. On 17 February 2020, General Motors announced that the Holden brand would be retired by 2021.
In 1852, James Alexander Holden emigrated to South Australia from Walsall, England, and in 1856 established J.A. Holden & Co., a saddlery business in Adelaide. In 1879, J.A. Holden's eldest son, Henry James (H.J.) Holden, became a partner and effectively managed the company. In 1885, German-born H. A. Frost joined the business as a junior partner and J.A. Holden & Co became Holden & Frost Ltd. Edward Holden, James' grandson, joined the firm in 1905 with an interest in automobiles. From there, the firm evolved through various partnerships, and in 1908, Holden & Frost moved into the business of minor repairs to car upholstery. The company began to re-body older chassis using motor bodies produced by F T Hack and Co from 1914; Holden & Frost mounted the body, and painted and trimmed it. The company began to produce complete motorcycle sidecar bodies after 1913. After 1917, wartime trade restrictions led the company to start full-scale production of vehicle body shells. H.J. Holden founded a new company in late 1917, and registered Holden's Motor Body Builders Ltd (HMBB) on 25 February 1919, specialising in car bodies and using the former F T Hack & Co facility at 400 King William Street in Adelaide before erecting a large four-storey factory on the site.
By 1923, HMBB were producing 12,000 units per year. During this time, HMBB assembled bodies for Ford Motor Company of Australia until its Geelong plant was completed. From 1924, HMBB became the exclusive supplier of car bodies for GM in Australia, with manufacturing taking place at the new Woodville plant. These bodies were made to suit a number of chassis imported from manufacturers including Austin, Buick, Chevrolet, Cleveland, Dodge, Essex, Fiat, Hudson, Oakland, Oldsmobile, Overland, Reo, Studebaker, and Willys-Knight.
In 1926, General Motors (Australia) Limited was established with assembly plants at Newstead, Queensland; Marrickville, New South Wales; City Road, Melbourne, Victoria; Birkenhead, South Australia; and Cottesloe, Western Australia using bodies produced by HMBB and imported complete knock down chassis. In 1930 alone, the still independent Woodville plant built bodies for Austin, Chrysler, DeSoto, Morris, Hillman, Humber, Hupmobile, and Willys-Overland, as well as GM cars. The last of this line of business was the assembly of Hillman Minx sedans in 1948. The Great Depression led to a substantial downturn in production by Holden, from 34,000 units annually in 1930 to just 1,651 units one year later. In 1931, GM purchased HMBB and merged it with General Motors (Australia) Pty Ltd to form General Motors-Holden's Ltd (GM-H). Throughout the 1920s, Holden also supplied 60 W-class tramcar bodies to the Melbourne & Metropolitan Tramways Board, of which several examples have been preserved in both Australia and New Zealand.
Holden's second full-scale car factory, located in Fishermans Bend (Port Melbourne), was opened on 5 November 1936 by Prime Minister Joseph Lyons, with construction beginning in 1939 on a new plant in Pagewood, New South Wales. However, World War II delayed car production with efforts shifted to the construction of vehicle bodies, field guns, aircraft, and engines. Before the war ended, the Australian government took steps to encourage an Australian automotive industry. Both GM and Ford provided studies to the Australian government outlining the production of the first Australian-designed car. Ford's proposal was the government's first choice, but required substantial financial assistance. GM's study was ultimately chosen because of its low level of government intervention. After the war, Holden returned to producing vehicle bodies, this time for Buick, Chevrolet, Pontiac, and Vauxhall. The Oldsmobile Ace was also produced from 1946 to 1948.
From here, Holden continued to pursue the goal of producing an Australian car. This involved compromise with GM, as Holden's managing director, Laurence Hartnett, favoured development of a local design, while GM preferred to see an American design as the basis for "Australia's Own Car". In the end, the design was based on a previously rejected postwar Chevrolet proposal. The Holden was launched in 1948, creating long waiting lists extending through 1949 and beyond. The name "Holden" was chosen in honour of Sir Edward Holden, the company's first chairman and grandson of J.A. Holden. Other names considered were "GeM", "Austral", "Melba", "Woomerah", "Boomerang", "Emu", and "Canbra", a phonetic spelling of Canberra. Although officially designated "48–215", the car was marketed simply as the "Holden". The unofficial usage of the name "FX" originated within Holden, referring to the updated suspension on the 48–215 of 1953.
During the 1950s, Holden dominated the Australian car market. GM invested heavily in production capacity, which allowed the company to meet increased postwar demand for motor cars. Less expensive four-cylinder cars could not match the Holden's ability to cope with rugged rural conditions. Holden 48–215 sedans were produced in parallel with the 50-2106 coupé utility from 1951; the latter was known colloquially as the "ute" and became ubiquitous in Australian rural areas as the workhorse of choice. Production of both the utility and sedan continued with minor changes until 1953, when they were replaced by the facelifted FJ model, which introduced a third body style, the panel van. The FJ was the first major change to the Holden since its 1948 introduction. Over time, it gained iconic status and remains one of Australia's most recognisable automotive symbols. A new horizontally slatted grille dominated the front end of the FJ, which received various other trim and minor mechanical revisions. In 1954, Holden began exporting the FJ to New Zealand. Although little changed from the 48–215, marketing campaigns and price cuts kept FJ sales steady until a completely redesigned model was launched. At the 2005 Australian International Motor Show in Sydney, Holden paid homage to the FJ with the Efijy concept car.
Holden's next model, the FE, launched in 1956, was also offered in a new station wagon body style dubbed "Station Sedan" in the company's sales literature. In the same year, Holden commenced exports to Malaya, Thailand, and North Borneo. Strong sales continued in Australia, and Holden achieved a market share of more than 50% in 1958 with the revised FC model. This was the first Holden to be tested on the new "Holden Proving Ground" based in Lang Lang, Victoria. In 1957, Holden's export markets grew to 17 countries, with new additions including Indonesia, Hong Kong, Singapore, Fiji, Sudan, the East Africa region, and South Africa. Indonesian market cars were assembled locally by P.T. Udatin. The opening of the Dandenong, Victoria, production facility in 1956 brought further jobs; by 1959, Holden employed 19,000 workers country-wide. In 1959, complete knock-down assembly began in South Africa and Indonesia.
In 1960, Holden introduced its third major new model, the FB. The car's style was inspired by 1950s Chevrolets, with tailfins and a wrap-around windscreen with "dog leg" A-pillars. By the time it was introduced, many considered the appearance dated. Much of the motoring industry at the time noted that the adopted style did not translate well to the more compact Holden. The FB became the first Holden that was adapted for left-hand drive markets, enhancing its export potential, and as such was exported to New Caledonia, New Hebrides, the Philippines, and Hawaii.
In 1960, Ford unveiled the new Falcon in Australia, only months after its introduction in the United States. To Holden's advantage, the Falcon was not durable, particularly in the front suspension, making it ill-suited for Australian conditions. In response to the Falcon, Holden introduced the facelifted EK series in 1961; the new model featured two-tone paintwork and optional Hydramatic automatic transmission. A restyled EJ series came in 1962, debuting the new luxury-oriented Premier model. The EH update came a year later, bringing the new Red motor, providing better performance than the previous Grey motor. The HD series of 1965 introduced the Powerglide automatic transmission. At the same time, an "X2" performance option with a more powerful version of the six-cylinder engine was made available. In 1966, the HR was introduced, including changes in the form of new front and rear styling and higher-capacity engines. More significantly, the HR fitted front seat belts as standard; Holden thus became the first Australian automaker to provide the safety device as standard equipment across all models. This coincided with the completion of the production plant in Acacia Ridge, Queensland. By 1963, Holden was exporting cars to Africa, the Middle East, Southeast Asia, the Pacific Islands, and the Caribbean.
Holden began assembling the compact HA series Vauxhall Viva in 1964. This was superseded by the Holden Torana in 1967, a development of the Viva that ended Vauxhall production in Australia. Holden offered the LC, a Torana with new styling, in 1969 with the availability of Holden's six-cylinder engine. During development, the six-cylinder Torana was intended solely for motor racing, but research showed that a business case existed for such a model. The LC Torana was the first application of Holden's new three-speed Tri-Matic automatic transmission. This was the result of Holden's A$16.5 million transformation of the Woodville, South Australia, factory for its production.
Holden's association with the manufacture of Chevrolets and Pontiacs ended in 1968, coinciding with the year of Holden's next major new model, the HK. This included Holden's first V8 engine, a Chevrolet engine imported from Canada. Models based on the HK series included an extended-length prestige model, the Brougham; and a two-door coupé, the Monaro. The mainstream Holden Special was rebranded the Kingswood, and the basic fleet model, the Standard, became the Belmont. On 3 March 1969, Alexander Rhea, managing director of General Motors-Holden's at the time, was joined by press photographers and the Federal Minister of Shipping and Transport, Ian Sinclair, as the two men drove the two-millionth Holden, an HK Brougham, off the production line. This came just over half a decade since the one-millionth car, an EJ Premier sedan, rolled off the Dandenong line on 25 October 1962. Following the Chevrolet V8 fitted to the HK, the first Australian-designed and mass-produced V8, the Holden V8 engine, debuted in the Hurricane concept of 1969 before being fitted to the facelifted HT model. This was available in two capacities. Late in HT production, use of the new Tri-Matic automatic transmission, first seen in the LC Torana, was phased in as Powerglide stock was exhausted, but Holden's official line was that the HG of 1971 was the first full-sized Holden to receive it.
Despite the arrival of serious competitors—namely, the Ford Falcon, Chrysler Valiant, and Japanese cars—in the 1960s, Holden's locally produced large six- and eight-cylinder cars remained Australia's top-selling vehicles. Sales were boosted by exporting the Kingswood sedan, station wagon, and utility body styles to Indonesia, Trinidad and Tobago, Pakistan, the Philippines, and South Africa in complete knock-down form.
Holden launched the new HQ series in 1971. At this time, the company was producing all of its passenger cars in Australia, and every model was of Australian design; however, by the end of the decade, Holden was producing cars based on overseas designs. The HQ was thoroughly re-engineered, featuring a perimeter frame and semimonocoque (unibody) construction. Other firsts included an all-coil suspension and an extended wheelbase for station wagons, while the utilities and panel vans retained the traditional coil/leaf suspension configuration. The series included the new prestige Statesman brand, which also had a longer wheelbase, replacing the Brougham. The Statesman remains noteworthy because it was not marketed as a "Holden", but rather a "Statesman".
The HQ framework led to a new generation of two-door Monaros, and despite the introduction of similarly sized competitors, the HQ range became the top-selling Holden of all time, with 485,650 units sold in three years; 14,558 units were exported and 72,290 CKD kits were constructed. The HQ series was facelifted in 1974 with the introduction of the HJ, heralding new front-panel styling and a revised rear fascia. This new bodywork was to remain, albeit with minor upgrades, through the HX and HZ series. Detuned engines adhering to government emission standards were brought in with the HX series, whilst the HZ brought considerably improved road handling and comfort with the introduction of radial-tuned suspension. As a result of GM's experimentation with the Wankel rotary engine, as used by Mazda of Japan, an export agreement was initiated in 1975. This involved Holden exporting, without powertrains, HJ and later HX series Premiers as the Mazda Roadpacer AP. Mazda then fitted these cars with the 13B rotary engine and three-speed automatic transmission. Production ended in 1977, after just 840 units had been sold.
Development of the Torana continued with the larger mid-sized LH series released in 1974, offered only as a four-door sedan. The LH Torana was one of the few cars worldwide engineered to accommodate four-, six-, and eight-cylinder engines. This trend continued until Holden introduced the Sunbird in 1976, essentially the four-cylinder Torana with a new name. Designated LX, both the Sunbird and Torana introduced a three-door hatchback variant. A final UC update appeared in 1978. During its production run, the Torana achieved legendary racing success in Australia, achieving victories at the Mount Panorama Circuit in Bathurst, New South Wales.
In 1975, Holden introduced the compact Gemini, the Australian version of the "T-car", based on the Opel Kadett C. The Gemini was an overseas design developed jointly with Isuzu, GM's Japanese affiliate; and was powered by a 1.6-litre four-cylinder engine. Fast becoming a popular car, the Gemini rapidly attained sales leadership in its class, and the nameplate lived on until 1987.
Holden's most popular car to date, the Commodore, was introduced in 1978 as the VB. The new family car was loosely based on the Opel Rekord E body shell, but with the front from the Opel Senator grafted to accommodate the larger Holden six-cylinder and V8 engines. Initially, the Commodore maintained Holden's sales leadership in Australia. However, some of the compromises resulting from the adoption of a design intended for another market hampered the car's acceptance. In particular, it was narrower than its predecessor and its Falcon rival, making it less comfortable for three rear-seat passengers. With the abandonment of left-hand drive markets, Holden exported almost 100,000 Commodores to markets such as New Zealand, Thailand, Hong Kong, Malaysia, Indonesia, Malta and Singapore.
During the 1970s, Holden ran an advertising jingle "Football, Meat Pies, Kangaroos, and Holden cars", a localised version of the "Baseball, Hot Dogs, Apple Pies, and Chevrolet" jingle used by GM's Chevrolet division in the United States.
Holden discontinued the Torana in 1979 and the Sunbird in 1980. After the 1978 introduction of the Commodore, the Torana became the "in-between" car, squeezed between the smaller, more economical Gemini and the larger, more sophisticated Commodore. The closest successor to the Torana was the Camira, released in 1982 as Australia's version of GM's medium-sized "J-car".
The 1980s were challenging for Holden and the Australian automotive industry. The Australian Government tried to revive the industry with the Button car plan, which encouraged car makers to focus on producing fewer models at higher, more economical volumes, and to export cars. The decade opened with the shutdown of the Pagewood, New South Wales, production plant and the introduction of the light commercial Rodeo, sourced from Isuzu in Japan. The Rodeo was available in both two- and four-wheel drive chassis cab models with a choice of petrol and diesel powerplants. The range was updated in 1988 with the TF series, based on the Isuzu TF. Other cars sourced from Isuzu during the 1980s were the four-wheel drive Jackaroo (1981), the Shuttle (1982) van, and the Piazza (1986) three-door sports hatchback. The second-generation Holden Gemini from 1985 was also based on an Isuzu design, although its manufacture was undertaken in Australia.
While GM Australia's commercial vehicle range had originally been mostly based on Bedford products, these had gradually been replaced by Isuzu products. This process began in the 1970s and by 1982 Holden's commercial vehicle arm no longer offered any Bedford products.
The new Holden WB commercial vehicles and the Statesman WB limousines were introduced in 1980. However, the designs, based on the HQ and the updated HJ, HX, and HZ models from the 1970s, were less competitive than similar models in Ford's lineup. Thus, Holden abandoned those vehicle classes altogether in 1984. Sales of the Commodore also fell, with the effects of the 1979 energy crisis lessening, and for the first time the Commodore lost ground to the Ford Falcon. Sales in other segments also suffered when competition from Ford intensified, and other Australian manufacturers (Mitsubishi, Nissan, and Toyota) gained market share. When released in 1982, the Camira initially generated good sales, which later declined because buyers considered the 1.6-litre engine underpowered, and the car's build and ride quality below average. The Camira lasted just seven years, and contributed to Holden's accumulated losses of over A$500 million by the mid-1980s.
In 1984, Holden introduced the VK Commodore, with significant styling changes from the previous VH. The Commodore was next updated in 1986 as the VL, which had new front and rear styling. Controversially, the VL was powered by the 3.0-litre Nissan "RB30" six-cylinder engine and had a Nissan-built, electronically controlled four-speed automatic transmission. Holden even went to court in 1984 to stop local motoring magazine "Wheels" from reporting on the matter. The engine change was necessitated by the legal requirement that all new cars sold in Australia after 1986 had to consume unleaded petrol. Because it was not feasible to convert the existing six-cylinder engine to run on unleaded fuel, the Nissan engine was chosen as the best engine available. However, changing currency exchange rates doubled the cost of the engine and transmission over the life of the VL. The decision to opt for a Japanese-made transmission led to the closure of the Woodville, South Australia, assembly plant. Encouraged by apparent signs of a turnaround, GM paid off Holden's accumulated losses of A$780 million on 19 December 1986. At GM headquarters' request, Holden was then reorganised and recapitalised, separating the engine and car manufacturing divisions in the process. This involved the splitting of Holden into "Holden's Motor Company" (HMC) and "Holden's Engine Company" (HEC). For the most part, car bodies were now manufactured at Elizabeth, South Australia, with engines, as before, confined to the Fishermans Bend plant in Port Melbourne, Victoria. The engine manufacturing business was successful, building four-cylinder "Family II" engines for use in cars built overseas. The final phase of the Commodore's recovery strategy involved the 1988 VN, a significantly wider model powered by the American-designed, Australian-assembled 3.8-litre Buick V6 engine.
Holden began to sell the subcompact Suzuki Swift-based Barina in 1985. The Barina was launched concurrently with the Suzuki-sourced Holden Drover, followed by the Scurry later on in 1985. In the previous year, Nissan Pulsar hatchbacks were rebadged as the Holden Astra, as a result of a deal with Nissan. This arrangement ceased in 1989 when Holden entered a new alliance with Toyota, forming a new company: United Australian Automobile Industries (UAAI). UAAI resulted in Holden selling rebadged versions of Toyota's Corolla and Camry, as the Holden Nova and Apollo respectively, with Toyota re-branding the Commodore as the Lexcen.
The company changed throughout the 1990s, increasing its Australian market share from 21 percent in 1991 to 28.2 percent in 1999. Besides manufacturing Australia's best selling car, which was exported in significant numbers, Holden continued to export many locally produced engines to power cars made elsewhere. In this decade, Holden adopted a strategy of importing cars it needed to offer a full range of competitive vehicles. During 1998, General Motors-Holden's Ltd name was shortened to "Holden Ltd".
On 26 April 1990, GM's New Zealand subsidiary Holden New Zealand announced that production at the assembly plant based in Trentham would be phased out and vehicles would be imported duty-free—this came after the 1984 closure of the Petone assembly line due to low output volumes. During the 1990s, Holden, other Australian automakers, and trade unions pressured the Australian Government to halt the lowering of car import tariffs. By 1997, the federal government had already cut tariffs to 22.5 percent, from 57.5 percent ten years earlier; by 2000, a plan was formulated to reduce the tariffs to 15 percent. Holden was critical, saying that Australia's population was not large enough, and that the changes could harm the local industry.
Holden re-introduced its defunct Statesman title in 1990—this time under the Holden marque, as the Statesman and Caprice. For 1991, Holden updated the Statesman and Caprice with a range of improvements, including the introduction of four-wheel anti-lock brakes (ABS), although a rear-wheel system had been standard on the Statesman Caprice since March 1976. ABS was added to the short-wheelbase Commodore range in 1992. Another returning variant was the full-size utility, and on this occasion it was based on the Commodore. The VN Commodore received a major facelift in 1993 with the VR—compared to the VN, approximately 80 percent of the car model was new. Exterior changes resulted in a smoother overall body and a "twin-kidney" grille—a Commodore styling trait that remained until the 2002 VY model and, as of 2013, remains a permanent staple on HSV variants.
Holden introduced the all-new VT Commodore in 1997, the outcome of a A$600 million development programme that spanned more than five years. The new model featured a rounded exterior body shell, improved handling, and many firsts for an Australian-built car. Also, a stronger body structure increased crash safety. The locally produced, Buick-sourced V6 engine powered the Commodore range, as did the 5.0-litre Holden V8, which was replaced in 1999 by the 5.7-litre "LS" unit.
The UAAI badge-engineered cars first introduced in 1989 sold in far fewer numbers than anticipated, but the Holden Commodore, Toyota Camry, and Corolla were all successful when sold under their original nameplates. The first generation Nova and the donor Corolla were produced at Holden's Dandenong, Victoria facility until 1994. UAAI was dissolved in 1996, and Holden returned to selling only GM products. The Holden Astra and Vectra, both designed by Opel in Germany, replaced the Toyota-sourced Holden Nova and Apollo. This came after the 1994 introduction of the Opel Corsa replacing the already available Suzuki Swift as the source for the Holden Barina. Sales of the full-size Holden Suburban SUV sourced from Chevrolet commenced in 1998—lasting until 2001. Also in 1998, local assembly of the Vectra began at Elizabeth, South Australia. These cars were exported to Japan and Southeast Asia with Opel badges. However, the Vectra did not achieve sufficient sales in Australia to justify local assembly, and reverted to being fully imported in 2000.
Holden's market surge from the 1990s reversed in the 2000s decade. In Australia, Holden's market share dropped from 27.5 percent in 2000 to 15.2 percent in 2006. From March 2003, Holden no longer held the number one sales position in Australia, losing ground to Toyota.
This overall downturn affected Holden's profits; the company recorded a combined gain of A$842.9 million from 2002 to 2004, and a combined loss of A$290 million from 2005 to 2006. Factors contributing to the loss included the development of an all-new model, the strong Australian dollar, and the cost of reducing the workforce at the Elizabeth plant, including the loss of 1,400 jobs after the closure of the third-shift assembly line in 2005, after two years in operation. Holden fared better in 2007, posting an A$6 million loss. This was followed by an A$70.2 million loss in 2008, an A$210.6 million loss in 2009, and a profit of A$112 million in 2010. On 18 May 2005, "Holden Ltd" became "GM Holden Ltd", coinciding with the move to the new Holden headquarters at 191 Salmon Street, Port Melbourne, Victoria.
Holden caused controversy in 2005 with their Holden Employee Pricing television advertisement, which ran from October to December 2005. The campaign publicised, "for the first time ever, all Australians can enjoy the financial benefit of Holden Employee Pricing". However, this did not include a discounted dealer delivery fee and savings on factory fitted options and accessories that employees received. At the same time, employees were given a further discount of 25 to 29 percent on selected models.
Holden revived the Monaro coupe in 2001. Based on the VT Commodore architecture, the coupe attracted worldwide attention after being shown as a concept car at Australian auto shows. The VT Commodore received its first major update in 2002 with the VY series. A mildly facelifted VZ model launched in 2004, introducing the "High Feature" engine. This was built at the Fishermans Bend facility completed in 2003, with a maximum output of 900 engines per day. This has reportedly added A$5.2 billion to the Australian economy; exports account for about A$450 million alone. After the VZ, the "High Feature" engine powered the all-new Holden Commodore (VE). In contrast to previous models, the VE no longer used an Opel-sourced platform adapted both mechanically and in size, but was based on the GM Zeta platform that was earmarked to become a "Global RWD Architecture", until plans were cancelled due to the 2007/08 global financial crisis.
Throughout the 1990s, Opel had also been the source of many Holden models. To increase profitability, Holden looked to the South Korean Daewoo brand for replacements after acquiring a 44.6 percent stake—worth US$251 million—in the company in 2002 as a representative of GM. This was increased to 50.9 percent in 2005, but when GM further increased its stake to 70.1 percent around the time of its 2009 Chapter 11 reorganisation, Holden's interest was relinquished and transferred to another (undisclosed) part of GM.
The Holden-branded Daewoo models commenced with the 2005 Holden Barina, which, based on the Daewoo Kalos, replaced the Opel Corsa as the source of the Barina. In the same year, the Viva, based on the Daewoo Lacetti, replaced the entry-level Holden Astra Classic, although the new-generation Astra introduced in 2004 continued on. The Captiva crossover SUV came next in 2006. After discontinuing the Frontera and Jackaroo models in 2003, Holden was left with only one all-wheel drive model: the Adventra, a Commodore-based station wagon. The fourth model to be replaced with a South Korean alternative was the Vectra, superseded by the mid-size Epica in 2007. As a result of the split between GM and Isuzu, Holden lost the rights to use the "Rodeo" nameplate. Consequently, the Holden Rodeo was facelifted and relaunched as the Colorado in 2008. Following Holden's successful application for a A$149 million government grant to build a localised version of the Chevrolet Cruze in Australia from 2011, Holden in 2009 announced that it would initially import the small car unchanged from South Korea as the Holden Cruze.
Following the government grant announcement, Kevin Rudd, Australia's Prime Minister at the time, stated that production would support 600 new jobs at the Elizabeth facility; however, this failed to take into account Holden's previous announcement, whereby 600 jobs would be shed when production of the "Family II" engine ceased in late 2009. In mid-2013, Holden sought a further A$265 million, in addition to the A$275 million already committed by the governments of Canberra, South Australia, and Victoria, to remain viable as a car manufacturer in Australia. A source close to Holden informed the "Australian" news publication that the car company was losing money on every vehicle it produced and had consequently initiated negotiations to reduce employee wages by up to A$200 per week to cut costs, following the announcement of 400 job cuts and an assembly line reduction of 65 (400 to 335) cars per day. From 2001 to 2012, Holden received over A$150 million a year in subsidies from the Australian government. The subsidies from 2007 onwards exceeded Holden's capital investment over the same period. From 2004 onwards, Holden made a profit only in 2010 and 2011.
In March 2012, Holden was given a $270 million lifeline by the Australian, South Australian and Victorian governments. In return, Holden planned to inject over $1 billion into car manufacturing in Australia. They estimated the new investment package would return around $4 billion to the Australian economy and see GM Holden continue making cars in Australia until at least 2022.
Industry Minister Kim Carr confirmed on 10 July 2013 that talks had been scheduled between the Australian government and Holden. On 13 August 2013, 1,700 employees at the Elizabeth plant in northern Adelaide voted to accept a three-year wage freeze in order to decrease the chances of the production line's closure in 2016. Holden's ultimate survival, though, depended on continued negotiations with the Federal Government—to secure funding for the period from 2016 to 2022—and the final decision of the global headquarters in Detroit, US.
Following an unsuccessful attempt to secure the extra funding required from the new Liberal/National coalition government, on 11 December 2013, General Motors announced that Holden would cease engine and vehicle manufacturing operations in Australia by the end of 2017. As a result, 2,900 jobs would be lost over four years. Beyond 2017 Holden's Australian presence would consist of a national sales company, a parts distribution centre and a global design studio.
In May 2014, GM reversed their decision to abandon the Lang Lang Proving Ground and decided to keep it as part of their engineering capability in Australia.
In 2015, Holden again began selling a range of Opel-derived cars comprising the Astra VXR and Insignia VXR (both based on the OPC models sold by Vauxhall) and Cascada. Later that year, Holden also announced plans to sell the European Astra and the Korean Cruze alongside each other from 2017.
In December 2015, Belgian entrepreneur Guido Dumarey commenced negotiations to buy the Commodore manufacturing plant in South Australia, with a view to continue producing a rebadged Zeta-based premium range of rear and all-wheel drive vehicles for local and export sales. The proposal was met with doubt in South Australia, and it later came to nothing. On 20 October 2017, Holden ceased manufacturing vehicles in Australia. Holden then imported their cars from Opel in Germany and GM plants in Canada, U.S., Thailand, and South Korea.
On 17 February 2020, General Motors announced that the Holden brand would be retired by 2021, after GM stated it would stop producing right-hand drive vehicles globally, leaving the Australian and New Zealand markets altogether at a cost of close to A$1.6 billion.
On 8 May 2015, Jeff Rolfs, Holden's CFO, became interim chairman and managing director. Holden announced on 6 February 2015 that Mark Bernhard would return to Holden as chairman and managing director, the first Australian to hold the post in 25 years. In 2010, Holden sold vehicles across Australia through the Holden Dealer Network (310 authorised stores and 12 service centres), which employed more than 13,500 people.
In 1987, Holden established Holden Special Vehicles (HSV) in partnership with Tom Walkinshaw; the new company primarily manufactured modified, high-performance Commodore variants. To further reinforce the brand, HSV introduced the HSV Dealer Team into the V8 Supercar fold in 2005 under the naming rights of Toll HSV Dealer Team.
Holden's logo, of a lion holding a stone, was introduced in 1928. Holden's Motor Body Builders appointed Rayner Hoff to design the emblem, which refers to a fable in which observations of lions rolling stones led to the invention of the wheel. With the 1948 launch of the 48–215, Holden revised its logo. It commissioned another redesign in 1972 to better represent the company. The emblem was reworked once more in 1994.
Holden began to export vehicles in 1954, sending the FJ to New Zealand. Exports to New Zealand continued, but to broaden its export potential, Holden began to engineer its Commodore, Monaro, and Statesman/Caprice models for both right- and left-hand drive markets. The Middle East was Holden's largest export market, with the Commodore sold as the Chevrolet Lumina from 1998, and the Statesman from 1999 as the Chevrolet Caprice. Commodores were also sold as the Chevrolet Lumina in Brunei, Fiji, and South Africa, and as the Chevrolet Omega in Brazil. Pontiac in North America also imported Commodore sedans from 2008 through to 2009 as the G8. The G8's cessation was a consequence of GM's Chapter 11 bankruptcy, resulting in the demise of the Pontiac brand.
Sales of the Monaro began in 2003 to the Middle East as the Chevrolet Lumina Coupe. Later that year a modified version of the Monaro began selling in the United States (but not in Canada) as the Pontiac GTO, and under the Monaro name through Vauxhall dealerships in the United Kingdom. This arrangement continued through to 2005, when the car was discontinued. Sales of the long-wheelbase Statesman in the Chinese market as the Buick Royaum began in 2005, before it was replaced in 2007 by the Statesman-based Buick Park Avenue. Statesman/Caprice exports to South Korea also began in 2005. These Korean models were sold as the Daewoo Statesman, and later as the Daewoo Veritas from 2008. Holden's move into international markets proved profitable; export revenue increased from A$973 million in 1999 to just under A$1.3 billion in 2006.
From 2011, the WM Caprice was exported to North America as the Chevrolet Caprice PPV, a version of the Caprice built exclusively for law enforcement in North America and sold only to police. From 2007, the HSV-based Commodore was exported to the United Kingdom as the Vauxhall VXR8.
In 2013, Chevrolet announced that exports of the Commodore would resume to North America in the form of the VF Commodore as the Chevrolet SS sedan for the 2014 model year. The Chevrolet SS sedan was also imported to the United States (but again, not to Canada) for 2015 with only minor changes, notably the addition of Magnetic Ride Control suspension and a Tremec TR-6060 manual transmission. For the 2016 model year, the SS sedan received a facelift based on the VF Series II Commodore unveiled in September 2015. In 2017, production of Holden's last two American exports, the SS and the Caprice PPV, was discontinued.
Whilst previously holding the number one position in Australian vehicle sales, Holden has sold progressively fewer cars during most of the 21st century, in part due to a large drop in Commodore sales.
Holden has been involved with factory backed teams in Australian touring car racing since 1968. The main factory-backed teams have been the Holden Dealer Team (1969–1987) and the Holden Racing Team (1990–2016). Since 2017, Triple Eight Race Engineering has been Holden's factory team. Holden has won the Bathurst 1000 32 times, more than any other manufacturer, and has won the Australian Touring Car and Supercars Championship title 20 times. Brad Jones Racing, Charlie Schwerkolt Racing, Erebus Motorsport, Matt Stone Racing, Tekno Autosports and Walkinshaw Andretti United also run Holden Commodores in the series.
Hank Greenberg
Henry Benjamin Greenberg (born Hyman Greenberg; January 1, 1911 – September 4, 1986), nicknamed "Hammerin' Hank", "Hankus Pankus", or "The Hebrew Hammer", was an American professional baseball player and team executive. He played in Major League Baseball (MLB), primarily for the Detroit Tigers as a first baseman in the 1930s and 1940s. A member of the Baseball Hall of Fame and a two-time Most Valuable Player (MVP) Award winner, he was one of the premier power hitters of his generation and is widely considered one of the greatest sluggers in baseball history. He had 47 months of military service, including service in World War II, all of which took place during what would have been prime years in his major league career.
Greenberg played the first twelve of his thirteen major league seasons for Detroit. He was an American League (AL) All-Star for four seasons and an AL MVP in 1935 (as a first baseman) and 1940 (as a left fielder). He had a batting average over .300 in eight seasons, and won two World Series championships with the Tigers (1935 and 1945). He was the AL home run leader four times, and his 58 home runs for the Tigers in 1938 equaled Jimmie Foxx's 1932 mark for the most in one season by anyone other than Babe Ruth, and tied Foxx for the most home runs between Ruth's record 60 in 1927 and Roger Maris' record 61 in 1961. Greenberg was the first major league player to hit 25 or more home runs in a season in each league, and remains the AL record-holder for most runs batted in in a single season by a right-handed batter (183 in 1937, a 154-game schedule).
His career statistics would certainly have been higher had he not served in the military during wartime. In 1947, Greenberg signed a contract for a record $85,000 salary before being sold to the Pittsburgh Pirates, where he played his final MLB season that year. After retiring as a player, Greenberg continued to work in baseball as a team executive for the Cleveland Indians and Chicago White Sox.
Greenberg was the first Jewish superstar in American team sports. He attracted national attention in 1934 in the middle of a pennant race when he had to decide whether to play baseball on two major Jewish holidays; after consultation with his rabbi, he agreed to play on Rosh Hashanah, but on Yom Kippur he spent the day at his synagogue, even though he was not particularly observant religiously. Having endured his share of anti-Semitic abuse in his career, Greenberg was one of the few opposing players to publicly welcome African-American player Jackie Robinson to the major leagues in 1947.
Hank Greenberg was born Hyman Greenberg on January 1, 1911, in Greenwich Village, New York City, to Romanian Orthodox Jewish parents, David and Sarah Greenberg, who had emigrated from Bucharest. The family owned a successful cloth-shrinking plant in New York. He had two brothers, Ben, four years older, and Joe, five years younger, who also played baseball, and a sister, Lillian, two years older. His family moved to the Bronx when he was about seven.
He attended James Monroe High School in the Bronx, where he was an outstanding all-around athlete and was given the long-standing nickname "Bruggy" by his basketball coach. His preferred sport was baseball, and his preferred position was first base. In high school basketball, he was on the Monroe team that won the city championship.
In 1929, the 18-year-old 6-foot-4-inch Greenberg was recruited by the New York Yankees, who already had Lou Gehrig at first base. Greenberg turned them down and instead attended New York University for a year, where he was a member of Sigma Alpha Mu, after which he signed with the Detroit Tigers for $9,000.
Greenberg played minor league baseball for three years. Greenberg played 17 games in 1930 for the Hartford Senators, then played at Raleigh, North Carolina, for the Raleigh Capitals, where he hit .314 with 19 home runs. In 1931, he played at Evansville for the Evansville Hubs in the Illinois–Indiana–Iowa League (.318, 15 homers, 85 RBIs). In 1932, at Beaumont for the Beaumont Exporters in the Texas League, he hit 39 homers with 131 RBIs, won the MVP award, and led Beaumont to the Texas League title.
When he broke into the major leagues in 1930 at age 19, Greenberg was the youngest player in MLB.
In 1933, he rejoined the Tigers and hit .301 while driving in 87 runs. At the same time, he was third in the league in strikeouts (78).
In 1934, his second major-league season, he hit .339 and helped the Tigers reach their first World Series in 25 years. He led the league in doubles, with 63 (the fourth-highest all-time in a single season), and extra base hits (96). He was third in the AL in slugging percentage (.600) – behind Jimmie Foxx and Lou Gehrig, but ahead of Babe Ruth, and in RBIs (139), sixth in batting average (.339), seventh in home runs (26), and ninth in on-base percentage (.404).
Late in the 1934 season, he announced that he would not play on September 10, which was Rosh Hashanah, the Jewish New Year, or on September 19, the Day of Atonement, Yom Kippur. Fans grumbled, "Rosh Hashanah comes every year but the Tigers haven't won the pennant since 1909." Greenberg did considerable soul-searching, and discussed the matter with his rabbi; finally he relented and agreed to play on Rosh Hashanah, but stuck with his decision not to play on Yom Kippur. Dramatically, Greenberg hit two home runs in a 2–1 Tigers victory over Boston on Rosh Hashanah. The next day's "Detroit Free Press" ran the Hebrew lettering for "Happy New Year" across its front page. Columnist and poet Edgar A. Guest expressed the general opinion in a poem titled "Speaking of Greenberg", in which he used the Irish (and thus Catholic) names Murphy and Mulroney. The poem ends with the lines "We shall miss him on the infield and shall miss him at the bat / But he's true to his religion—and I honor him for that." The complete text of the poem appears at the end of Greenberg's biography page at the website of the International Jewish Sports Hall of Fame. The Detroit press was not so kind regarding the Yom Kippur decision, nor were many fans, but Greenberg in his autobiography recalled that he received a standing ovation from congregants at Congregation Shaarey Zedek when he arrived. Absent Greenberg, the Tigers lost to the New York Yankees, 5–2. The Tigers went on to face the St. Louis Cardinals in the 1934 World Series.
In 1935 Greenberg led the league in RBIs (170), total bases (389), and extra base hits (98); tied Foxx for the AL title in home runs (36); was second in the league in doubles (46) and slugging percentage (.628); third in triples (16) and runs scored (121); sixth in on-base percentage (.411) and walks (87); and seventh in batting average (.328). He was unanimously voted the American League's Most Valuable Player. By the All-Star break that season, Greenberg had hit 25 home runs and driven in a still-standing MLB-record 103 runs, but he was not selected to the AL All-Star roster (both managers put themselves on the rosters but did not play). He helped lead the Tigers to their first World Series title, but sprained his wrist in the second game and did not play in the remaining four games.
In 1936, Greenberg reinjured his wrist in a collision with Jake Powell of the Washington Senators in April and did not play the remainder of the season. He finished the season with 16 hits, 1 home run, and 15 RBIs in 12 games.
In 1937, Greenberg recovered from his injury and was voted to the AL All-Star roster, but did not play. On September 19, 1937, he hit the first home run into the center field bleachers at Yankee Stadium. He led the AL by driving in 183 runs (third all-time, behind Hack Wilson in 1930 and Lou Gehrig in 1931), and in extra base hits (103), while batting .337 with 200 hits. He was second in the league in home runs (40), doubles (49), total bases (397), slugging percentage (.668), and walks (102), third in on-base percentage (.436), and seventh in batting average (.337). Greenberg came in third in the vote for MVP.
A prodigious home run hitter, Greenberg narrowly missed breaking Babe Ruth's single-season home run record in 1938, when he hit 58 home runs, leading the league for the second time. That year, he had 11 games with multiple home runs, a new major league record that Sammy Sosa tied in 1998. Greenberg matched what was then the single-season home run record by a right-handed batter (Jimmie Foxx, 1932); the mark stood for 66 years until it was broken by Sammy Sosa and Mark McGwire. Greenberg also had a 59th home run washed away in a rainout. It has long been speculated that Greenberg was intentionally walked late in the season to prevent him from breaking Ruth's record, but Greenberg dismissed such claims as "crazy stories." Nonetheless, Howard Megdal has calculated that in September 1938, Greenberg was walked in over 20% of his plate appearances, by far the highest percentage of his career.
Greenberg was again voted to the AL All-Star roster in 1938, but because he had not been named to the 1935 AL All-Star roster and had been benched in the 1937 game, he declined to accept a starting position on the 1938 AL team and did not play (the NL won 4–1). He led the league in runs scored (144) and at-bats per home run (9.6), tied for the AL lead in walks (119), was second in RBIs (146), slugging percentage (.683), and total bases (380), and third in OBP (.438), and set a still-standing major league record of 39 homers in his home park, the newly reconfigured Briggs Stadium. He came in third in the vote for MVP.
In 1939 Greenberg was voted to the AL All-Star roster for the third year in a row and was a starter at first base, and singled and walked in 4 at-bats (AL won 3-1). He finished second in the AL in home runs (33) and strikeouts (95), third in doubles (42) and slugging percentage (.622), fourth in RBIs (112), sixth in walks (91), and ninth in on-base percentage (.420).
After the 1939 season ended, Greenberg was asked by general manager Jack Zeller to take a salary cut of $5,000 as a result of his off year in power and run production. He was also asked to move from first base to the outfield to accommodate Rudy York, one of the best young hitters of his generation; York had been tried at catcher, third base, and the outfield, and proved a defensive liability at each position. Greenberg, in turn, demanded a $10,000 bonus if he mastered the outfield, insisting that he was the one taking the risk in learning a new position. Greenberg received his bonus at the end of spring training.
In 1940, Greenberg moved from first base to left field. He was named to the AL All-Star team for the fourth consecutive time. In the bottom of the sixth inning of the All-Star Game, Greenberg and Lou Finney were sent in to replace right fielder Charlie Keller and left fielder Ted Williams, with Greenberg playing left field and Finney right field. Greenberg batted twice and fouled out to the catcher both times; the NL won the game 4–0. That season, he led the AL in home runs for the third time in six years with 41, and also in RBIs (150), doubles (50), total bases (384), extra base hits (99), at-bats per home run (14.0), and slugging percentage (.670, 44 points ahead of Joe DiMaggio). He was second in the league behind Williams in runs scored (129) and OBP (.433), all while batting .340 (fifth best in the AL). He also led the Tigers to the AL pennant, and won his second American League MVP award, becoming the first player in major-league history to win an MVP award at two different playing positions.
On October 16, 1940, Greenberg became the first American League player to register for the nation's first peacetime draft. In the spring of 1941, the Detroit draft board initially classified Greenberg as 4F for flat feet after his first physical for military service and recommended him for light duty. Rumors that he had bribed the board, and concern that he would be likened to Jack Dempsey, who had received negative publicity for failing to serve in World War I, led Greenberg to request a reexamination. On April 18, he was found fit for regular military service and was reclassified.
On May 7, 1941, he was inducted into the U.S. Army after playing left field in 19 games and reported to Fort Custer at Battle Creek, Michigan. His salary was cut from $55,000 a year to $21 a month. He was not bitter, and stated, "I made up my mind to go when I was called. My country comes first." In November, while serving as an anti-tank gunner, he was promoted to sergeant, but was honorably discharged on December 5 (the United States Congress released men aged 28 years and older from service), two days before Japan bombed Pearl Harbor.
Greenberg re-enlisted as a sergeant on February 1, 1942, and volunteered for service in the Army Air Forces, becoming the first major league player to do so. He graduated from Officer Candidate School and was commissioned as a first lieutenant in the Air Corps (the new "Air Forces" service retaining the old name for its own logistics and training elements) and was assigned to the Physical Education Program. In February 1944, he was sent to the U.S. Army Special Services school. Promoted to captain, he requested overseas duty later that year and served in the China-Burma-India Theater for over six months, scouting locations for B-29 bomber bases and was a physical training officer with the 58th Bomber Wing. He was a Special Services officer of the 20th Bomber Command, 20th Air Force in China when it began bombing Japan on June 15. He was ordered to New York, and in late 1944, to Richmond, Virginia. Greenberg served 47 months, the longest of any major league player.
Greenberg remained in military uniform until he was placed on the inactive list and discharged from the U.S. Army on June 14, 1945. He was the first major league player to return to MLB after the war. He rejoined the Tigers, and in his first game back on July 1, he homered. The All-Star Game scheduled for July 10 had been officially cancelled on April 24, and MLB did not name All-Stars that season, owing to strict wartime travel restrictions during the final months of World War II. In place of the All-Star Game, seven interleague games (eight had been scheduled) were played on July 9 and 10 to benefit the American Red Cross and the War Relief fund. A group of Associated Press sportswriters named an honorary All-Star roster for each league (no game was played), and Greenberg was among those chosen.
Greenberg, who played left field in 72 games and batted .311 in 1945, helped lead the Tigers to a come-from-behind American League pennant, clinching it with a grand slam in the ninth inning of the final game of the season, hit in near-darkness (Sportsman's Park in St. Louis had no lights). The umpire, George Pipgras, a former Yankee pitching star of the 1920s Murderers' Row teams, supposedly said, "Sorry Hank, but I'm gonna have to call the game. I can't see the ball." Greenberg replied, "Don't worry, George, I can see it just fine", so the game continued. It ended with Greenberg's grand slam on the next pitch, clinching Hal Newhouser's 25th victory of the season. The home run allowed the Tigers to clinch the pennant and avoid a one-game playoff, which otherwise would have been necessary, against the second-place Washington Senators. The Tigers went on to beat the Cubs in the World Series in seven games. Only three home runs were hit in that World Series: Phil Cavarretta hit one for the Cubs in Game One; Greenberg homered in Game Two, driving in three runs in a 4–1 Tigers win; and he hit a two-run homer in the eighth inning of Game Six that tied the score 8–8, though the Cubs went on to win that game with a run in the bottom of the 12th.
In 1946, Greenberg returned to peak form, playing first base. He led the AL in home runs (44) and RBIs (127), each for the fourth time, and was second behind Ted Williams in slugging percentage (.604) and total bases (316).
In 1947, Greenberg and the Tigers had a lengthy salary dispute. When Greenberg decided to retire rather than play for less, Detroit sold his contract to the Pittsburgh Pirates. To persuade him not to retire, Pittsburgh made Greenberg the first baseball player to earn over $80,000 in a season as pure salary (though the exact amount is a matter of some dispute). Team co-owner Bing Crosby recorded a song, "Goodbye, Mr. Ball, Goodbye" with Groucho Marx and Greenberg to celebrate Greenberg's arrival. The Pirates also reduced the size of Forbes Field's cavernous left field, renaming the section "Greenberg Gardens" to accommodate Greenberg's pull-hitting style. Greenberg played first base for the Pirates in 1947 and was one of the few opposing players to publicly welcome Jackie Robinson to the majors.
That year he also had a chance to mentor a young future Hall-of-Famer, the 24-year-old Ralph Kiner. Said Greenberg, "Ralph had a natural home run swing. All he needed was somebody to teach him the value of hard work and self-discipline. Early in the morning on off-days, every chance we got, we worked on hitting." Kiner would go on to hit 51 home runs that year to lead the National League.
In his final season of 1947, Greenberg tied for the league lead in walks with 104, with a .408 on-base percentage and finished eighth in the league in home runs and tenth in slugging percentage. Greenberg became the first major league player to hit 25 or more home runs in a season in each league. Johnny Mize became the second in 1950.
Nevertheless, Greenberg retired as a player to take a front-office post with the Cleveland Indians. No player had ever retired after a final season in which they hit so many home runs. Since then, only Ted Williams (1960, 29), Dave Kingman (1986; 35), Mark McGwire (2001; 29), Barry Bonds (2007; 28) and David Ortiz (2016; 38) have hit as many or more homers in their final season.
Through 2010, he was first in career home runs and RBIs (ahead of Shawn Green) and batting average (ahead of Ryan Braun), and fourth in hits (behind Lou Boudreau), among all-time Jewish major league baseball players.
As a fielder, the 193-cm (6-foot-4-inch) Greenberg was awkward and unsure of himself early in his career, but mastered first base through countless hours of practice. Over the course of his career he demonstrated a higher-than-average fielding percentage and range at first base. When asked to move to left field in 1940 to make room for Rudy York, he worked tirelessly to conquer that position as well, reducing his errors in the outfield from 15 in 1940 to 0 in 1945.
Greenberg felt that runs batted in were more important than home runs. He would tell his teammates, "just get on base", or "just get the runner to third", and he would do the rest.
Greenberg would likely have approached 500 home runs and 1,800 RBIs had he not served in the military. As it was, he compiled 331 home runs, 1,051 runs, and 1,276 RBIs in a 1,394-game career. Greenberg also hit for average, earning a lifetime batting average of .313. Starring as a first baseman and outfielder with the Tigers (1930, 1933–46) and serving only briefly with the Pirates (1947), Greenberg played only nine full seasons. He missed all but 19 games of the 1941 season, the three full seasons that followed, and most of 1945 due to World War II military service, and missed most of another season with a broken wrist.
After the 1947 season, Greenberg retired as a player, and Bill Veeck hired him as the Cleveland Indians' farm system director and, two years later, their general manager; Greenberg did not, however, become a part-owner of the Indians until 1956, well after Veeck had sold his interest in the team. During his tenure, he sponsored more African-American players than any other major league executive. Greenberg's contributions to the Cleveland farm system led to the team's successes throughout the 1950s, although Bill James once wrote that the Indians' late-1950s collapse should also be attributed to him. In 1949, Larry Doby recommended that Greenberg scout three players Doby had played with in the Negro leagues: Hank Aaron, Ernie Banks, and Willie Mays. The next offseason, Doby asked what the Indians' scouts had said about his recommendations. Said Greenberg, "Our guys checked 'em out and their reports were not good. They said that Aaron has a hitch in his swing and will never hit good pitching. Banks is too slow and didn't have enough range [at shortstop], and Mays can't hit a curveball." When Veeck sold his interest, Greenberg remained as general manager and part-owner (for one year) until 1957. He was the mastermind behind a potential move of the club to Minneapolis that was vetoed by the rest of the ownership at the last minute. Greenberg was furious and sold his share soon afterwards.
In 1959, Greenberg and Veeck teamed up for a second time when their syndicate purchased the Chicago White Sox; Veeck served as team president with Greenberg as vice president and general manager. During Veeck and Greenberg's first season, the White Sox won their first AL pennant since 1919. Veeck would sell his shares in the White Sox in 1961, and Greenberg stepped down as general manager on August 26 of that season.
After the 1960 season, the American League announced plans to put a team in Los Angeles. Greenberg immediately became the favorite to become the new team's first owner and persuaded Veeck to join him as his partner. However, when Dodgers owner Walter O'Malley got wind of these developments, he threatened to scuttle the whole deal by invoking his exclusive rights to operate a major league team in southern California. In truth, O'Malley wanted no part of competing against an expansion team owned by a master promoter such as Veeck, even if Veeck was only a minority partner. Greenberg would not drop Veeck as a partner and instead pulled out of the running for what became the Los Angeles Angels (now the Los Angeles Angels of Anaheim). Greenberg later became a successful investment banker, briefly returning to baseball as a minority partner with Veeck when the latter repurchased the White Sox in 1975.
On September 20, 1961, Greenberg along with Bob Neal called a baseball game for ABC between the New York Yankees and Baltimore Orioles.
Greenberg married Caral Gimbel (daughter of Bernard Gimbel of the Gimbel's New York department store family) on February 18, 1946, three days after signing a $60,000 contract with the Tigers. The couple had three children—sons Glenn H. Greenberg and Stephen and a daughter, Alva—before divorcing in 1958. Their son, Stephen, played five years in the Washington Senators/Texas Rangers organization. In 1995, Stephen Greenberg co-founded Classic Sports Network with Brian Bedol, which was purchased by ESPN and became ESPN Classic. He also was the chairman of CSTV, the first cable network devoted exclusively to college sports.
In 1966, Greenberg married Mary Jo Tarola, a minor actress who appeared on-screen as Linda Douglas, and remained with her until his death. They had no children.
Greenberg died of metastatic kidney cancer in Beverly Hills, California, in 1986, and his remains were entombed at Hillside Memorial Park Cemetery, in Culver City, California.
Incidents of anti-Semitism Greenberg faced included players staring at him and anti-Semitic slurs thrown at him by spectators and sometimes opposing players. Examples of these imprecations were: "Hey Mo!" (referring to the Jewish prophet Moses) and "Throw a pork chop—he can't hit that!" (a reference to Judaic kosher laws). In the 1935 World Series, umpire George Moriarty warned some Chicago Cubs players to stop yelling anti-Semitic slurs at Greenberg and eventually cleared the players from the Cubs bench. Moriarty was disciplined for this action by then-commissioner Kenesaw Mountain Landis.
Greenberg befriended Jackie Robinson after he signed with the Dodgers in 1947, and encouraged him; Robinson credited Greenberg with helping him through the difficulties of his rookie year.
In an article in 1976 in "Esquire" magazine, sportswriter Harry Stein published an "All Time All-Star Argument Starter", consisting of five ethnic baseball teams. Greenberg was the first baseman on Stein's Jewish team.
In 2006, Greenberg was featured on a United States postage stamp. The stamp is one of a block of four honoring "baseball sluggers", the others being Mickey Mantle, Mel Ott, and Roy Campanella.
Heinrich Schliemann
Heinrich Schliemann (; 6 January 1822 – 26 December 1890) was a German businessman and a pioneer in the field of archaeology. He was an advocate of the historicity of places mentioned in the works of Homer and an archaeological excavator of Hisarlik, now presumed to be the site of Troy, along with the Mycenaean sites Mycenae and Tiryns. His work lent weight to the idea that Homer's "Iliad" reflects historical events. Schliemann's excavation of nine levels of archaeological remains with dynamite has been criticized as destructive of significant historical artifacts, including the level that is believed to be the historical Troy.
Along with Arthur Evans, Schliemann was a pioneer in the study of Aegean civilization in the Bronze Age. The two men knew of each other, Evans having visited Schliemann's sites. Schliemann had planned to excavate at Knossos but died before fulfilling that dream. Evans bought the site and stepped in to take charge of the project, which was then still in its infancy.
Schliemann was born on 6 January 1822 in Neubukow, Mecklenburg-Schwerin (then part of the German Confederation). His father, Ernst Schliemann, was a Lutheran minister. The family moved to Ankershagen in 1823 (today their home houses the "Heinrich Schliemann Museum").
Heinrich's father was a poor pastor. His mother, Luise Therese Sophie Schliemann, died in 1831, when Heinrich was nine years old. After his mother's death, his father sent Heinrich to live with his uncle. When he was eleven years old, his father paid for him to enroll in the Gymnasium (grammar school) at Neustrelitz. Heinrich's later interest in history was initially encouraged by his father, who had schooled him in the tales of the Iliad and the Odyssey and had given him a copy of Ludwig Jerrer's "Illustrated History of the World" for Christmas in 1829. Schliemann later claimed that at the age of 7 he had declared he would one day excavate the city of Troy.
However, Heinrich had to transfer to the Realschule (vocational school) after his father was accused of embezzling church funds and had to leave that institution in 1836 when his father was no longer able to pay for it. His family's poverty made a university education impossible, so it was Schliemann's early academic experiences that influenced the course of his education as an adult. In his archaeological career, however, there was often a division between Schliemann and the educated professionals.
At age 14, after leaving the Realschule, Heinrich became an apprentice at Herr Holtz's grocery in Fürstenberg. He later said that his passion for Homer was born when he heard a drunkard reciting Homer's verses at the grocer's. He laboured for five years, until he was forced to leave after bursting a blood vessel lifting a heavy barrel. In 1841, Schliemann moved to Hamburg and became a cabin boy on the "Dorothea," a steamer bound for Venezuela. After twelve days at sea, the ship foundered in a gale. The survivors washed up on the shores of the Netherlands. Schliemann became a messenger, office attendant, and later a bookkeeper in Amsterdam.
On March 1, 1844, 22-year-old Schliemann took a position with B. H. Schröder & Co., an import/export firm. In 1846, the firm sent him as a General Agent to St. Petersburg.
In time, Schliemann represented a number of companies. He learned Russian and Greek, employing a system that he used his entire life to learn languages; Schliemann claimed that it took him six weeks to learn a language and wrote his diary in the language of whatever country he happened to be in. By the end of his life, he could converse in English, French, Dutch, Spanish, Portuguese, Italian, Russian, Swedish, Polish, Greek, Latin, and Arabic, besides his native German.
Schliemann's ability with languages was an important part of his career as a businessman in the importing trade. In 1850, he learned of the death of his brother, Ludwig, who had become wealthy as a speculator in the California gold fields.
Schliemann went to California in early 1851 and started a bank in Sacramento, buying and reselling over a million dollars' worth of gold dust in just six months. When the local Rothschild agent complained about short-weight consignments, he left California, pretending it was because of illness. Schliemann claimed that he was there when California became the 31st state in September 1850, and that he thereby acquired United States citizenship. While this story was propounded in Schliemann's autobiography of 1881, Christo Thanos and Wout Arentzen state clearly that Schliemann was in St Petersburg that day, and "in actual fact, ...obtained his American citizenship only in 1869."
According to his memoirs, before arriving in California he dined in Washington, D.C. with President Millard Fillmore and his family; however, W. Calder III says that Schliemann did not attend, but simply read about a similar gathering in the papers.
Schliemann also published what he said was an eyewitness account of the San Francisco Fire of 1851, which he said was in June although it took place in May. At the time he was in Sacramento and used the report of the fire in the "Sacramento Daily Journal" to write his report.
On April 7, 1852, he sold his business and returned to Russia. There he attempted to live the life of a gentleman, which brought him into contact with Ekaterina Petrovna Lyschin (1826–1896), the niece of one of his wealthy friends. Schliemann had previously learned that his childhood sweetheart, Minna, had married.
Heinrich and Ekaterina married on October 12, 1852. The marriage was troubled from the start.
Schliemann next cornered the market in indigo dye and then went into the indigo business itself, turning a good profit. Ekaterina and Heinrich had a son, Sergey (1855–1941), and two daughters, Natalya (1859–1869) and Nadezhda (1861–1935).
Schliemann made yet another quick fortune as a military contractor in the Crimean War, 1854–1856. He cornered the market in saltpeter, sulfur, and lead, constituents of ammunition, which he resold to the Russian government.
By 1858, Schliemann was 36 years old and wealthy enough to retire. In his memoirs, he claimed that he wished to dedicate himself to the pursuit of Troy.
As a consequence of his many travels, Schliemann was often separated from his wife and small children. He spent a month studying at the Sorbonne in 1866, while moving his assets from St. Petersburg to Paris to invest in real estate. He asked his wife to join him, but she refused.
Schliemann threatened to divorce Ekaterina twice before doing so. In 1869, he bought property and settled in Indianapolis for about three months to take advantage of Indiana's liberal divorce laws, although he obtained the divorce by lying about his residency in the U.S. and his intention to remain in the state. He moved to Athens as soon as an Indiana court granted him the divorce and married again two months later.
Heinrich Schliemann never received professional training in archaeology. He was an amateur archaeologist, although many refer to him as though he were a professional.
Schliemann was obsessed with the stories of Homer and ancient Mediterranean civilizations. He dedicated his life's work to unveiling the actual physical remains of the cities of Homer's epic tales. Many refer to him as the "father of pre-Hellenistic archaeology."
In 1868, Schliemann visited sites in the Greek world, published "Ithaka, der Peloponnesus und Troja" in which he asserted that Hissarlik was the site of Troy, and submitted a dissertation in Ancient Greek proposing the same thesis to the University of Rostock. In 1869, he was awarded a PhD "in absentia" from the University of Rostock, in Germany, for that submission. David Traill wrote that the examiners gave him his PhD on the basis of his topographical analyses of Ithaca, which were in part simply translations of another author's work or drawn from poetic descriptions by the same author.
In 1869, Schliemann divorced his first wife, Ekaterina Petrovna Lyshin, whom he had married in 1852 and who had borne him three children. A former teacher and Athenian friend, Theokletos Vimpos, the Archbishop of Mantineia and Kynouria, helped Schliemann find someone "enthusiastic about Homer and about a rebirth of my beloved Greece...with a Greek name and a soul impassioned for learning." The archbishop suggested a young schoolgirl, Sophia Engastromenos, daughter of his cousin. They were married by the archbishop on 23 September 1869. They later had two children, Andromache and Agamemnon Schliemann.
Schliemann was elected a member of the American Antiquarian Society in 1880.
Schliemann's first interest of a classical nature seems to have been the location of Troy. At the time he began excavating in Turkey, the site commonly believed to be Troy was at Pınarbaşı, a hilltop at the south end of the Trojan Plain. The site had been previously excavated by archaeologist and local expert, Frank Calvert. Schliemann performed soundings at Pınarbaşı but was disappointed by his findings. It was Calvert who identified Hissarlik as Troy and suggested Schliemann dig there on land owned by Calvert's family.
Schliemann was at first skeptical about the identification of Hissarlik with Troy but was persuaded by Calvert. Schliemann began digging at Hissarlik in 1870, and by 1873 had discovered nine buried cities. The day before digging was to stop on 15 June 1873, he discovered gold, which he took to be Priam's treasure trove.
A cache of gold and several other objects appeared on or around May 27, 1873; Schliemann named it "Priam's Treasure". He later wrote that he had seen the gold glinting in the dirt and dismissed the workmen so that he and Sophia could excavate it themselves; they removed it in her shawl. However, Schliemann's oft-repeated story of the treasure's being carried by Sophia in her shawl was untrue. Schliemann later admitted fabricating it; at the time of the discovery Sophia was in fact with her family in Athens, following the death of her father. Sophia later wore "the Jewels of Helen" for the public.
Schliemann smuggled the treasure out of Turkey into Greece. The Turkish government sued Schliemann in a Greek court, and he was forced to pay a 10,000 gold franc indemnity; he ended up sending the Constantinople Imperial Museum 50,000 gold francs as well as some of the artifacts. Schliemann published "Troy and Its Remains" in 1874. He at first offered his collections, which included Priam's Gold, to the Greek government, then the French, and finally the Russians. However, in 1881, his collections ended up in Berlin, housed first in the Ethnographic Museum and then in the Museum for Pre- and Early History until the start of WWII. In 1939, all exhibits were packed and stored in the museum basement, then moved to the Prussian State Bank vault in January 1941. Later in 1941, the treasure was moved to the Flakturm at the Berlin Zoological Garden, called the Zoo Tower. Dr. Wilhelm Unverzagt protected the three crates containing the Trojan gold when the Battle of Berlin commenced, right up until SMERSH forces took control of the tower on 1 May. On 26 May 1945, Soviet forces, led by Lt. Gen. Nikolai Antipenko, Andre Konstantinov, deputy head of the Arts Committee, Viktor Lazarev, and Serafim Druzhinin, took the three crates away on trucks. The crates were flown to Moscow on 30 June 1945 and taken to the Pushkin Museum ten days later. In 1994, the museum admitted that the collection was in its possession.
In 1876, he began digging at Mycenae. There, he discovered the Shaft Graves, with their skeletons and more regal gold (including the so-called Mask of Agamemnon). These findings were published in "Mycenae" in 1878.
Although he had received permission in 1876 to continue excavation, Schliemann did not reopen the dig site at Troy until 1878–1879, after another excavation in Ithaca designed to locate a site mentioned in the "Odyssey". This was his second excavation at Troy. Emile Burnouf and Rudolf Virchow joined him there in 1879.
Schliemann began excavation of the Treasury of Minyas at Orchomenus (Boeotia) in 1880.
Schliemann made a third excavation at Troy in 1882–1883, an excavation of Tiryns with Wilhelm Dörpfeld in 1884, and a fourth excavation at Troy, also with Dörpfeld (who emphasized the importance of strata), in 1888–1890.
On August 1, 1890, Schliemann returned reluctantly to Athens, and in November travelled to Halle, where his chronic ear infection was operated upon, on November 13. The doctors deemed the operation a success, but his inner ear became painfully inflamed. Ignoring his doctors' advice, he left the hospital and travelled to Leipzig, Berlin, and Paris. From the latter, he planned to return to Athens in time for Christmas, but his ear condition became even worse. Too sick to make the boat ride from Naples to Greece, Schliemann remained in Naples but managed to make a journey to the ruins of Pompeii. On Christmas Day 1890, he collapsed into a coma; he died in a Naples hotel room the following day; the cause of death was cholesteatoma.
His remains were then transported by friends to the First Cemetery in Athens. They were interred in a mausoleum erected in ancient Greek style, designed by Ernst Ziller in the form of an amphiprostyle temple on top of a tall base. The frieze circling the outside of the mausoleum shows Schliemann conducting the excavations at Mycenae and other sites.
Schliemann's magnificent residence in the city centre of Athens, the "Iliou Melathron" (Ιλίου Μέλαθρον, "Palace of Ilium") houses today the Numismatic Museum of Athens.
Further excavation of the Troy site by others indicated that the level he named the Troy of the "Iliad" was inaccurate, although they retain the names given by Schliemann. In an article for "The Classical World," D.F. Easton wrote that Schliemann "was not very good at separating fact from interpretation" and claimed that, "Even in 1872 Frank Calvert could see from the pottery that Troy II had to be hundreds of years too early to be the Troy of the Trojan War, a point finally proven by the discovery of Mycenaean pottery in Troy VI in 1890."
"King Priam's Treasure" was found in the Troy II level, that of the Early Bronze Age, long before Priam's city of Troy VI or Troy VIIa in the prosperous and elaborate Mycenaean Age. Moreover, the finds were unique. The elaborate gold artifacts do not appear to belong to the Early Bronze Age.
His excavations were condemned by later archaeologists as having destroyed the main layers of the real Troy. Kenneth W. Harl, in the Teaching Company's "Great Ancient Civilizations of Asia Minor" lecture series, sarcastically claimed that Schliemann's excavations were carried out with such rough methods that he did to Troy what the Greeks could not do in their times, destroying and levelling down the entire city walls to the ground.
In 1972, Professor William Calder of the University of Colorado, speaking at a commemoration of Schliemann's birthday, claimed that he had uncovered several possible problems in Schliemann's work. Other investigators followed, such as Professor David Traill of the University of California.
An article published by the National Geographic Society called into question Schliemann's qualifications, his motives, and his methods:
In northwestern Turkey, Heinrich Schliemann excavated the site believed to be Troy in 1870. Schliemann was a German adventurer and con man who took sole credit for the discovery, even though he was digging at the site, called Hisarlik, at the behest of British archaeologist Frank Calvert. [...] Eager to find the legendary treasures of Troy, Schliemann blasted his way down to the second city, where he found what he believed were the jewels that once belonged to Helen. As it turns out, the jewels were a thousand years older than the time described in Homer's epic.
Another article presented similar criticisms when reporting on a speech by University of Pennsylvania scholar C. Brian Rose:
German archaeologist Heinrich Schliemann was the first to explore the Mound of Troy in the 1870s. Unfortunately, he had had no formal education in archaeology, and dug an enormous trench "which we still call the Schliemann Trench," according to Rose, because in the process Schliemann "destroyed a phenomenal amount of material." [...] Only much later in his career would he accept the fact that the treasure had been found at a layer one thousand years removed from the battle between the Greeks and Trojans, and thus that it could not have been the treasure of King Priam. Schliemann may not have discovered the truth, but the publicity stunt worked, making Schliemann and the site famous and igniting the field of Homeric studies in the late 19th century. During this period he was also criticized and ridiculed over claims that he had fathered a child with a local Assyrian girl, allegations of infidelity and adultery that Schliemann neither confirmed nor denied.
Schliemann's methods have been described as "savage and brutal. He plowed through layers of soil and everything in them without proper record keeping—no mapping of finds, few descriptions of discoveries." Carl Blegen forgave his recklessness, saying "Although there were some regrettable blunders, those criticisms are largely colored by a comparison with modern techniques of digging; but it is only fair to remember that before 1876 very few persons, if anyone, yet really knew how excavations should properly be conducted. There was no science of archaeological investigation, and there was probably no other digger who was better than Schliemann in actual field work."
In 1874, Schliemann also initiated and sponsored the removal of medieval edifices from the Acropolis of Athens, including the great Frankish Tower. Despite considerable opposition, including from King George I of Greece, Schliemann saw the project through. The eminent historian of Frankish Greece William Miller later denounced this as "an act of vandalism unworthy of any people imbued with a sense of the continuity of history", and "pedantic barbarism".
Peter Ackroyd's novel "The Fall of Troy" (2006) is based on Schliemann's excavation of Troy. Schliemann is portrayed as "Heinrich Obermann".
Schliemann is also the subject of Chris Kuzneski's novel "The Lost Throne".
Schliemann is the subject of Irving Stone's novel "The Greek Treasure" (1975), which was the basis for the 2007 German television production "" ("Hunt for Troy").
Schliemann is a peripheral character in the historical mystery, "A Terrible Beauty". It is the 11th book in a series of novels featuring Lady Emily Hargreaves by Tasha Alexander.
Schliemann is also mentioned in the 2005 TV film "The Magic of Ordinary Days" by the character Livy.
The questionable authenticity of Schliemann’s discovery of Priam’s Treasure is a central plot point to Lian Dolan’s novel "Helen of Pasadena".
Schliemann is also mentioned in 2011 book "" by the character Ian Welch. | https://en.wikipedia.org/wiki?curid=13628 |
Hypnos
In Greek mythology, Hypnos (; , "sleep") is the personification of sleep; the Roman equivalent is known as Somnus. His name is the origin of the word hypnosis.
Hypnos is the son of Nyx ("The Night") and Erebus ("The Darkness"). His brother is Thanatos ("Death"). Both siblings live in the underworld ("Hades") or in Erebus, another valley of the Greek underworld. According to rumors, Hypnos lived in a big cave from which the river Lethe ("Forgetfulness") flows and where night and day meet. His bed is made of ebony, and at the entrance of the cave grow a number of poppies and other hypnotic plants. No light and no sound would ever enter his grotto. According to Homer, he lives on the island of Lemnos, which was later claimed to be his very own dream-island. He is said to be a calm and gentle god, as he helps humans in need and, through their sleep, owns half of their lives.
Hypnos lived next to his twin brother, Thanatos (Θάνατος, "death personified") in the underworld.
Hypnos' mother was Nyx (Νύξ, "Night"), the deity of Night, and his father was Erebus, the deity of Darkness. Nyx was a dreadful and powerful goddess, and even Zeus feared to enter her realm.
His wife, Pasithea, was one of the youngest of the Charites and was promised to him by Hera, who is the goddess of marriage and birth. Pasithea is the deity of hallucination or relaxation.
Hypnos used his powers to trick Zeus, and in doing so helped the Danaans win the Trojan War. During the war, Hera loathed her brother and husband, Zeus, so she devised a plot to trick him. She decided that in order to do so she needed to make him so enamoured with her that he would fall for the trick. So she washed herself with ambrosia and anointed herself with an oil made especially for her, to make herself impossible for Zeus to resist. She wove flowers through her hair, put on three brilliant pendants for earrings, and donned a wondrous robe. She then called for Aphrodite, the goddess of love, and asked her for a charm that would ensure that her trick would not fail. In order to procure the charm, however, she lied to Aphrodite, because they supported opposite sides in the war. She told Aphrodite that she wanted the charm to help herself and Zeus stop fighting. Aphrodite willingly agreed. Hera was almost ready to trick Zeus, but she needed the help of Hypnos, who had tricked Zeus once before.
Hera called on Hypnos and asked him to help her by putting Zeus to sleep. Hypnos was reluctant, because the last time he had put the god to sleep, Zeus was furious when he awoke. It was Hera who had asked him to trick Zeus the first time as well: furious that Heracles, Zeus' son, had sacked the city of the Trojans, she had Hypnos put Zeus to sleep and set blasts of angry winds upon the sea while Heracles was still sailing home. When Zeus awoke he was furious and went on a rampage looking for Hypnos, who managed to avoid him by hiding with his mother, Nyx. This made Hypnos reluctant to accept Hera's proposal and help her trick Zeus again. Hera first offered him a beautiful golden seat that could never fall apart, with a footstool to go with it. He refused this first offer, remembering the last time he had tricked Zeus. Hera finally got him to agree by promising that he would be married to Pasithea, one of the youngest Graces, whom he had always wanted to marry. Hypnos made her swear by the river Styx and call on the gods of the underworld as witnesses, to ensure that he would indeed marry Pasithea.
Hera went to see Zeus on Gargarus, the topmost peak of Mount Ida. Zeus was extremely taken by her and suspected nothing, as Hypnos was shrouded in a thick mist and hidden upon a pine tree close to where Hera and Zeus were talking. Zeus asked Hera what she was doing there and why she had come from Olympus, and she told him the same lie she had told Aphrodite: that she wanted to go help her parents stop quarrelling, and that she had stopped there to consult him because she didn't want to go without his knowledge and have him be angry with her when he found out. Zeus said that she could go any time, and that she should postpone her visit and stay there with him so they could enjoy each other's company. He told her that he had never loved anyone as much as he loved her at that moment. He took her in his embrace, and Hypnos went to work putting him to sleep with Hera in his arms. While this went on, Hypnos travelled to the ships of the Achaeans to tell Poseidon, god of the sea, that he could now help the Danaans and give them a victory while Zeus was sleeping. This is where Hypnos leaves the story, leaving Poseidon eager to help the Danaans. Thanks to Hypnos, the war changed its course to Hera's favour, and Zeus never found out that Hypnos had tricked him one more time.
According to a passage in "Deipnosophistae", the sophist and dithyrambic poet Licymnius of Chios tells a different tale about the Endymion myth, in which Hypnos, in awe of his beauty, causes him to sleep with his eyes open, so he can fully admire his face.
Hypnos appears in numerous works of art, most of which are vases. One vase on which Hypnos is featured, "Ariadne Abandoned by Theseus," is part of the collection of the Museum of Fine Arts, Boston. On this vase, Hypnos is shown as a winged god dripping Lethean water upon the head of Ariadne as she sleeps. One of the most famous works of art featuring Hypnos is a bronze head of the god himself, now kept in the British Museum in London. This bronze head has wings sprouting from its temples, and the hair is elaborately arranged, some strands tied in knots and some hanging freely from the head.
The English word "hypnosis" is derived from his name, referring to the fact that when hypnotized, a person is put into a sleep-like state (hypnos "sleep" + -osis "condition"). The class of medicines known as "hypnotics" which induce sleep also take their name from Hypnos.
Additionally, the English word "insomnia" comes from the name of his Latin counterpart, Somnus (in- "not" + somnus "sleep"), as do a few less common words such as "somnolent", meaning sleepy or tending to cause sleep, and "hypersomnia", meaning excessive sleep, which can result from many conditions (secondary hypersomnia) or from a rare sleep disorder of unknown cause, idiopathic hypersomnia.
3D model of "Bronze head of Hypnos" via laser scan of a cast of British Museum's bronze. | https://en.wikipedia.org/wiki?curid=13629 |
Holy orders
In certain Christian churches, holy orders are ordained ministries such as bishop, priest, or deacon, and the sacrament or rite by which candidates are ordained to those orders. Churches recognizing these orders include the Catholic Church, the Eastern Orthodox (ιερωσύνη ["hierōsynē"], ιεράτευμα ["hierateuma"], Священство ["Svyashchenstvo"]), Oriental Orthodox, Anglican, Assyrian, Old Catholic, Independent Catholic and some Lutheran churches. Except for Lutherans and some Anglicans, these churches regard ordination as a sacrament (the "sacramentum ordinis"). The Anglo-Catholic tradition within Anglicanism identifies more with the Roman Catholic position about the sacramental nature of ordination.
Denominations have varied conceptions of holy orders. In Anglican and some Lutheran churches the traditional orders of bishop, priest and deacon are bestowed using ordination rites. The extent to which ordination is considered sacramental in these traditions has, however, been a matter of some internal dispute. Baptists are among the denominations that do not consider ministry as being sacramental in nature and would not think of it in terms of "holy orders" as such. Historically, the word "order" (Latin "ordo") designated an established civil body or corporation with a hierarchy, and "ordinatio" meant legal incorporation into an "ordo". The word "holy" refers to the church. In context, therefore, a holy order is set apart for ministry in the church. Other positions, such as pope, patriarch, cardinal, monsignor, archbishop, archimandrite, archpriest, protopresbyter, hieromonk, protodeacon and archdeacon, are not sacramental orders but specialized ministries.
The Eastern Orthodox Church considers ordination (known as "cheirotonia", "laying on of hands") to be a sacred mystery (what in the West is called a sacrament). Although all other mysteries may be performed by a presbyter, ordination may only be conferred by a bishop, and the ordination of a bishop may only be performed by several bishops together. "Cheirotonia" always takes place during the Divine Liturgy.
It was the mission of the Apostles to go forth into all the world and preach the Gospel, baptizing those who believed in the name of the Holy Trinity (). In the Early Church those who presided over congregations were referred to variously as "episcopos" (bishop) or "presbyteros" (priest). These successors of the Apostles were ordained to their office by the laying on of hands, and according to Orthodox theology formed a living, organic link with the Apostles, and through them with Jesus Christ himself. This link is believed to continue in unbroken succession to this day. Over time, the ministry of bishops (who hold the fullness of the priesthood) and presbyters or priests (who hold a portion of the priesthood as bestowed by their bishop) came to be distinguished. In Orthodox terminology, "priesthood" or "sacerdotal" refers to the ministry of bishops and priests.
The Eastern Orthodox Church also has ordination to minor orders (known as "cheirothesia", "imposition of hands") which is performed outside of the Divine Liturgy, typically by a bishop, although certain archimandrites of stavropegial monasteries may bestow cheirothesia on members of their communities.
A bishop is the collector of the money of the diocese and the living Vessel of Grace through whom the "energeia" (divine grace) of the Holy Spirit flows into the rest of the church. A bishop is consecrated through the laying on of hands by several bishops. (With the consent of several other bishops, a single bishop has performed the ordination of another bishop in emergency situations, such as times of persecution.) The consecration of a bishop takes place near the beginning of the Liturgy, since a bishop can, in addition to performing the Mystery of the Eucharist, also ordain priests and deacons. Before the commencement of the Holy Liturgy, the bishop-elect professes in detail, in the middle of the church before the seated bishops who will consecrate him, the doctrines of the Orthodox Christian Faith, and pledges to observe the canons of the Apostles and Councils, the Typikon and customs of the Orthodox Church, and to obey ecclesiastical authority. After the Little Entrance, the arch-priest and arch-deacon conduct the bishop-elect before the Royal Gates, where he is met by the bishops and kneels before the altar on both knees. The Gospel Book is laid over his head, and the consecrating bishops lay their hands upon it while the prayers of ordination are read by the eldest bishop. After this, the newly consecrated bishop ascends the "synthronon" (bishop's throne in the sanctuary) for the first time. Customarily, the newly consecrated bishop ordains a priest and a deacon at the Liturgy during which he is consecrated.
A priest may serve only at the pleasure of his bishop. A bishop bestows faculties (permission to minister within his diocese) by giving a priest chrism and an antimins; he may withdraw faculties and demand the return of these items. The ordination of a priest occurs before the Anaphora (Eucharistic Prayer) in order that he may take part in the celebration of the Eucharist that same day. During the Great Entrance, the candidate for ordination carries the Aër (chalice veil) over his head (rather than on his shoulder, as a deacon otherwise carries it) as a symbol of giving up his diaconate, and comes last in the procession, standing at the end of the pair of lines of priests. After the Aër is taken from the candidate to cover the chalice and diskos, a chair is brought for the bishop to sit on by the northeast corner of the Holy Table (altar). Two deacons go to the priest-elect, who until that point has been standing alone in the middle of the church, bow him down to the west (to the people) and to the east (to the clergy), asking their consent by saying "Command ye!", and then lead him through the holy doors of the altar, where the archdeacon asks the bishop's consent, saying, "Command, most sacred master!" After this, a priest escorts the candidate three times around the Holy Table, during which he kisses each corner of the Holy Table as well as the bishop's epigonation and right hand, and prostrates himself before the Holy Table at each circuit. The candidate is then taken to the southeast corner of the Holy Table and kneels on both knees, resting his forehead on the edge of the Holy Table. The ordaining bishop then places his omophor and right hand over the ordinand's head, recites aloud the first "Prayer of Cheirotonia", and then prays silently the other two prayers of cheirotonia while a deacon quietly recites a litany and the clergy, then the congregation, chant "Lord, have mercy".
Afterwards, the bishop brings the newly ordained priest to stand in the Holy Doors and presents him to the faithful. He then clothes the priest in each of his sacerdotal vestments, at each of which the people sing, "Worthy!". Later, after the Epiklesis of the Liturgy, the bishop hands him a portion of the Lamb (Host) saying:
A deacon may not perform any Sacrament and performs no liturgical services on his own but serves only as an assistant to a priest and may not even vest without the blessing of a priest. The ordination of a deacon occurs after the Anaphora (Eucharistic Prayer) since his role is not in performing the Holy Mystery but consists only in serving; the ceremony is much the same as at the ordination of a priest, but the deacon-elect is presented to the people and escorted to the holy doors by two sub-deacons (his peers, analogous to the two deacons who so present a priest-elect) is escorted three times around the Holy Table by a deacon, and he kneels on only one knee during the "Prayer of Cheirotonia". After being vested as a deacon and given a "liturgical fan (ripidion or hexapterygion)", he is led to the side of the Holy Table where he uses the ripidion to gently fan the Holy Gifts (consecrated Body and Blood of Christ).
The Anglican churches hold their bishops to be in apostolic succession, although there is some difference of opinion with regard to whether ordination is to be regarded as a sacrament. The Anglican Articles of Religion hold that only Baptism and the Lord's Supper are to be counted as sacraments of the gospel, and assert that other rites "commonly called Sacraments", considered to be sacraments by such as the Roman Catholic and Eastern churches, were not ordained by Christ in the Gospel. They do not have the nature of a sacrament of the gospel in the absence of any physical matter such as the water in Baptism and the bread and wine in the Eucharist. The Book of Common Prayer provides rites for ordination of bishops, priests and deacons. Only bishops may ordain. Within Anglicanism, three bishops are normally required for ordination to the episcopate, while one bishop is sufficient for performing ordinations to the priesthood and diaconate.
Lutherans reject the Roman Catholic understanding of holy orders because they do not think sacerdotalism is supported by the Bible. Martin Luther taught that each individual was expected to fulfill his God-appointed task in everyday life. The modern usage of the term vocation as a life-task was first employed by Martin Luther. In Luther's Small Catechism, the holy orders include but are not limited to the following: bishops, pastors, preachers, governmental offices, citizens, husbands, wives, children, employees, employers, young people, and widows. However, also according to the Book of Concord: "But if ordination be understood as applying to the ministry of the Word, we are not unwilling to call ordination a sacrament. For the ministry of the Word has God's command and glorious promises, Rom. 1:16: The Gospel is the power of God unto salvation to every one that believeth. Likewise, Isa. 55:11: So shall My Word be that goeth forth out of My mouth; it shall not return unto Me void, but it shall accomplish that which I please. 12.] If ordination be understood in this way, neither will we refuse to call the imposition of hands a sacrament. For the Church has the command to appoint ministers, which should be most pleasing to us, because we know that God approves this ministry, and is present in the ministry [that God will preach and work through men and those who have been chosen by men]."
The ministerial orders of the Catholic Church are those of bishop, deacon and presbyter; the Latin "sacerdos" refers to the ministerial priesthood shared by bishops and presbyters. The ordained priesthood and the common priesthood (the priesthood of all the baptized) differ in function and essence.
A distinction is made between "priest" and "presbyter". In the 1983 Code of Canon Law, "The Latin words "sacerdos" and "sacerdotium" are used to refer in general to the ministerial priesthood shared by bishops and presbyters. The words "presbyter, presbyterium and presbyteratus" refer to priests [in the English use of the word] and presbyters".
While the consecrated life is neither clerical nor lay by definition, clerics can be members of institutes of consecrated or secular (diocesan) life.
The sequence in which holy orders are received are: minor orders, deacon, priest, bishop.
For Catholics, it is typical that, during seminary training, a man will be ordained to the diaconate, which Catholics since the Second Vatican Council sometimes call the "transitional diaconate" to distinguish men bound for priesthood from permanent deacons. Transitional deacons are licensed to preach sermons (under certain circumstances a permanent deacon may not receive faculties to preach), to perform baptisms, and to witness Catholic marriages, but may perform no other sacraments. They assist at the Eucharist or the Mass, but are not able to consecrate the bread and wine. Normally, after six months or more as a transitional deacon, a man will be ordained to the priesthood. Priests are able to preach, perform baptisms, confirm (with special dispensation from their ordinary), witness marriages, hear confessions and give absolutions, anoint the sick, and celebrate the Eucharist or the Mass.
Orthodox seminarians are typically tonsured as readers before entering seminary, and may later be made subdeacons or deacons; customs vary between seminaries and between Orthodox jurisdictions. Some deacons remain permanently in the diaconate while most subsequently are ordained as priests. Orthodox clergy are typically either married or monastic. Monastic deacons are called hierodeacons, monastic priests are called hieromonks. Orthodox clergy who marry must do so prior to ordination to the subdiaconate (or diaconate, according to local custom) and typically one is either tonsured a monk or married before ordination. A deacon or priest may not marry, or remarry if widowed, without abandoning his clerical office. Often, widowed priests take monastic vows. Orthodox bishops are always monks; a single or widowed man may be elected a bishop but he must be tonsured a monk before consecration as a bishop.
For Anglicans, a person is usually ordained a deacon once he (or she) has completed training at a theological college. The historic practice of a bishop tutoring a candidate himself ("reading for orders") is still to be found. The candidate then typically serves as an assistant curate and may later be ordained as a priest at the discretion of the bishop. Other deacons may choose to remain in this order. Anglican deacons can preach sermons, perform baptisms and conduct funerals, but, unlike priests, cannot celebrate the Eucharist. In most branches of the Anglican church, women can be ordained as priests, and in some of them, can also be ordained bishops.
Bishops are chosen from among priests in churches that adhere to Catholic usage.
In the Roman Catholic Church, bishops, like priests, are celibate and thus unmarried; further, a bishop is said to possess the fullness of the sacrament of holy orders, empowering him to ordain deacons, priests, and – with papal consent – other bishops. If a bishop, especially one acting as an ordinary – a head of a diocese or archdiocese – is to be ordained, three bishops must usually co-consecrate him with one bishop, usually an archbishop or the bishop of the place, being the chief consecrating prelate.
Among Eastern Rite Catholic and Eastern Orthodox churches, which permit married priests, bishops must either be unmarried or agree to abstain from contact with their wives. It is a common misconception that all such bishops come from religious orders; while this is generally true, it is not an absolute rule. In the Catholic (Western and Eastern), Oriental Orthodox and Eastern Orthodox churches, bishops are usually leaders of territorial units called dioceses (or their Eastern equivalent, eparchies). Only bishops can validly administer the sacrament of holy orders.
The Roman Catholic Church unconditionally recognizes the validity of ordinations in the Eastern churches. Some Eastern Orthodox churches reordain Catholic priests who convert while others accept their Roman Catholic ordination using the concept of economia (church economy).
Anglican churches claim to have maintained apostolic succession. The succession of Anglican bishops is not universally recognized, however. The Roman Catholic Church judged Anglican orders invalid when Pope Leo XIII in 1896 wrote in "Apostolicae curae" that Anglican orders lack validity because the rite by which priests were ordained was not correctly worded from 1547 to 1553 and from 1559 to the time of Archbishop William Laud (Archbishop of Canterbury 1633–1645). The papacy claimed the form and matter were inadequate to make a Catholic bishop. The actual "mechanical" succession, the prayer and laying on of hands, was not disputed. Two of the four consecrators of Matthew Parker in 1559 had been consecrated using the English Ordinal and two using the Roman Pontifical. Nonetheless, the Roman Catholic Church held that this caused a break in the continuity of apostolic succession, making all further ordinations null and void.
Eastern Orthodox bishops have, on occasion, granted "economy" when Anglican priests convert to Orthodoxy. Various Orthodox churches have also declared Anglican orders valid subject to a finding that the bishops in question did indeed maintain the true faith, the Orthodox concept of apostolic succession being one in which the faith must be properly adhered to and transmitted, not simply that the ceremony by which a man is made a bishop is conducted correctly.
Changes in the Anglican Ordinal since King Edward VI, and a fuller appreciation of the pre-Reformation ordinals, suggest that the correctness of the enduring dismissal of Anglican orders is questionable. To reduce doubt concerning Anglican apostolic succession, especially since the 1930 Bonn agreement between the Anglican and Old Catholic churches, some Anglican bishops have included among their consecrators bishops of the Old Catholic Church, whose holy orders are recognised as valid and regular by the Roman Catholic Church.
Neither Roman Catholics nor Anglicans recognize the validity of ordinations of ministers in Protestant churches that do not maintain apostolic succession; but some Anglicans, especially Low Church or Evangelical ones, commonly treat Protestant ministers and their sacraments as valid. Rome also does not recognize the apostolic succession of those Lutheran bodies which retained apostolic succession.
Officially, the Anglican Communion accepts the ordinations of those denominations which are in full communion with their own churches, such as the Lutheran state churches of Scandinavia. Those clergy may preside at services requiring a priest if one is not otherwise available.
Married men may be ordained to the diaconate as Permanent Deacons, but in the Latin Rite of the Roman Catholic Church generally may not be ordained to the priesthood. In the Eastern Catholic Churches and in the Eastern Orthodox Church, married deacons may be ordained priests but may not become bishops. Bishops in the Eastern Rites and the Eastern Orthodox churches are almost always drawn from among monks, who have taken a vow of celibacy. They may be widowers, though; it is not required of them never to have been married.
In some cases, widowed permanent deacons have been ordained to the priesthood. There have been some situations in which men previously married and ordained to the priesthood in an Anglican church or in a Lutheran church have been ordained to the Catholic priesthood and allowed to function much as an Eastern Rite priest but in a Latin Rite setting. This is never "sub conditione" (conditionally), as there is in Catholic canon law no true priesthood in Protestant denominations. Such ordination may only happen with the approval of the priest's Bishop and a special permission by the Pope.
Anglican clergy may be married or may marry after ordination. In the Old Catholic Church and the Independent Catholic Churches there are no ordination restrictions related to marriage.
Ordination ritual and procedures vary by denomination. Different churches and denominations specify more or less rigorous requirements for entering into office, and the process of ordination is likewise given more or less ceremonial pomp depending on the group. Many Protestants still communicate authority and ordain to office by having the existing overseers physically lay hands on the candidates for office.
The American Methodist model is an episcopal system loosely based on the Anglican model, as the Methodist Church arose from the Anglican Church. It was first devised under the leadership of Bishops Thomas Coke and Francis Asbury of the Methodist Episcopal Church in the late 18th century. In this approach, an elder (or 'presbyter') is ordained to word (preaching and teaching), sacrament (administering Baptism and the Lord's Supper), order (administering the life of the church and, in the case of bishops, ordaining others for mission and ministry), and service. A deacon is a person ordained only to word and service.
In the United Methodist Church, for instance, seminary graduates are examined and approved by the Conference Board of Ordained Ministry and then the Clergy Session. They are accepted as "probationary (provisional) members of the conference." The resident bishop may commission them to full-time ministry as "provisional" ministers. (Before 1996, the graduate was ordained as a transitional deacon at this point, a provisional role since eliminated. The order of deacon is now a separate and distinct clergy order in the United Methodist Church.) After serving the probationary period, of a minimum of two years, the probationer is then examined again and either continued on probation, discontinued altogether, or approved for ordination. Upon final approval by the Clergy Session of the Conference, the probationer becomes a full member of the Conference and is then ordained as an elder or deacon by the resident Bishop. Those ordained as elders are members of the Order of Elders, and those ordained deacons are members of the Order of Deacons.
John Wesley appointed Thomas Coke (mentioned above as bishop) as 'Superintendent', his translation of the Greek "episcopos", which is normally translated 'bishop' in English. The British Methodist Conference has two distinct orders of presbyter and deacon. It does not have bishops as a separate order of ministry. The British Methodist Church has more than 500 superintendents, who are not a separate order of ministry but a role within the order of presbyters. The roles normally undertaken by bishops are expressed in the ordination of presbyters and deacons by the annual Conference through its president (or a past president); in confirmation by all presbyters; in local oversight by superintendents; and in regional oversight by chairs of District.
Presbyterian churches, following their Scottish forebears, reject the traditions surrounding overseers and instead identify the offices of bishop ("episkopos" in Greek) and elder ("presbuteros" in Greek, from which the term "presbyterian" comes). The two terms seem to be used interchangeably in the Bible (compare Titus 1.5–9 and I Tim. 3.2–7). Their form of church governance is known as presbyterian polity. While there is increasing authority with each level of gathering of elders ('Session' over a congregation or parish, then presbytery, then possibly a synod, then the General Assembly), there is no hierarchy of elders. Each elder has an equal vote at the court on which they stand.
Elders are usually chosen at their local level, either elected by the congregation and approved by the Session, or appointed directly by the Session. Some churches place limits on the term that the elders serve, while others ordain elders for life.
Presbyterians also ordain (by laying on of hands) ministers of Word and Sacrament (sometimes known as 'teaching elders'). These ministers are regarded simply as Presbyters ordained to a different function, but in practice they provide the leadership for local Session.
Some Presbyterians identify those appointed (by the laying on of hands) to serve in practical ways (Acts 6.1–7) as deacons ("diakonos" in Greek, meaning "servant"). In many congregations, a group of men or women is thus set aside to deal with matters such as congregational fabric and finance, releasing elders for more 'spiritual' work. These persons may be known as 'deacons', 'board members' or 'managers', depending on the local tradition. Unlike elders and ministers, they are not usually 'ordained', and are often elected by the congregation for a set period of time.
Other Presbyterians have used an 'order of deacons' as full-time servants of the wider Church. Unlike ministers, they do not administer sacraments or routinely preach. The Church of Scotland has recently begun ordaining deacons to this role.
Unlike the Episcopalian system, but similar to the United Methodist system described above, the two Presbyterian offices are different in "kind" rather than in "degree", since one need not be a deacon before becoming an elder. Since there is no hierarchy, the two offices do not make up an "order" in the technical sense, but the terminology of holy orders is sometimes still employed.
Congregationalist churches implement different schemes, but the officers usually have less authority than in the presbyterian or episcopalian forms. Some ordain only ministers and rotate members on an advisory board (sometimes called a board of elders or a board of deacons). Because the positions are by comparison less powerful, there is usually less rigor or fanfare in how officers are ordained.
The Church of Jesus Christ of Latter-day Saints (LDS Church) accepts the legal authority of clergy to perform marriages but does not recognize any other sacraments performed by ministers not ordained to the Latter-day Saint priesthood. Although the Latter-day Saints do claim a doctrine of a certain spiritual "apostolic succession," it is significantly different from that claimed by Catholics and Protestants, since there is no succession or continuity between the first century and the lifetime of Joseph Smith, the founder of the LDS church. Mormons teach that the priesthood was lost in ancient times and not restored by Christ until the nineteenth century, when it was given directly to Joseph Smith.
The Church of Jesus Christ of Latter-day Saints has a relatively open priesthood, ordaining nearly all worthy adult males and boys of the age of twelve and older. Latter-day Saint priesthood consists of two divisions: the Melchizedek Priesthood and the Aaronic Priesthood. The Melchizedek Priesthood is so named because Melchizedek was such a great high priest. Before his day it was called the Holy Priesthood, after the Order of the Son of God; but out of respect or reverence to the name of the Supreme Being, to avoid the too frequent repetition of his name, the church, in ancient days, called that priesthood after Melchizedek. The lesser priesthood is an appendage to the Melchizedek Priesthood. It is called the Aaronic Priesthood because it was conferred on Aaron and his sons throughout all their generations.
The offices, or ranks, of the Melchizedek order (in roughly descending order) include apostle, seventy, patriarch, high priest, and elder. The offices of the Aaronic order are bishop, priest, teacher, and deacon. The manner of ordination consists of the laying on of hands by two or more men holding at least the office being conferred while one acts as voice in conferring the priesthood or office and usually pronounces a blessing upon the recipient. Teachers and deacons do not have the authority to ordain others to the priesthood. All church members are authorized to teach and preach regardless of priesthood ordination so long as they maintain good standing within the church. The church does not use the term "holy orders."
Community of Christ has a largely volunteer priesthood, and all members of the priesthood are free to marry (as traditionally defined by the Christian community). The priesthood is divided into two orders, the Aaronic priesthood and the Melchisedec priesthood. The Aaronic order consists of the offices of deacon, teacher and priest. The Melchisedec order consists of the offices of elder (including the specialized office of seventy) and high priest (including the specialized offices of evangelist, bishop, apostle, and prophet). Paid ministers include "appointees" and the general officers of the church, which include some specialized priesthood offices (such as the office of president, reserved for the three top members of the church leadership team). Since 1984, women have been eligible for priesthood, which is conferred through the sacrament of ordination by the laying-on of hands. While there is technically no age requirement for any office of priesthood, there is no automatic ordination or progression as in the LDS Church. Young people are occasionally ordained as deacons, and sometimes teachers or priests, but generally most priesthood members are called following completion of post-secondary education. In March 2007 a woman was ordained for the first time to the office of president.
The Roman Catholic Church, in accordance with its understanding of the theological tradition on the issue, and the definitive clarification found in the encyclical letter "Ordinatio sacerdotalis" (1994) written by Pope John Paul II, officially teaches that it has no authority to ordain women as priests, and thus there is no possibility of women becoming priests at any time in the future. "Ordaining" women as deaconesses is not a possibility in any sacramental sense of the diaconate, for a deaconess is not simply a woman who is a deacon but instead holds a position of lay service. As such, she does not receive the sacrament of holy orders. Many Anglican and Protestant churches ordain women, but in many cases, only to the office of deacon.
Various branches of the Orthodox churches, including the Greek Orthodox, currently set aside women as deaconesses. Some churches are internally divided on whether the Scriptures permit the ordination of women. When one considers the relative size of the churches (1.1 billion Roman Catholics, 300 million Orthodox, 590 million Anglicans and Protestants), it is a minority of Christian churches that ordain women. Protestants constitute about 27 percent of Christians worldwide, and most of their churches that do ordain women have only done so within the past century.
In some traditions women may be ordained to the same orders as men. In others women are restricted from certain offices. Women may be ordained as bishops in the Old Catholic churches and in the Anglican/Episcopal churches of Scotland, Ireland, Wales, Cuba, Brazil, South Africa, Canada, the US, Australia, and Aotearoa New Zealand and Polynesia. On 19 September 2013, Pat Storey was chosen by the House of Bishops to succeed Richard Clarke as Bishop of Meath and Kildare; she was consecrated to the episcopate at Christ Church Cathedral, Dublin, on 30 November 2013, becoming the first woman elected a bishop in the Church of Ireland and the first female Anglican Communion bishop in Ireland and Great Britain. The Church of England's General Synod voted in 2014 to allow women to be ordained to the episcopate, with Libby Lane becoming the first woman ordained bishop. Continuing Anglican churches do not permit women to be ordained. In some Protestant denominations, women may serve as assistant pastors but not as pastors in charge of congregations. In some denominations, women can be ordained as elders or deacons, and some allow the ordination of women to certain religious orders. Within certain traditions, such as the Anglican and Lutheran, there is a diversity of theology and practice regarding the ordination of women.
The ordination of lesbian, gay, bisexual or transgender clergy who are sexually active, and open about it, represents a fiercely contested subject within many mainline Protestant communities. The majority of churches are opposed to such ordinations because they view homosexuality as a sin and incompatible with Biblical teaching and traditional Christian practice. Yet there is an increasing number of Christian congregations and communities that are open to ordaining people who are gay or lesbian. These are liberal Protestant denominations, such as the Episcopal Church, the United Church of Christ, and the Evangelical Lutheran Church in America, plus the small Metropolitan Community Church, founded as a church intending to minister primarily to LGBT people, and the Church of Sweden, where such clergy may serve in senior clerical positions. The Church of Norway has for many years had both gay and lesbian priests, and even bishops; in 2006 the first woman appointed a bishop in Norway came out as an active homosexual, stating that she had been so since before she joined the church.
The issue of ordination has caused particular controversy in the worldwide Anglican Communion, following the approval of Gene Robinson to be Bishop of New Hampshire in the US Episcopal Church.
Homer
Homer ("Hómēros") is the presumed author of the "Iliad" and the "Odyssey", two epic poems that are the central works of ancient Greek literature. The "Iliad" is set during the Trojan War, the ten-year siege of the city of Troy by a coalition of Greek kingdoms. It focuses on a quarrel between King Agamemnon and the warrior Achilles lasting a few weeks during the last year of the war. The "Odyssey" focuses on the ten-year journey home of Odysseus, king of Ithaca, after the fall of Troy. Many accounts of Homer's life circulated in classical antiquity, the most widespread being that he was a blind bard from Ionia, a region of central coastal Anatolia in present-day Turkey. Modern scholars consider these accounts legendary.
The Homeric Question – concerning by whom, when, where and under what circumstances the "Iliad" and "Odyssey" were composed – continues to be debated. Broadly speaking, modern scholarly opinion falls into two groups. One holds that most of the "Iliad" and (according to some) the "Odyssey" are the works of a single poet of genius. The other considers the Homeric poems to be the result of a process of working and reworking by many contributors, and that "Homer" is best seen as a label for an entire tradition. It is generally accepted that the poems were composed at some point around the late eighth or early seventh century BC.
The poems are in Homeric Greek, also known as Epic Greek, a literary language which shows a mixture of features of the Ionic and Aeolic dialects from different centuries; the predominant influence is Eastern Ionic. Most researchers believe that the poems were originally transmitted orally. From antiquity until the present day, the influence of Homeric epic on Western civilization has been great, inspiring many of its most famous works of literature, music, art and film. The Homeric epics were the greatest influence on ancient Greek culture and education; to Plato, Homer was simply the one who "has taught Greece" – "ten Hellada pepaideuken".
Today only the "Iliad" and "Odyssey" are associated with the name 'Homer'. In antiquity, a very large number of other works were sometimes attributed to him, including the "Homeric Hymns", the "Contest of Homer and Hesiod", the "Little Iliad", the "Nostoi", the "Thebaid", the "Cypria", the "Epigoni", the comic mini-epic "Batrachomyomachia" ("The Frog-Mouse War"), the "Margites", the "Capture of Oechalia", and the "Phocais". These claims are not considered authentic today and were by no means universally accepted in the ancient world. As with the multitude of legends surrounding Homer's life, they indicate little more than the centrality of Homer to ancient Greek culture.
Many traditions circulated in the ancient world concerning Homer, most of which are lost. Modern scholarly consensus is that they have no value as history. Some claims were established early and repeated often. They include that Homer was blind (taking as self-referential a passage describing the blind bard Demodocus), that he was born in Chios, that he was the son of the river Meles and the nymph Critheïs, that he was a wandering bard, that he composed a varying list of other works (the "Homerica"), that he died either in Ios or after failing to solve a riddle set by fishermen, and various explanations for the name "Homer". The two best known ancient biographies of Homer are the "Life of Homer" by the Pseudo-Herodotus and the "Contest of Homer and Hesiod".
The study of Homer is one of the oldest topics in scholarship, dating back to antiquity. Nonetheless, the aims of Homeric studies have changed over the course of the millennia. The earliest preserved comments on Homer concern his treatment of the gods, which hostile critics such as the poet Xenophanes of Colophon denounced as immoral. The allegorist Theagenes of Rhegium is said to have defended Homer by arguing that the Homeric poems are allegories. The "Iliad" and the "Odyssey" were widely used as school texts in ancient Greek and Hellenistic cultures. They were the first literary works taught to all students. The "Iliad", particularly its first few books, was far more intently studied than the "Odyssey" during the Hellenistic and Roman periods.
As a result of the poems' prominence in classical Greek education, extensive commentaries on them developed to explain parts of the poems that were culturally or linguistically difficult. During the Hellenistic and Roman periods, many interpreters, especially the Stoics, who believed that Homeric poems conveyed Stoic doctrines, regarded them as allegories, containing hidden wisdom. Perhaps partially because of the Homeric poems' extensive use in education, many authors believed that Homer's original purpose had been to educate. Homer's wisdom became so widely praised that he began to acquire the image of almost a prototypical philosopher. Byzantine scholars such as Eustathius of Thessalonica and John Tzetzes produced commentaries, extensions and scholia to Homer, especially in the twelfth century. Eustathius's commentary on the "Iliad" alone is massive, sprawling over nearly 4,000 oversized pages in a twenty-first century printed version and his commentary on the "Odyssey" an additional nearly 2,000.
In 1488, the Greek scholar Demetrios Chalkokondyles published the "editio princeps" of the Homeric poems. The earliest modern Homeric scholars started with the same basic approaches towards the Homeric poems as scholars in antiquity. The allegorical interpretation of the Homeric poems that had been so prevalent in antiquity returned to become the prevailing view of the Renaissance. Renaissance humanists praised Homer as the archetypically wise poet, whose writings contain hidden wisdom, disguised through allegory. In western Europe during the Renaissance, Virgil was more widely read than Homer and Homer was often seen through a Virgilian lens.
In 1664, contradicting the widespread praise of Homer as the epitome of wisdom, François Hédelin, abbé d'Aubignac wrote a scathing attack on the Homeric poems, declaring that they were incoherent, immoral, tasteless, and without style, that Homer never existed, and that the poems were hastily cobbled together by incompetent editors from unrelated oral songs. Fifty years later, the English scholar Richard Bentley concluded that Homer did exist, but that he was an obscure, prehistoric oral poet whose compositions bear little relation to the "Iliad" and the "Odyssey" as they have been passed down. According to Bentley, Homer "wrote a Sequel of Songs and Rhapsodies, to be sung by himself for small Earnings and good Cheer at Festivals and other Days of Merriment; the "Ilias" he wrote for men, and the "Odysseis" for the other Sex. These loose songs were not collected together in the Form of an epic Poem till Pisistratus' time, about 500 Years after."
Friedrich August Wolf's "Prolegomena ad Homerum", published in 1795, argued that much of the material later incorporated into the "Iliad" and the "Odyssey" was originally composed in the tenth century BC in the form of short, separate oral songs, which passed through oral tradition for roughly four hundred years before being assembled into prototypical versions of the "Iliad" and the "Odyssey" in the sixth century BC by literate authors. After being written down, Wolf maintained that the two poems were extensively edited, modernized, and eventually shaped into their present state as artistic unities. Wolf and the "Analyst" school, which led the field in the nineteenth century, sought to recover the original, authentic poems which were thought to be concealed by later excrescences.
Within the Analyst school were two camps: proponents of the "lay theory," which held that the "Iliad" and the "Odyssey" were put together from a large number of short, independent songs, and proponents of the "nucleus theory", which held that Homer had originally composed shorter versions of the "Iliad" and the "Odyssey", which later poets expanded and revised. A small group of scholars opposed to the Analysts, dubbed "Unitarians", saw the later additions as superior, the work of a single inspired poet. By around 1830, the central preoccupations of Homeric scholars, dealing with whether or not "Homer" actually existed, when and how the Homeric poems originated, how they were transmitted, when and how they were finally written down, and their overall unity, had been dubbed "the Homeric Question".
Following World War I, the Analyst school began to fall out of favor among Homeric scholars. It did not die out entirely, but it came to be increasingly seen as a discredited dead end. Starting in around 1928, Milman Parry and Albert Lord, after their studies of folk bards in the Balkans, developed the "Oral-Formulaic Theory" that the Homeric poems were originally composed through improvised oral performances, which relied on traditional epithets and poetic formulas. This theory found very wide scholarly acceptance and explained many previously puzzling features of the Homeric poems, including their unusually archaic language, their extensive use of stock epithets, and their other "repetitive" features. Many scholars concluded that the "Homeric question" had finally been answered. Meanwhile, the 'Neoanalysts' sought to bridge the gap between the 'Analysts' and 'Unitarians'. The Neoanalysts sought to trace the relationships between the Homeric poems and other epic poems, which have now been lost, but of which modern scholars do possess some patchy knowledge. Knowledge of earlier versions of the epics can be derived from anomalies of structure and detail in our surviving version of the Iliad and Odyssey. These anomalies point to earlier versions of the Iliad in which Ajax played a more prominent role, in which the Achaean embassy to Achilles comprised different characters, and in which Patroclus was actually mistaken for Achilles by the Trojans. They point to earlier versions of the Odyssey in which Telemachus went in search of news of his father not to Menelaus in Sparta but to Idomeneus in Crete, in which Telemachus met up with his father in Crete and conspired with him to return to Ithaca disguised as the soothsayer Theoclymenus, and in which Penelope recognized Odysseus much earlier in the narrative and conspired with him in the destruction of the suitors. 
Neoanalysis can be viewed as a form of Analysis informed by the principles of Oral Theory, recognizing as it does the existence and influence of previously existing tales and yet appreciating the technique of a single poet in adapting them to his "Iliad" and "Odyssey".
Most contemporary scholars, although they disagree on other questions about the genesis of the poems, agree that the "Iliad" and the "Odyssey" were not produced by the same author, based on "the many differences of narrative manner, theology, ethics, vocabulary, and geographical perspective, and by the apparently imitative character of certain passages of the "Odyssey" in relation to the "Iliad"." Nearly all scholars agree that the "Iliad" and the "Odyssey" are unified poems, in that each poem shows a clear overall design, and that they are not merely strung together from unrelated songs. It is also generally agreed that each poem was composed mostly by a single author, who probably relied heavily on older oral traditions. Nearly all scholars agree that the "Doloneia" in Book X of the "Iliad" is not part of the original poem, but rather a later insertion by a different poet.
Some ancient scholars believed Homer to have been an eyewitness to the Trojan War; others thought he had lived up to 500 years afterwards. Contemporary scholars continue to debate the date of the poems. A long history of oral transmission lies behind the composition of the poems, complicating the search for a precise date. At one extreme, Richard Janko has proposed a date for both poems to the eighth century BC based on linguistic analysis and statistics. Barry B. Powell dates the composition of the "Iliad" and the "Odyssey" to sometime between 800 and 750 BC, based on the statement from Herodotus, who lived in the late fifth century BC, that Homer lived four hundred years before his own time "and not more" (καὶ οὐ πλέοσι), and on the fact that the poems do not mention hoplite battle tactics, inhumation, or literacy. Martin Litchfield West has argued that the "Iliad" echoes the poetry of Hesiod, and that it must have been composed around 660–650 BC at the earliest, with the "Odyssey" up to a generation later. He also interprets passages in the "Iliad" as showing knowledge of historical events that occurred in the ancient Near East during the middle of the seventh century BC, including the destruction of Babylon by Sennacherib in 689 BC and the Sack of Thebes by Ashurbanipal in 663/4 BC. At the other extreme, a few American scholars such as Gregory Nagy see "Homer" as a continually evolving tradition, which grew much more stable as the tradition progressed, but which did not fully cease to continue changing and evolving until as late as the middle of the second century BC.
"Homer" is a name of unknown etymological origin, around which many theories were erected in antiquity. One such linkage was to the Greek ("hómēros"), "hostage" (or "surety"). The explanations suggested by modern scholars tend to mirror their position on the overall Homeric question. Nagy interprets it as "he who fits (the song) together". West has advanced both possible Greek and Phoenician etymologies.
Scholars continue to debate questions such as whether the Trojan War actually took place – and if so when and where – and to what extent the society depicted by Homer is based on his own or one which was, even at the time of the poems' composition, known only as legend. The Homeric epics are largely set in the east and center of the Mediterranean, with some scattered references to Egypt, Ethiopia and other distant lands, in a warlike society that resembles that of the Greek world slightly before the hypothesized date of the poems' composition.
In ancient Greek chronology, the sack of Troy was dated to 1184 BC. By the nineteenth century, there was widespread scholarly skepticism about whether the Trojan War had ever happened and whether Troy had even existed, but in 1873 Heinrich Schliemann announced to the world that he had discovered the ruins of Homer's Troy at Hissarlik in modern Turkey. Some contemporary scholars think the destruction of Troy VIIa "circa" 1220 BC was the origin of the myth of the Trojan War, while others think the poem was inspired by multiple similar sieges that took place over the centuries.
Most scholars now agree that the Homeric poems depict customs and elements of the material world that are derived from different periods of Greek history. For instance, the heroes in the poems use bronze weapons, characteristic of the Bronze Age in which the poems are set, rather than the later Iron Age during which they were composed; yet the same heroes are cremated (an Iron Age practice) rather than buried (as they were in the Bronze Age). In some parts of the Homeric poems, heroes are accurately described as carrying large shields like those used by warriors during the Mycenaean period, but, in other places, they are instead described carrying the smaller shields that were commonly used during the time when the poems were written in the early Iron Age.
In the "Iliad" 10.260–265, Odysseus is described as wearing a helmet made of boar's tusks. Such helmets were not worn in Homer's time, but were commonly worn by aristocratic warriors between 1600 and 1150 BC. The decipherment of Linear B in the 1950s by Michael Ventris and continued archaeological investigation have increased modern scholars' understanding of Aegean civilisation, which in many ways resembles the ancient Near East more than the society described by Homer. Some aspects of the Homeric world are simply made up; for instance, the "Iliad" 22.145–56 describes two springs near the city of Troy, one running steaming hot and the other icy cold. It is here that Hector takes his final stand against Achilles. Archaeologists, however, have uncovered no evidence that springs of this description ever actually existed.
The Homeric epics are written in an artificial literary language or 'Kunstsprache' only used in epic hexameter poetry. Homeric Greek shows features of multiple regional Greek dialects and periods, but is fundamentally based on Ionic Greek, in keeping with the tradition that Homer was from Ionia. Linguistic analysis suggests that the "Iliad" was composed slightly before the "Odyssey", and that Homeric formulae preserve older features than other parts of the poems.
The Homeric poems were composed in unrhymed dactylic hexameter; ancient Greek metre was quantity-based rather than stress-based. Homer frequently uses set phrases such as epithets ('crafty Odysseus', 'rosy-fingered Dawn', 'owl-eyed Athena', etc.), Homeric formulae ('and then answered [him/her], Agamemnon, king of men', 'when the early-born rose-fingered Dawn came to light', 'thus he/she spoke'), simile, type scenes, ring composition and repetition. These habits aid the extemporizing bard, and are characteristic of oral poetry. For instance, the main words of a Homeric sentence are generally placed towards the beginning, whereas literate poets like Virgil or Milton use longer and more complicated syntactical structures. Homer then expands on these ideas in subsequent clauses; this technique is called parataxis.
The so-called 'type scenes' ("typischen Scenen"), were named by Walter Arend in 1933. He noted that Homer often, when describing frequently recurring activities such as eating, praying, fighting and dressing, used blocks of set phrases in sequence that were then elaborated by the poet. The 'Analyst' school had considered these repetitions as un-Homeric, whereas Arend interpreted them philosophically. Parry and Lord noted that these conventions are found in many other cultures.
'Ring composition' or chiastic structure (when a phrase or idea is repeated at both the beginning and end of a story, or a series of such ideas first appears in the order A, B, C... before being reversed as ...C, B, A) has been observed in the Homeric epics. Opinion differs as to whether these occurrences are a conscious artistic device, a mnemonic aid or a spontaneous feature of human storytelling.
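Stated formally, the A, B, C... C, B, A pattern is a palindrome over the sequence of motifs, which makes the strict form of the device easy to test mechanically. The minimal sketch below (in Python; the motif labels are hypothetical, chosen only for illustration) checks a sequence of motifs for strict ring composition:

```python
def is_chiastic(motifs):
    """Return True when a sequence of motifs shows strict ring
    composition: A, B, C ... C, B, A, i.e. the sequence reads
    the same forwards and backwards."""
    return motifs == list(reversed(motifs))

# Hypothetical motif labels a reader might assign to an episode:
print(is_chiastic(["arming", "boast", "duel", "boast", "arming"]))  # True
print(is_chiastic(["arming", "boast", "duel"]))                     # False
```

In practice the device is looser than this strict mirror test, which is one reason opinion differs on whether its occurrences are deliberate.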
Both of the Homeric poems begin with an invocation to the Muse. In the "Iliad", the poet invokes her to sing of "the anger of Achilles", and, in the "Odyssey", he asks her to sing of "the man of many ways". A similar opening was later employed by Virgil in his "Aeneid".
The orally transmitted Homeric poems were put into written form at some point between the eighth and sixth centuries BC. Some scholars believe that they were dictated to a scribe by the poet and that our inherited versions of the "Iliad" and "Odyssey" were in origin orally-dictated texts. Albert Lord noted that the Balkan bards that he was studying revised and expanded their songs in their process of dictating. Some scholars hypothesize that a similar process of revision and expansion occurred when the Homeric poems were first written down. Other scholars hold that, after the poems were created in the eighth century, they continued to be orally transmitted with considerable revision until they were written down in the sixth century. After textualisation, the poems were each divided into 24 rhapsodes, today referred to as books, and labelled by the letters of the Greek alphabet. Most scholars attribute the book divisions to the Hellenistic scholars of Alexandria, in Egypt. Some trace the divisions back further to the Classical period. Very few credit Homer himself with the divisions.
In antiquity, it was widely held that the Homeric poems were collected and organised in Athens in the late sixth century BC by the tyrant Peisistratos (died 528/7 BC), in what subsequent scholars have dubbed the "Peisistratean recension". The idea that the Homeric poems were originally transmitted orally and first written down during the reign of Peisistratos is referenced by the first-century BC Roman orator Cicero and is also referenced in a number of other surviving sources, including two ancient "Lives of Homer". From around 150 BC, the texts of the Homeric poems seem to have become relatively established. After the establishment of the Library of Alexandria, Homeric scholars such as Zenodotus of Ephesus, Aristophanes of Byzantium and in particular Aristarchus of Samothrace helped establish a canonical text.
The first printed edition of Homer was produced in 1488 in Milan, Italy. Today scholars use medieval manuscripts, papyri and other sources; some argue for a "multi-text" view, rather than seeking a single definitive text. The nineteenth-century edition of Arthur Ludwich mainly follows Aristarchus's work, whereas van Thiel's (1991, 1996) follows the medieval vulgate. Others, such as Martin West (1998–2000) or T.W. Allen, fall somewhere between these two extremes.
This is a partial list of translations into English of Homer's "Iliad" and "Odyssey".
Hugo Gernsback
Hugo Gernsback (; born Hugo Gernsbacher, August 16, 1884 – August 19, 1967) was a Luxembourgish-American inventor, writer, editor, and magazine publisher, best known for publications including the first science fiction magazine. His contributions to the genre as publisher—although not as a writer—were so significant that, along with the novelists H. G. Wells and Jules Verne, he is sometimes called "The Father of Science Fiction". In his honour, annual awards presented at the World Science Fiction Convention are named the "Hugos".
Gernsback was born in 1884 in Luxembourg City, to Berta (Dürlacher), a housewife, and Moritz Gernsbacher, a winemaker. His family was Jewish. Gernsback emigrated to the United States in 1904 and later became a naturalized citizen. He married three times: to Rose Harvey in 1906, Dorothy Kantrowitz in 1921, and Mary Hancher in 1951. In 1925, he founded radio station WRNY, which was broadcast from the 18th floor of the Roosevelt Hotel in New York City. In 1928, WRNY aired some of the first television broadcasts: the audio was paused while each artist waved or bowed onscreen, and when the audio resumed, they performed. Gernsback is also considered a pioneer in amateur radio.
Before helping to create science fiction, Gernsback was an entrepreneur in the electronics industry, importing radio parts from Europe to the United States and helping to popularize amateur "wireless". In April 1908 he founded "Modern Electrics", the world's first magazine about both electronics and radio, called "wireless" at the time. While the cover of the magazine itself states it was a catalog, most historians note that it contained articles, features, and plotlines, qualifying it as a magazine.
Under its auspices, in January 1909, he founded the Wireless Association of America, which had 10,000 members within a year. In 1912, Gernsback said that he estimated 400,000 people in the U.S. were involved in amateur radio. In 1913, he founded a similar magazine, "The Electrical Experimenter", which became "Science and Invention" in 1920. It was in these magazines that he began including scientific fiction stories alongside science journalism—including his novel "Ralph 124C 41+" which he ran for 12 months from April 1911 in "Modern Electrics".
Hugo Gernsback started the Radio News magazine for amateur radio enthusiasts in 1919.
He died at Roosevelt Hospital (Mount Sinai West as of 2020) in New York City on August 19, 1967.
Gernsback provided a forum for the modern genre of science fiction in 1926 by founding the first magazine dedicated to it, "Amazing Stories". The inaugural April issue comprised a one-page editorial and reissues of six stories, three less than ten years old and three by Poe, Verne, and Wells. He said he became interested in the concept after reading a translation of the work of Percival Lowell as a child. His idea of a perfect science fiction story was "75 percent literature interwoven with 25 percent science". He also played an important role in starting science fiction fandom, by organizing the Science Fiction League and by publishing the addresses of people who wrote letters to his magazines. Fans began to organize, and became aware of themselves as a movement, a social force; this was probably decisive for the subsequent history of the genre. He also created the term "science fiction", though he preferred the term "scientifiction".
In 1929, he lost ownership of his first magazines after a bankruptcy lawsuit. There is some debate about whether this process was genuine, manipulated by publisher Bernarr Macfadden, or was a Gernsback scheme to begin another company. After losing control of "Amazing Stories", Gernsback founded two new science fiction magazines, "Science Wonder Stories" and "Air Wonder Stories". A year later, due to Depression-era financial troubles, the two were merged into "Wonder Stories", which Gernsback continued to publish until 1936, when it was sold to Thrilling Publications and renamed "Thrilling Wonder Stories". Gernsback returned in 1952–53 with "Science-Fiction Plus".
Gernsback was noted for sharp (and sometimes shady) business practices, and for paying his writers extremely low fees or not paying them at all. H. P. Lovecraft and Clark Ashton Smith referred to him as "Hugo the Rat".
As Barry Malzberg has said:
Gernsback's venality and corruption, his sleaziness and his utter disregard for the financial rights of authors, have been well documented and discussed in critical and fan literature. That the founder of genre science fiction who gave his name to the field's most prestigious award and who was the Guest of Honor at the 1952 Worldcon was pretty much a crook (and a contemptuous crook who stiffed his writers but paid himself $100K a year as President of Gernsback Publications) has been clearly established.
Jack Williamson, who had to hire an attorney associated with the American Fiction Guild to force Gernsback to pay him, summed up his importance for the genre:
At any rate, his main influence in the field was simply to start Amazing and Wonder Stories and get SF out to the public newsstands—and to name the genre he had earlier called "scientifiction."
Frederik Pohl said in 1965 that Gernsback's "Amazing Stories" published "the kind of stories Gernsback himself used to write: a sort of animated catalogue of gadgets". Gernsback's fiction includes the novel "Ralph 124C 41+"; the title is a pun on the phrase "one to foresee for many" ("one plus"). Even though "Ralph 124C 41+" has been described as pioneering many ideas and themes found in later SF work, it has often been neglected due to what most critics deem poor artistic quality. Author Brian Aldiss called the story a "tawdry illiterate tale" and a "sorry concoction", while author and editor Lester del Rey called it "simply dreadful." While most other modern critics have little positive to say about the story's writing, "Ralph 124C 41+" is considered by science fiction critic Gary Westfahl as "essential text for all studies of science fiction."
Gernsback's second novel, "Baron Münchausen's Scientific Adventures", was serialized in "Amazing Stories" in 1928.
Gernsback's third (and final) novel, "Ultimate World", written c. 1958, was not published until 1971. Lester del Rey described it simply as "a bad book", marked more by routine social commentary than by scientific insight or extrapolation. James Blish, in a caustic review, described the novel as "incompetent, pedantic, graceless, incredible, unpopulated and boring" and concluded that its publication "accomplishes nothing but the placing of a blot on the memory of a justly honored man."
Gernsback combined his fiction and science into "Everyday Science and Mechanics" magazine, serving as the editor in the 1930s.
The Hugo Awards or "Hugos" are the annual achievement awards presented at the World Science Fiction Convention, selected in a process that ends with vote by current Convention members. They originated and acquired the "Hugo" nickname during the 1950s and were formally defined as a convention responsibility under the name "Science Fiction Achievement Awards" early in the 1960s. The nickname soon became almost universal and its use legally protected; "Hugo Award(s)" replaced the longer name in all official uses after the 1991 cycle.
History of computing hardware
The history of computing hardware covers the developments from early simple devices to aid calculation to modern day computers. Before the 20th century, most calculations were done by humans. Early mechanical tools to help humans with digital calculations, like the abacus, were referred to as "calculating machines" or "calculators" (and other proprietary names). The machine operator was called the "computer".
The first aids to computation were purely mechanical devices which required the operator to set up the initial values of an elementary arithmetic operation, then manipulate the device to obtain the result. Later, computers represented numbers in a continuous form (e.g. distance along a scale, rotation of a shaft, or a voltage). Numbers could also be represented in the form of digits, automatically manipulated by a mechanism. Although this approach generally required more complex mechanisms, it greatly increased the precision of results. The development of transistor technology and then the integrated circuit chip led to a series of breakthroughs, starting with transistor computers and then integrated circuit computers, causing digital computers to largely replace analog computers. Metal-oxide-semiconductor (MOS) large-scale integration (LSI) then enabled semiconductor memory and the microprocessor, leading to another key breakthrough, the miniaturized personal computer (PC), in the 1970s. The cost of computers gradually became so low that personal computers by the 1990s, and then mobile computers (smartphones and tablets) in the 2000s, became ubiquitous.
Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick. The Lebombo bone from the mountains between Swaziland and South Africa may be the oldest known mathematical artifact. It dates from 35,000 BCE and consists of 29 distinct notches that were deliberately cut into a baboon's fibula. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example. The abacus was used early on for arithmetic tasks. What we now call the Roman abacus was used in Babylonia as early as c. 2700–2300 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.
Several analog computers were constructed in ancient and medieval times to perform astronomical calculations. These included the astrolabe and Antikythera mechanism from the Hellenistic world (c. 150–100 BC). In Roman Egypt, Hero of Alexandria (c. 10–70 AD) made mechanical devices including automata and a programmable cart. Other early mechanical devices used to perform one or another type of calculations include the planisphere and other mechanical computing devices invented by Abu Rayhan al-Biruni (c. AD 1000); the equatorium and universal latitude-independent astrolabe by Abū Ishāq Ibrāhīm al-Zarqālī (c. AD 1015); the astronomical analog computers of other medieval Muslim astronomers and engineers; and the astronomical clock tower of Su Song (1094) during the Song dynasty. The castle clock, a hydropowered mechanical astronomical clock invented by Ismail al-Jazari in 1206, was the first programmable analog computer. Ramon Llull invented the Lullian Circle: a notional machine for calculating answers to philosophical questions (in this case, to do with Christianity) via logical combinatorics. This idea was taken up by Leibniz centuries later, and is thus one of the founding elements in computing and information science.
Scottish mathematician and physicist John Napier discovered that the multiplication and division of numbers could be performed by the addition and subtraction, respectively, of the logarithms of those numbers. While producing the first logarithmic tables, Napier needed to perform many tedious multiplications. It was at this point that he designed his 'Napier's bones', an abacus-like device that greatly simplified calculations that involved multiplication and division.
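Napier's reduction of multiplication and division to addition and subtraction rests on the identities log(ab) = log a + log b and log(a/b) = log a − log b. A minimal sketch in Python (using the standard library's `math` module; the function names are illustrative):

```python
import math

def log_multiply(a, b):
    """Multiply two positive numbers using only addition of their
    logarithms, mirroring Napier's insight: log(ab) = log(a) + log(b)."""
    return math.exp(math.log(a) + math.log(b))

def log_divide(a, b):
    """Divide via subtraction of logarithms: log(a/b) = log(a) - log(b)."""
    return math.exp(math.log(a) - math.log(b))

# A user of Napier's tables would look up the two logs, add them by
# hand, then look up the antilogarithm of the sum.
print(round(log_multiply(37, 91), 6))  # ≈ 3367.0, within floating-point error
print(round(log_divide(3367, 91), 6))  # ≈ 37.0
```

The slide rule described below mechanises exactly this: its scales are marked logarithmically, so sliding one scale along another performs the addition physically.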
Since real numbers can be represented as distances or intervals on a line, the slide rule was invented in the 1620s, shortly after Napier's work, to allow multiplication and division operations to be carried out significantly faster than was previously possible. Edmund Gunter built a calculating device with a single logarithmic scale at the University of Oxford. His device greatly simplified arithmetic calculations, including multiplication and division. William Oughtred greatly improved this in 1630 with his circular slide rule. He followed this up with the modern slide rule in 1632, essentially a combination of two Gunter rules, held together by hand. Slide rules were used by generations of engineers and other mathematically involved professional workers, until the invention of the pocket calculator.
Wilhelm Schickard, a German polymath, designed a calculating machine in 1623 which combined a mechanised form of Napier's rods with the world's first mechanical adding machine built into the base. Because it made use of a single-tooth gear there were circumstances in which its carry mechanism would jam. A fire destroyed at least one of the machines in 1624 and it is believed Schickard was too disheartened to build another.
In 1642, while still a teenager, Blaise Pascal started some pioneering work on calculating machines and after three years of effort and 50 prototypes he invented a mechanical calculator. He built twenty of these machines (called Pascal's calculator or Pascaline) in the following ten years. Nine Pascalines have survived, most of which are on display in European museums. A continuing debate exists over whether Schickard or Pascal should be regarded as the "inventor of the mechanical calculator" and the range of issues to be considered is discussed elsewhere.
Gottfried Wilhelm von Leibniz invented the stepped reckoner and his famous stepped drum mechanism around 1672. He attempted to create a machine that could be used not only for addition and subtraction but would utilise a moveable carriage to enable long multiplication and division. Leibniz once said "It is unworthy of excellent men to lose hours like slaves in the labour of calculation which could safely be relegated to anyone else if machines were used." However, Leibniz did not incorporate a fully successful carry mechanism. Leibniz also described the binary numeral system, a central ingredient of all modern computers. However, up to the 1940s, many subsequent designs (including Charles Babbage's machines of 1822 and even ENIAC of 1945) were based on the decimal system.
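The binary numeral system Leibniz described represents every number using only the digits 0 and 1, by repeated division by two. A minimal illustrative sketch (the function name is an assumption, and the digit-string output follows modern convention, not Leibniz's own notation):

```python
def to_binary(n):
    """Represent a non-negative integer as a binary digit string by
    repeatedly taking the remainder and quotient of division by 2."""
    if n == 0:
        return "0"
    bits = []
    while n:
        bits.append(str(n % 2))  # least significant bit first
        n //= 2
    return "".join(reversed(bits))

# A decimal machine such as ENIAC stored 1945 as the digits 1, 9, 4, 5;
# a binary machine stores the eleven bits below.
print(to_binary(1945))  # 11110011001
```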
Around 1820, Charles Xavier Thomas de Colmar created what would over the rest of the century become the first successful, mass-produced mechanical calculator, the Thomas Arithmometer. It could be used to add and subtract, and with a moveable carriage the operator could also multiply, and divide by a process of long multiplication and long division. It utilised a stepped drum similar in conception to that invented by Leibniz. Mechanical calculators remained in use until the 1970s.
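The operator's procedure on such a machine — repeated addition, with the carriage shifted one decimal place per digit of the multiplier — can be sketched as follows. This illustrates the long-multiplication process, not the stepped-drum mechanism itself:

```python
def arithmometer_multiply(a, b):
    """Multiply by repeated addition with a shifting 'carriage':
    for each decimal digit of b, add a that many times at the
    corresponding power of ten -- a sketch of the operator's
    procedure on a Thomas-style machine."""
    result = 0
    shift = 0                            # carriage position (power of ten)
    while b > 0:
        digit = b % 10
        for _ in range(digit):           # one turn of the crank per unit
            result += a * 10 ** shift
        b //= 10
        shift += 1                       # move the carriage one place left
    return result

print(arithmometer_multiply(347, 29))    # 10063
```

Division proceeded analogously, by repeated subtraction at successive carriage positions while counting the turns.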
In 1804, French weaver Joseph Marie Jacquard developed a loom in which the pattern being woven was controlled by a paper tape constructed from punched cards. The paper tape could be changed without changing the mechanical design of the loom. This was a landmark achievement in programmability. His machine was an improvement over similar weaving looms. Punched cards were preceded by punch bands, as in the machine proposed by Basile Bouchon. These bands would inspire information recording for automatic pianos and more recently numerical control machine tools.
In the late 1880s, the American Herman Hollerith invented data storage on punched cards that could then be read by a machine. To process these punched cards, he invented the tabulator and the keypunch machine. His machines used electromechanical relays and counters. Hollerith's method was used in the 1890 United States Census. That census was processed two years faster than the prior census had been. Hollerith's company eventually became the core of IBM.
By 1920, electromechanical tabulating machines could add, subtract, and print accumulated totals. Machine functions were directed by inserting dozens of wire jumpers into removable control panels. When the United States instituted Social Security in 1935, IBM punched-card systems were used to process records of 26 million workers. Punched cards became ubiquitous in industry and government for accounting and administration.
Leslie Comrie's articles on punched-card methods and W. J. Eckert's publication of "Punched Card Methods in Scientific Computation" in 1940 described punched-card techniques sufficiently advanced to solve some differential equations or perform multiplication and division using floating point representations, all on punched cards and unit record machines. Such machines were used during World War II for cryptographic statistical processing, as well as a vast number of administrative uses. The Astronomical Computing Bureau, Columbia University, performed astronomical calculations representing the state of the art in computing.
The book "IBM and the Holocaust" by Edwin Black outlines the ways in which IBM's technology helped facilitate Nazi genocide through generation and tabulation of punch cards based on national census data. "See also: Dehomag"
By the 20th century, earlier mechanical calculators, cash registers, accounting machines, and so on were redesigned to use electric motors, with gear position as the representation for the state of a variable. The word "computer" was a job title assigned primarily to women who used these calculators to perform mathematical calculations. By the 1920s, British scientist Lewis Fry Richardson's interest in weather prediction led him to propose human computers and numerical analysis to model the weather; to this day, the most powerful computers on Earth are needed to adequately model its weather using the Navier–Stokes equations.
Companies like Friden, Marchant Calculator and Monroe made desktop mechanical calculators from the 1930s that could add, subtract, multiply and divide. In 1948, the Curta was introduced by Austrian inventor Curt Herzstark. It was a small, hand-cranked mechanical calculator and as such, a descendant of Gottfried Leibniz's Stepped Reckoner and Thomas's Arithmometer.
The world's first "all-electronic desktop" calculator was the British Bell Punch ANITA, released in 1961. It used vacuum tubes, cold-cathode tubes and Dekatrons in its circuits, with 12 cold-cathode "Nixie" tubes for its display. The ANITA sold well since it was the only electronic desktop calculator available, and was silent and quick. The tube technology was superseded in June 1963 by the U.S. manufactured Friden EC-130, which had an all-transistor design, a stack of four 13-digit numbers displayed on a CRT, and introduced reverse Polish notation (RPN).
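Reverse Polish notation, which the EC-130 introduced to the desktop-calculator market, dispenses with parentheses: operands are pushed onto a stack and each operator consumes the topmost entries. A minimal evaluator sketch in Python (an illustration of the notation, not an emulation of the EC-130's four-register stack):

```python
def eval_rpn(tokens):
    """Evaluate a reverse Polish notation expression: push operands
    onto a stack; each operator pops the top two entries and pushes
    the result."""
    ops = {
        "+": lambda x, y: x + y,
        "-": lambda x, y: x - y,
        "*": lambda x, y: x * y,
        "/": lambda x, y: x / y,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            y = stack.pop()
            x = stack.pop()
            stack.append(ops[tok](x, y))
        else:
            stack.append(float(tok))
    return stack.pop()

# (3 + 4) * 2 entered in RPN as: 3 4 + 2 *
print(eval_rpn("3 4 + 2 *".split()))  # 14.0
```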
Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his revolutionary difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design, an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. It employed ordinary base-10 fixed-point arithmetic.
The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
There was to be a store, or memory, capable of holding 1,000 numbers of 40 decimal digits each (ca. 16.7 kB). An arithmetical unit, called the "mill", would be able to perform all four arithmetic operations, plus comparisons and optionally square roots. Initially it was conceived as a difference engine curved back upon itself, in a generally circular layout, with the long store exiting off to one side. (Later drawings depict a regularized grid layout.) Like the central processing unit (CPU) in a modern computer, the mill would rely on its own internal procedures, roughly equivalent to microcode in modern CPUs, to be stored in the form of pegs inserted into rotating drums called "barrels", to carry out some of the more complex instructions the user's program might specify.
The programming language to be employed by users was akin to modern day assembly languages. Loops and conditional branching were possible, and so the language as conceived would have been Turing-complete as later defined by Alan Turing. Three different types of punch cards were used: one for arithmetical operations, one for numerical constants, and one for load and store operations, transferring numbers from the store to the arithmetical unit or back. There were three separate readers for the three types of cards.
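The division of labour among the three card types — numerical constants, load/store transfers between the store and the mill, and arithmetical operations — can be illustrated with a toy interpreter. The encoding below is entirely hypothetical; only the three-way split of card functions comes from the design described above:

```python
def run_cards(cards):
    """Toy interpreter for three card types: number cards supply
    constants, variable cards move values between the store (memory)
    and the mill (arithmetic unit, modelled here as a stack), and
    operation cards perform arithmetic in the mill."""
    store = {}   # the "store": numbered variables held in memory
    mill = []    # the "mill": working values awaiting an operation
    for kind, arg in cards:
        if kind == "number":      # numeric-constant card
            mill.append(arg)
        elif kind == "load":      # variable card: store -> mill
            mill.append(store[arg])
        elif kind == "save":      # variable card: mill -> store
            store[arg] = mill.pop()
        elif kind == "op":        # operation card
            b, a = mill.pop(), mill.pop()
            mill.append({"add": a + b, "sub": a - b,
                         "mul": a * b, "div": a / b}[arg])
    return store

# Compute v2 = (5 + 3) * 4 and place the result in the store.
program = [("number", 5), ("number", 3), ("op", "add"),
           ("number", 4), ("op", "mul"), ("save", "v2")]
print(run_cards(program))  # {'v2': 32}
```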
The machine was about a century ahead of its time. However, the project was slowed by various problems including disputes with the chief machinist building parts for it. All the parts for his machine had to be made by hand—this was a major problem for a machine with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to difficulties not only of politics and financing, but also to his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Ada Lovelace translated and added notes to the ""Sketch of the Analytical Engine"" by Luigi Federico Menabrea. This appears to be the first published description of programming, so Ada Lovelace is widely regarded as the first computer programmer.
Following Babbage, although unaware of his earlier work, was Percy Ludgate, a clerk to a corn merchant in Dublin, Ireland. He independently designed a programmable mechanical computer, which he described in a work that was published in 1909.
In the first half of the 20th century, analog computers were considered by many to be the future of computing. These devices used the continuously changeable aspects of physical phenomena such as electrical, mechanical, or hydraulic quantities to model the problem being solved, in contrast to digital computers that represented varying quantities symbolically, as their numerical values change. As an analog computer does not use discrete values, but rather continuous values, processes cannot be reliably repeated with exact equivalence, as they can with Turing machines.
The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson, later Lord Kelvin, in 1872. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location and was of great utility to navigation in shallow waters. His device was the foundation for further developments in analog computing.
The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the brother of the more famous Lord Kelvin. He explored the possible construction of such calculators, but was stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output.
An important advance in analog computing was the development of the first fire-control systems for long range ship gunlaying. When gunnery ranges increased dramatically in the late 19th century it was no longer a simple matter of calculating the proper aim point, given the flight times of the shells. Various spotters on board the ship would relay distance measures and observations to a central plotting station. There the fire direction teams fed in the location, speed and direction of the ship and its target, as well as various adjustments for Coriolis effect, weather effects on the air, and other adjustments; the computer would then output a firing solution, which would be fed to the turrets for laying. In 1912, British engineer Arthur Pollen developed the first electrically powered mechanical analogue computer (called at the time the Argo Clock). It was used by the Imperial Russian Navy in World War I. The alternative Dreyer Table fire control system was fitted to British capital ships by mid-1916.
Mechanical devices were also used to aid the accuracy of aerial bombing. Drift Sight was the first such aid, developed by Harry Wimperis in 1916 for the Royal Naval Air Service; it measured the wind speed from the air, and used that measurement to calculate the wind's effects on the trajectory of the bombs. The system was later improved with the Course Setting Bomb Sight, and reached a climax with World War II bomb sights, Mark XIV bomb sight (RAF Bomber Command) and the Norden (United States Army Air Forces).
The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927, which built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious; the most powerful was constructed at the University of Pennsylvania's Moore School of Electrical Engineering, where the ENIAC was built.
A fully electronic analog computer was built by Helmut Hölzer in 1942 at Peenemünde Army Research Center.
The success of digital electronic computers had spelled the end for most analog computing machines by the 1950s, but hybrid analog computers, controlled by digital electronics, remained in substantial use during the 1950s and 1960s, and later in some specialized applications.
The principle of the modern computer was first described by computer scientist Alan Turing, who set out the idea in his seminal 1936 paper, "On Computable Numbers". Turing reformulated Kurt Gödel's 1931 results on the limits of proof and computation, replacing Gödel's universal arithmetic-based formal language with the formal and simple hypothetical devices that became known as Turing machines. He proved that some such machine would be capable of performing any conceivable mathematical computation if it were representable as an algorithm. He went on to prove that there was no solution to the "Entscheidungsproblem" by first showing that the halting problem for Turing machines is undecidable: in general, it is not possible to decide algorithmically whether a given Turing machine will ever halt.
He also introduced the notion of a "universal machine" (now known as a universal Turing machine), with the idea that such a machine could perform the tasks of any other machine, or in other words, it is provably capable of computing anything that is computable by executing a program stored on tape, allowing the machine to be programmable. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.
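The universal-machine idea can be illustrated with a small interpreter: the transition table below plays the role of a program stored on tape, while the interpreter itself stays fixed. The example "program" (a bit-flipper) and all names are illustrative, not Turing's original encoding:

```python
from collections import defaultdict

def run_tm(program, tape, state="start", blank="_", max_steps=10_000):
    """Execute a transition table: (state, symbol) -> (state, write, move)."""
    cells = defaultdict(lambda: blank, enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, write, move = program[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in range(min(cells), max(cells) + 1)).strip(blank)

# An illustrative program: scan right, flipping bits, halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "0110"))  # -> 1001
```

Swapping in a different transition table changes what the machine computes without changing the interpreter, which is the sense in which one machine can perform the tasks of any other.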
The era of modern computing began with a flurry of development before and during World War II. Most digital computers built in this period were electromechanical – electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
The Z2 was one of the earliest examples of an electromechanical relay computer, and was created by German engineer Konrad Zuse in 1940. It was an improvement on his earlier Z1; although it used the same mechanical memory, it replaced the arithmetic and control logic with electrical relay circuits.
In the same year, electro-mechanical devices called bombes were built by British cryptologists to help decipher German Enigma-machine-encrypted secret messages during World War II. The bombe's initial design was created in 1939 at the UK Government Code and Cypher School (GC&CS) at Bletchley Park by Alan Turing, with an important refinement devised in 1940 by Gordon Welchman. The engineering design and construction was the work of Harold Keen of the British Tabulating Machine Company. It was a substantial development from a device that had been designed in 1938 by Polish Cipher Bureau cryptologist Marian Rejewski, and known as the "cryptologic bomb" (Polish: "bomba kryptologiczna").
In 1941, Zuse followed up his earlier machine with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code and data were stored on punched film. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating point numbers. Replacement of the hard-to-implement decimal system (used in Charles Babbage's earlier design) by the simpler binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was probably a Turing-complete machine. In two 1936 patent applications, Zuse also anticipated that machine instructions could be stored in the same storage used for data—the key insight of what became known as the von Neumann architecture, first implemented in 1948 in America in the electromechanical IBM SSEC and in Britain in the fully electronic Manchester Baby.
Zuse suffered setbacks during World War II when some of his machines were destroyed in the course of Allied bombing campaigns. Apparently his work remained largely unknown to engineers in the UK and US until much later, although at least IBM was aware of it as it financed his post-war startup company in 1946 in return for an option on Zuse's patents.
In 1944, the Harvard Mark I was constructed at IBM's Endicott laboratories; it was a similar general purpose electro-mechanical computer to the Z3, but was not quite Turing-complete.
The term digital was first suggested by George Robert Stibitz and refers to where a signal, such as a voltage, is not used to directly represent a value (as it would be in an analog computer), but to encode it. In November 1937, George Stibitz, then working at Bell Labs (1930–1941), completed a relay-based calculator he later dubbed the "Model K" (for "kitchen table", on which he had assembled it), which became the first binary adder. Typically signals have two states – low (usually representing 0) and high (usually representing 1), but sometimes three-valued logic is used, especially in high-density memory. Modern computers generally use binary logic, but many early machines were decimal computers. In these machines, the basic unit of data was the decimal digit, encoded in one of several schemes, including binary-coded decimal or BCD, bi-quinary, excess-3, and two-out-of-five code.
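The digit encodings mentioned above can be sketched briefly. The BCD and excess-3 tables are standard; the two-out-of-five assignment shown is just one possible table (historical machines used specific weighted variants):

```python
def bcd(d):
    """Binary-coded decimal: the digit's plain 4-bit binary value."""
    return format(d, "04b")

def excess_3(d):
    """Excess-3: BCD of (digit + 3); no code word is all zeros or all ones."""
    return format(d + 3, "04b")

TWO_OUT_OF_FIVE = [  # one possible assignment: every code word has exactly two 1s
    "11000", "00011", "00101", "00110", "01001",
    "01010", "01100", "10001", "10010", "10100",
]

def two_out_of_five(d):
    return TWO_OUT_OF_FIVE[d]

print(bcd(9))        # -> 1001
print(excess_3(0))   # -> 0011
print(all(two_out_of_five(d).count("1") == 2 for d in range(10)))  # -> True
```

The fixed weight of a two-out-of-five word (always two 1 bits) is what let such machines detect single-bit errors in a digit.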
The mathematical basis of digital computing is Boolean algebra, developed by the British mathematician George Boole in his work "The Laws of Thought", published in 1854. His Boolean algebra was further refined in the 1860s by William Jevons and Charles Sanders Peirce, and was first presented systematically by Ernst Schröder and A. N. Whitehead. In 1879 Gottlob Frege developed the formal approach to logic and proposed the first logic language for logical equations.
In the 1930s, working independently, American electronic engineer Claude Shannon and Soviet logician Victor Shestakov both showed a one-to-one correspondence between the concepts of Boolean logic and certain electrical circuits, now called logic gates, which are now ubiquitous in digital computers. They showed that electronic relays and switches can realize the expressions of Boolean algebra. Shannon's thesis essentially founded practical digital circuit design.
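The correspondence can be made concrete with a minimal sketch: Boolean expressions written as gate functions, combined into a half adder so that logic computes arithmetic. The function names are illustrative:

```python
def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b

def XOR(a, b):
    # Built from the gates above: (a OR b) AND NOT (a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """Add two bits, returning (sum, carry): Boolean algebra doing arithmetic."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))  # e.g. 1 + 1 = (0, 1)
```

In a relay or vacuum-tube machine, each of these functions is a physical circuit rather than a line of code.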
Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. Machines such as the Z3, the Atanasoff–Berry Computer, the Colossus computers, and the ENIAC were built by hand, using circuits containing relays or valves (vacuum tubes), and often used punched cards or punched paper tape for input and as the main (non-volatile) storage medium.
The engineer Tommy Flowers joined the telecommunications branch of the General Post Office in 1926. While working at the research station in Dollis Hill in the 1930s, he began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation 5 years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes.
In the US, in 1940 Arthur Dickinson (IBM) invented the first digital electronic computer. This calculating device was fully electronic – control, calculations and output (the first electronic display). John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed the Atanasoff–Berry Computer (ABC) in 1942, the first binary electronic digital calculating device. This design was semi-electronic (electro-mechanical control and electronic calculations), and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. However, its paper card writer/reader was unreliable and the regenerative drum contact system was mechanical. The machine's special-purpose nature and lack of changeable, stored program distinguish it from modern computers.
Computers whose logic was primarily built using vacuum tubes are now known as first generation computers.
During World War II, British codebreakers at Bletchley Park, north of London, achieved a number of successes at breaking encrypted enemy military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes. Women often operated these bombe machines. They ruled out possible Enigma settings by performing chains of logical deductions implemented electrically. Most possibilities led to a contradiction, and the few remaining could be tested by hand.
The Germans also developed a series of teleprinter encryption systems, quite different from Enigma. The Lorenz SZ 40/42 machine was used for high-level Army communications, code-named "Tunny" by the British. The first intercepts of Lorenz messages began in 1941. As part of an attack on Tunny, Max Newman and his colleagues developed the Heath Robinson, a fixed-function machine to aid in code breaking. Tommy Flowers, a senior engineer at the Post Office Research Station was recommended to Max Newman by Alan Turing and spent eleven months from early February 1943 designing and building the more flexible Colossus computer (which superseded the Heath Robinson). After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February.
Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Data input to Colossus was by photoelectric reading of a paper tape transcription of the enciphered intercepted message. This was arranged in a continuous loop so that it could be read and re-read multiple times – there being no internal store for the data. The reading mechanism ran at 5,000 characters per second. Colossus Mark 1 contained 1500 thermionic valves (tubes), but Mark 2, with 2400 valves and five processors in parallel, was both 5 times faster and simpler to operate than Mark 1, greatly speeding the decoding process. Mark 2 was designed while Mark 1 was being constructed. Allen Coombs took over leadership of the Colossus Mark 2 project when Tommy Flowers moved on to other projects. The first Mark 2 Colossus became operational on 1 June 1944, just in time for the Allied Invasion of Normandy on D-Day.
Most of the use of Colossus was in determining the start positions of the Tunny rotors for a message, which was called "wheel setting". Colossus included the first-ever use of shift registers and systolic arrays, enabling five simultaneous tests, each involving up to 100 Boolean calculations. This enabled five different possible start positions to be examined for one transit of the paper tape. As well as wheel setting some later Colossi included mechanisms intended to help determine pin patterns known as "wheel breaking". Both models were programmable using switches and plug panels in a way their predecessors had not been. Ten Mk 2 Colossi were operational by the end of the war.
Without the use of these machines, the Allies would have been deprived of the very valuable intelligence that was obtained from reading the vast quantity of enciphered high-level telegraphic messages between the German High Command (OKW) and their army commands throughout occupied Europe. Details of their existence, design, and use were kept secret well into the 1970s. Winston Churchill personally issued an order for their destruction into pieces no larger than a man's hand, to keep secret that the British were capable of cracking Lorenz SZ cyphers (from German rotor stream cipher machines) during the oncoming Cold War. Two of the machines were transferred to the newly formed GCHQ and the others were destroyed. As a result, the machines were not included in many histories of computing. A reconstructed working copy of one of the Colossus machines is now on display at Bletchley Park.
The US-built ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the US. Although the ENIAC was similar to the Colossus it was much faster and more flexible. It was unambiguously a Turing-complete device and could compute any problem that would fit into its memory. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were women who had been trained as mathematicians.
It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High-speed memory was limited to 20 words (equivalent to about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. One of its major engineering feats was to minimize the effects of tube burnout, which was a common problem in machine reliability at that time. The machine was in almost constant use for the next ten years.
Early computing machines were programmable in the sense that they could follow the sequence of steps they had been set up to execute, but the "program", or steps that the machine was to execute, were set up usually by changing how the wires were plugged into a patch panel or plugboard. "Reprogramming", when it was possible at all, was a laborious process, starting with engineers working out flowcharts, designing the new set up, and then the often-exacting process of physically re-wiring patch panels. Stored-program computers, by contrast, were designed to store a set of instructions (a program), in memory – typically the same memory as stored data.
The theoretical basis for the stored-program computer had been proposed by Alan Turing in his 1936 paper. In 1945 Turing joined the National Physical Laboratory and began his work on developing an electronic stored-program digital computer. His 1945 report ‘Proposed Electronic Calculator’ was the first specification for such a device.
Meanwhile, John von Neumann at the Moore School of Electrical Engineering, University of Pennsylvania, circulated his "First Draft of a Report on the EDVAC" in 1945. Although substantially similar to Turing's design and containing comparatively little engineering detail, the computer architecture it outlined became known as the "von Neumann architecture". Turing presented a more detailed paper to the National Physical Laboratory (NPL) Executive Committee in 1946, giving the first reasonably complete design of a stored-program computer, a device he called the Automatic Computing Engine (ACE). However, the better-known EDVAC design of John von Neumann, who knew of Turing's theoretical work, received more publicity, despite its incomplete nature and questionable lack of attribution of the sources of some of the ideas.
Turing thought that the speed and the size of computer memory were crucial elements, so he proposed a high-speed memory of what would today be called 25 KB, accessed at a speed of 1 MHz. The ACE implemented subroutine calls, whereas the EDVAC did not, and the ACE also used "Abbreviated Computer Instructions," an early form of programming language.
The Manchester Baby was the world's first electronic stored-program computer. It was built at the Victoria University of Manchester by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
The machine was not intended to be a practical computer but was instead designed as a testbed for the Williams tube, the first random-access digital storage device. Invented by Freddie Williams and Tom Kilburn at the University of Manchester in 1946 and 1947, it was a cathode ray tube that used an effect called secondary emission to temporarily store electronic binary data, and was used successfully in several early computers.
Although the computer was small and primitive by later standards, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project was initiated at the university to develop it into a more usable computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer.
The Baby had a 32-bit word length and a memory of 32 words. As it was designed to be the simplest possible stored-program computer, the only arithmetic operations implemented in hardware were subtraction and negation; other arithmetic operations were implemented in software. The first of three programs written for the machine found the highest proper divisor of 2^18 (262,144), a calculation that was known would take a long time to run—and so prove the computer's reliability—by testing every integer from 2^18 − 1 downwards, as division was implemented by repeated subtraction of the divisor. The program consisted of 17 instructions and ran for 52 minutes before reaching the correct answer of 131,072, after the Baby had performed 3.5 million operations (for an effective CPU speed of 1.1 kIPS).
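The method can be sketched in modern terms. This is an illustrative reconstruction of the approach (downward search with division by repeated subtraction), not the Baby's actual 17 instructions:

```python
def divides_by_subtraction(n, d):
    """True if d divides n exactly, using only subtraction, as the Baby had to."""
    while n > 0:
        n -= d
    return n == 0

def highest_proper_divisor(n):
    """Search downward from n - 1, as the Baby's first program did."""
    candidate = n - 1
    while not divides_by_subtraction(n, candidate):
        candidate -= 1
    return candidate

print(highest_proper_divisor(2**18))  # -> 131072
```

The deliberately wasteful search (every candidate from 2^18 − 1 down) is what made the run long enough to exercise the machine's reliability.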
The Experimental machine led on to the development of the Manchester Mark 1 at the University of Manchester. Work began in August 1948, and the first version was operational by April 1949; a program written to search for Mersenne primes ran error-free for nine hours on the night of 16/17 June 1949.
The machine's successful operation was widely reported in the British press, which used the phrase "electronic brain" in describing it to their readers.
The computer is especially historically significant because of its pioneering inclusion of index registers, an innovation which made it easier for a program to read sequentially through an array of words in memory. Thirty-four patents resulted from the machine's development, and many of the ideas behind its design were incorporated in subsequent commercial products such as the IBM 701 and 702 as well as the Ferranti Mark 1. The chief designers, Frederic C. Williams and Tom Kilburn, concluded from their experiences with the Mark 1 that computers would be used more in scientific roles than in pure mathematics. In 1951 they started development work on Meg, the Mark 1's successor, which would include a floating point unit.
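What an index register buys can be sketched with a toy model (addresses and values here are illustrative): a single fixed load instruction plus an index register steps through an array, where machines without one had to rewrite the load instruction's own address field on every pass:

```python
# A tiny "memory" holding an array of five words starting at address 100.
memory = {100 + i: v for i, v in enumerate([3, 1, 4, 1, 5])}

def sum_with_index_register(base, n):
    total, index = 0, 0                # 'index' stands in for the index register
    while index < n:
        total += memory[base + index]  # one fixed instruction, indexed access
        index += 1
    return total

print(sum_with_index_register(100, 5))  # -> 14
```

Without indexed addressing, the equivalent loop is self-modifying code, which is harder to write and impossible to keep in read-only storage.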
The other contender for being the first recognizably modern digital stored-program computer was the EDSAC, designed and constructed by Maurice Wilkes and his team at the University of Cambridge Mathematical Laboratory in England in 1949. The machine was inspired by John von Neumann's seminal "First Draft of a Report on the EDVAC" and was one of the first usefully operational electronic digital stored-program computers.
EDSAC ran its first programs on 6 May 1949, when it calculated a table of squares and a list of prime numbers. The EDSAC also served as the basis for the first commercially applied computer, the LEO I, used by food manufacturing company J. Lyons & Co. Ltd. EDSAC 1 was finally shut down on 11 July 1958, having been superseded by EDSAC 2, which stayed in use until 1965.
ENIAC inventors John Mauchly and J. Presper Eckert proposed the EDVAC's construction in August 1944, and design work for the EDVAC commenced at the University of Pennsylvania's Moore School of Electrical Engineering, before the ENIAC was fully operational. The design implemented a number of important architectural and logical improvements conceived during the ENIAC's construction, and a high-speed serial-access memory. However, Eckert and Mauchly left the project and its construction floundered.
It was finally delivered to the U.S. Army's Ballistics Research Laboratory at the Aberdeen Proving Ground in August 1949, but due to a number of problems, the computer only began operation in 1951, and then only on a limited basis.
The first commercial computer was the Ferranti Mark 1, built by Ferranti and delivered to the University of Manchester in February 1951. It was based on the Manchester Mark 1. The main improvements over the Manchester Mark 1 were in the size of the primary storage (using random access Williams tubes), secondary storage (using a magnetic drum), a faster multiplier, and additional instructions. The basic cycle time was 1.2 milliseconds, and a multiplication could be completed in about 2.16 milliseconds. The multiplier used almost a quarter of the machine's 4,050 vacuum tubes (valves). A second machine was purchased by the University of Toronto, before the design was revised into the Mark 1 Star. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam.
In October 1947, the directors of J. Lyons & Company, a British catering company famous for its teashops but with strong interests in new office management techniques, decided to take an active role in promoting the commercial development of computers. The LEO I computer became operational in April 1951 and ran the world's first regular routine office computer job. On 17 November 1951, the J. Lyons company began weekly operation of a bakery valuations job on the LEO (Lyons Electronic Office). This was the first business to go live on a stored program computer.
In June 1951, the UNIVAC I (Universal Automatic Computer) was delivered to the U.S. Census Bureau. Remington Rand eventually sold 46 machines at more than US$1 million each. UNIVAC was the first "mass produced" computer. It used 5,200 vacuum tubes and consumed 125 kW of power. Its primary storage was serial-access mercury delay lines capable of storing 1,000 words of 11 decimal digits plus sign (72-bit words).
IBM introduced a smaller, more affordable computer in 1954 that proved very popular. The IBM 650 weighed over 900 kg, the attached power supply weighed around 1350 kg and both were held in separate cabinets of roughly 1.5 meters by 0.9 meters by 1.8 meters. It cost US$500,000 or could be leased for US$3,500 a month. Its drum memory was originally 2,000 ten-digit words, later expanded to 4,000 words. Memory limitations such as this were to dominate programming for decades afterward. The program instructions were fetched from the spinning drum as the code ran. Efficient execution using drum memory was provided by a combination of hardware architecture: the instruction format included the address of the next instruction; and software: the Symbolic Optimal Assembly Program, SOAP, assigned instructions to the optimal addresses (to the extent possible by static analysis of the source program). Thus many instructions were, when needed, located in the next row of the drum to be read and additional wait time for drum rotation was not required.
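SOAP's placement trick can be sketched with a toy drum model (the timing values are illustrative, not the 650's actual instruction timings): if an instruction read at one drum position finishes a known number of word-times later, placing its successor at the position then arriving under the head removes the rotational wait:

```python
DRUM = 50  # words per revolution; the head passes one word per word-time

def wait_time(read_pos, next_pos, exec_time):
    """Word-times spent waiting for next_pos to come under the head after an
    instruction read at read_pos finishes executing."""
    ready_at = (read_pos + exec_time) % DRUM
    return (next_pos - ready_at) % DRUM

exec_time = 4
p = 10
print(wait_time(p, (p + exec_time) % DRUM, exec_time))  # optimal placement -> 0
print(wait_time(p, p + 1, exec_time))                   # naive sequential layout -> 47
```

The naive layout pays nearly a full revolution per instruction, which is why the optimizing assembler mattered so much on drum machines.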
In 1951, British scientist Maurice Wilkes developed the concept of microprogramming from the realisation that the central processing unit of a computer could be controlled by a miniature, highly specialised computer program in high-speed ROM. Microprogramming allows the base instruction set to be defined or extended by built-in programs (now called firmware or microcode). This concept greatly simplified CPU development. He first described this at the University of Manchester Computer Inaugural Conference in 1951, then published in expanded form in "IEEE Spectrum" in 1955.
It was widely used in the CPUs and floating-point units of mainframe and other computers; it was implemented for the first time in EDSAC 2, which also used multiple identical "bit slices" to simplify design. Interchangeable, replaceable tube assemblies were used for each bit of the processor.
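Wilkes's idea can be sketched with a toy model in which each machine opcode expands into a sequence of micro-operations held in a control-store table; everything below (opcodes, micro-ops, register names) is illustrative rather than any real machine's microcode:

```python
MICROCODE = {  # control store (ROM): machine opcode -> micro-op sequence
    "INC": ["acc_to_alu", "alu_add_one", "alu_to_acc"],
    "DEC": ["acc_to_alu", "alu_sub_one", "alu_to_acc"],
}

def run_program(program):
    state = {"acc": 0, "alu": 0}
    micro_ops = {  # the only operations the "hardware" implements directly
        "acc_to_alu":  lambda s: s.update(alu=s["acc"]),
        "alu_add_one": lambda s: s.update(alu=s["alu"] + 1),
        "alu_sub_one": lambda s: s.update(alu=s["alu"] - 1),
        "alu_to_acc":  lambda s: s.update(acc=s["alu"]),
    }
    for opcode in program:
        for mop in MICROCODE[opcode]:  # expand each instruction via the ROM
            micro_ops[mop](state)
    return state["acc"]

print(run_program(["INC", "INC", "INC", "DEC"]))  # -> 2
```

Changing or extending the instruction set means editing the MICROCODE table, not the hardware, which is the simplification microprogramming brought to CPU development.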
Magnetic drum memories were developed for the US Navy during WW II, with the work continuing at Engineering Research Associates (ERA) in 1946 and 1947. ERA, then a part of Univac, included a drum memory in its 1103, announced in February 1953. The first mass-produced computer, the IBM 650, also announced in 1953, had about 8.5 kilobytes of drum memory.
Magnetic core memory was patented in 1949, with its first usage demonstrated for the Whirlwind computer in August 1953. Commercialization followed quickly. Magnetic core was used in peripherals of the IBM 702 delivered in July 1955, and later in the 702 itself. The IBM 704 (1955) and the Ferranti Mercury (1957) used magnetic-core memory. It went on to dominate the field into the 1970s, when it was replaced with semiconductor memory. Magnetic core peaked in volume about 1975 and declined in usage and market share thereafter.
As late as 1980, PDP-11/45 machines using magnetic-core main memory and drums for swapping were still in use at many of the original UNIX sites.
The bipolar transistor was invented in 1947. From 1955 onward transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Silicon junction transistors were much more reliable than vacuum tubes and had a longer service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. Transistors greatly reduced computers' size, initial cost, and operating cost. Typically, second-generation computers were composed of large numbers of printed circuit boards such as the IBM Standard Modular System, each carrying one to four logic gates or flip-flops.
At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Initially the only devices available were germanium point-contact transistors, less reliable than the valves they replaced but which consumed far less power. Their first transistorised computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. The 1955 version used 200 transistors, 1,300 solid-state diodes, and had a power consumption of 150 watts. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer.
That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The design featured a 64-kilobyte magnetic drum memory store with multiple moving heads that had been designed at the National Physical Laboratory, UK. By 1953 this team had transistor circuits operating to read and write on a smaller magnetic drum from the Royal Radar Establishment. The machine used a low clock speed of only 58 kHz to avoid having to use any valves to generate the clock waveforms.
CADET used 324 point-contact transistors provided by the UK company Standard Telephones and Cables; 76 junction transistors were used for the first stage amplifiers for data read from the drum, since point-contact transistors were too noisy. From August 1956 CADET was offering a regular computing service, during which it often executed continuous computing runs of 80 hours or more. Problems with the reliability of early batches of point contact and alloyed junction transistors meant that the machine's mean time between failures was about 90 minutes, but this improved once the more reliable bipolar junction transistors became available.
The Manchester University Transistor Computer's design was adopted by the local engineering firm of Metropolitan-Vickers in their Metrovick 950, the first commercial transistor computer anywhere. Six Metrovick 950s were built, the first completed in 1956. They were successfully deployed within various departments of the company and were in use for about five years. A second generation computer, the IBM 1401, captured about one third of the world market. IBM installed more than ten thousand 1401s between 1960 and 1964.
Transistorized electronics improved not only the CPU (Central Processing Unit), but also the peripheral devices. Second generation disk data storage units were able to store tens of millions of letters and digits. Next to the fixed disk storage units, connected to the CPU via high-speed data transmission, were removable disk data storage units. A removable disk pack could be easily exchanged with another pack in a few seconds. Although the removable disks' capacity was smaller than that of fixed disks, their interchangeability guaranteed a nearly unlimited quantity of data close at hand. Magnetic tape provided archival capability for this data, at a lower cost than disk.
Many second-generation CPUs delegated peripheral device communications to a secondary processor. For example, while the communication processor controlled card reading and punching, the main CPU executed calculations and binary branch instructions. One databus would bear data between the main CPU and core memory at the CPU's fetch-execute cycle rate, and other databusses would typically serve the peripheral devices. On the PDP-1, the core memory's cycle time was 5 microseconds; consequently most arithmetic instructions took 10 microseconds (100,000 operations per second) because most operations took at least two memory cycles; one for the instruction, one for the operand data fetch.
During the second generation remote terminal units (often in the form of Teleprinters like a Friden Flexowriter) saw greatly increased use. Telephone connections provided sufficient speed for early remote terminals and allowed hundreds of kilometers separation between remote-terminals and the computing center. Eventually these stand-alone computer networks would be generalized into an interconnected "network of networks"—the Internet.
The early 1960s saw the advent of supercomputing. The Atlas was a joint development between the University of Manchester, Ferranti, and Plessey, and was first installed at Manchester University and officially commissioned in 1962 as one of the world's first supercomputers – considered to be the most powerful computer in the world at that time. It was said that whenever Atlas went offline half of the United Kingdom's computer capacity was lost. It was a second-generation machine, using discrete germanium transistors. Atlas also pioneered the Atlas Supervisor, "considered by many to be the first recognisable modern operating system".
In the US, a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance. The CDC 6600, released in 1964, is generally considered the first supercomputer. The CDC 6600 outperformed its predecessor, the IBM 7030 Stretch, by about a factor of 3. With performance of about 1 megaFLOPS, the CDC 6600 was the world's fastest computer from 1964 to 1969, when it relinquished that status to its successor, the CDC 7600.
The "third-generation" of digital electronic computers used integrated circuit (IC) chips as the basis of their logic.
The idea of an integrated circuit was conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer.
The first working integrated circuits were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. Kilby's invention was a hybrid integrated circuit (hybrid IC). It had external wire connections, which made it difficult to mass-produce.
Noyce came up with his own idea of an integrated circuit half a year after Kilby. Noyce's invention was a monolithic integrated circuit (IC) chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. The basis for Noyce's monolithic IC was Fairchild's planar process, which allowed integrated circuits to be laid out using the same principles as those of printed circuits. The planar process was developed by Noyce's colleague Jean Hoerni in early 1959, based on the silicon surface passivation and thermal oxidation processes developed by Mohamed M. Atalla at Bell Labs in the late 1950s.
Third generation (integrated circuit) computers first appeared in the early 1960s in computers developed for government purposes, and then in commercial computers beginning in the mid-1960s.
The MOSFET (metal-oxide-semiconductor field-effect transistor, or MOS transistor) was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. In addition to data processing, the MOSFET enabled the practical use of MOS transistors as memory cell storage elements, a function previously served by magnetic cores. Semiconductor memory, also known as MOS memory, was cheaper and consumed less power than magnetic-core memory. MOS random-access memory (RAM), in the form of static RAM (SRAM), was developed by John Schmidt at Fairchild Semiconductor in 1964. In 1966, Robert Dennard at the IBM Thomas J. Watson Research Center developed MOS dynamic RAM (DRAM). In 1967, Dawon Kahng and Simon Sze at Bell Labs developed the floating-gate MOSFET, the basis for MOS non-volatile memory such as EPROM, EEPROM and flash memory.
The "fourth-generation" of digital electronic computers used microprocessors as the basis of their logic. The microprocessor has origins in the MOS integrated circuit (MOS IC) chip. The MOS IC was first proposed by Mohamed M. Atalla at Bell Labs in 1960, and then fabricated by Fred Heiman and Steven Hofstein at RCA in 1962. Due to rapid MOSFET scaling, MOS IC chips rapidly increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip.
The subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor". The earliest multi-chip microprocessors were the Four-Phase Systems AL-1 in 1969 and Garrett AiResearch MP944 in 1970, developed with multiple MOS LSI chips. The first single-chip microprocessor was the Intel 4004, developed on a single PMOS LSI chip. It was designed and realized by Ted Hoff, Federico Faggin, Masatoshi Shima and Stanley Mazor at Intel, and released in 1971. Tadashi Sasaki and Masatoshi Shima at Busicom, a calculator manufacturer, had the initial insight that the CPU could be a single MOS LSI chip, supplied by Intel.
While the earliest microprocessor ICs literally contained only the processor, i.e. the central processing unit, of a computer, their progressive development naturally led to chips containing most or all of the internal electronic parts of a computer. The integrated circuit in the image on the right, for example, an Intel 8742, is an 8-bit microcontroller that includes a CPU running at 12 MHz, 128 bytes of RAM, 2048 bytes of EPROM, and I/O in the same chip.
During the 1960s there was considerable overlap between second- and third-generation technologies. IBM implemented its IBM Solid Logic Technology modules in hybrid circuits for the IBM System/360 in 1964. As late as 1975, Sperry Univac continued the manufacture of second-generation machines such as the UNIVAC 494. The Burroughs large systems such as the B5000 were stack machines, which allowed for simpler programming. These pushdown automata were also implemented in minicomputers and microprocessors later, which influenced programming language design. Minicomputers served as low-cost computer centers for industry, business and universities. It became possible to simulate analog circuits with the "simulation program with integrated circuit emphasis", or SPICE (1971), on minicomputers, one of the programs for electronic design automation (EDA).
The microprocessor led to the development of the microcomputer, small, low-cost computers that could be owned by individuals and small businesses. Microcomputers, the first of which appeared in the 1970s, became ubiquitous in the 1980s and beyond.
Which specific system is considered the first microcomputer is a matter of debate, as several unique hobbyist systems were developed based on the Intel 4004 and its successor, the Intel 8008. The first commercially available microcomputer kit, however, was the Intel 8080-based Altair 8800, which was announced in the January 1975 cover article of "Popular Electronics". The initial system was extremely limited, having only 256 bytes of DRAM and no input-output except its toggle switches and LED register display. Despite this, it was surprisingly popular, with several hundred sales in the first year, and demand rapidly outstripped supply. Several early third-party vendors such as Cromemco and Processor Technology soon began supplying additional S-100 bus hardware for the Altair 8800.
In April 1975 at the Hannover Fair, Olivetti presented the P6060, the world's first complete, pre-assembled personal computer system. The central processing unit consisted of two cards, code-named PUCE1 and PUCE2, and unlike most other personal computers was built with TTL components rather than a microprocessor. It had one or two 8" floppy disk drives, a 32-character plasma display, an 80-column graphical thermal printer, 48 Kbytes of RAM, and the BASIC language. As a complete system, this was a significant step beyond the Altair, though it never achieved the same success. It was in competition with a similar product by IBM that had an external floppy disk drive.
From 1975 to 1977, most microcomputers, such as the MOS Technology KIM-1, the Altair 8800, and some versions of the Apple I, were sold as kits for do-it-yourselfers. Pre-assembled systems did not gain much ground until 1977, with the introduction of the Apple II, the Tandy TRS-80, the first SWTPC computers, and the Commodore PET. Computing has evolved with microcomputer architectures, with features added from their larger brethren, now dominant in most market segments.
A NeXT Computer and its object-oriented development tools and libraries were used by Tim Berners-Lee and Robert Cailliau at CERN to develop the world's first web server software, CERN httpd, and also used to write the first web browser, WorldWideWeb.
Systems as complicated as computers require very high reliability. ENIAC remained in continuous operation from 1947 to 1955, a span of eight years, before being shut down. Although a vacuum tube might fail, it would be replaced without bringing down the system. By the simple strategy of never shutting down ENIAC, the failures were dramatically reduced. The vacuum-tube SAGE air-defense computers became remarkably reliable – installed in pairs, one off-line, tubes likely to fail did so when the computer was intentionally run at reduced power to find them. Hot-pluggable hard disks, like the hot-pluggable vacuum tubes of yesteryear, continue the tradition of repair during continuous operation. Semiconductor memories routinely have no errors when they operate, although operating systems like Unix have employed memory tests on start-up to detect failing hardware. Today, the requirement of reliable performance is made even more stringent when server farms are the delivery platform. Google has managed this by using fault-tolerant software to recover from hardware failures, and is even working on the concept of replacing entire server farms on-the-fly, during a service event.
In the 21st century, multi-core CPUs became commercially available. Content-addressable memory (CAM) has become inexpensive enough to be used in networking, and is frequently used for on-chip cache memory in modern microprocessors, although no computer system has yet implemented hardware CAMs for use in programming languages. Currently, CAMs (or associative arrays) in software are programming-language-specific. Semiconductor memory cell arrays are very regular structures, and manufacturers prove their processes on them; this allows price reductions on memory products. During the 1980s, CMOS logic gates developed into devices that could be made as fast as other circuit types; computer power consumption could therefore be decreased dramatically. Unlike the continuous current draw of a gate based on other logic types, a CMOS gate only draws significant current during the 'transition' between logic states, except for leakage.
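The software associative arrays mentioned above can be sketched in a few lines; in the example below a Python dict plays the role of a CAM, retrieving data by content (the key) rather than by storage address. The MAC-table framing is hypothetical, used only for illustration, not a real switch API.

```python
# Illustrative sketch of a content-addressable lookup in software: a dict
# maps content (the key) to data, instead of addressing by location.
# The names below are invented for the example, not from any real CAM API.

forwarding_table = {}                  # content (MAC address) -> data (port)

def learn(mac, port):
    """Associate a MAC address (the 'content') with a switch port."""
    forwarding_table[mac] = port

def lookup(mac):
    """Content-based match, average O(1); returns None on a CAM 'miss'."""
    return forwarding_table.get(mac)

learn("aa:bb:cc:dd:ee:01", 3)
learn("aa:bb:cc:dd:ee:02", 7)
print(lookup("aa:bb:cc:dd:ee:01"))   # 3
print(lookup("ff:ff:ff:ff:ff:ff"))   # None (miss)
```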
This has allowed computing to become a commodity which is now ubiquitous, embedded in many forms, from greeting cards and telephones to satellites. The thermal design power which is dissipated during operation has become as essential as computing speed of operation. In 2006 servers consumed 1.5% of the total energy budget of the U.S. The energy consumption of computer data centers was expected to double to 3% of world consumption by 2011. The SoC (system on a chip) has compressed even more of the integrated circuitry into a single chip; SoCs are enabling phones and PCs to converge into single hand-held wireless mobile devices.
MIT Technology Review reported 10 November 2017 that IBM had created a 50-qubit computer; currently its quantum state lasts 50 microseconds. Physical Review X reported a technique for 'single-gate sensing as a viable readout method for spin qubits' (a singlet-triplet spin state in silicon) on 26 November 2018. A Google team has succeeded in operating their RF pulse modulator chip at 3 kelvin, simplifying the cryogenics of their 72-qubit computer, which is set up to operate at 0.3 kelvin; but the readout circuitry and another driver remain to be brought into the cryogenics. (See: Quantum supremacy.) Silicon qubit systems have demonstrated entanglement at non-local distances.
Computing hardware and its software have even become a metaphor for the operation of the universe.
An indication of the rapidity of development of this field can be inferred from the history of the seminal 1947 article by Burks, Goldstine and von Neumann. By the time that anyone had time to write anything down, it was obsolete. After 1945, others read John von Neumann's "First Draft of a Report on the EDVAC", and immediately started implementing their own systems. To this day, the rapid pace of development has continued, worldwide.
A 1966 article in "Time" predicted that: "By 2000, the machines will be producing so much that everyone in the U.S. will, in effect, be independently wealthy. How to use leisure time will be a major problem."
Hausdorff space
In topology and related branches of mathematics, a Hausdorff space, separated space or T2 space is a topological space where for any two distinct points there exist neighbourhoods of each which are disjoint from each other. Of the many separation axioms that can be imposed on a topological space, the "Hausdorff condition" (T2) is the most frequently used and discussed. It implies the uniqueness of limits of sequences, nets, and filters.
Hausdorff spaces are named after Felix Hausdorff, one of the founders of topology. Hausdorff's original definition of a topological space (in 1914) included the Hausdorff condition as an axiom.
Points "x" and "y" in a topological space "X" can be "separated by neighbourhoods" if there exists a neighbourhood "U" of "x" and a neighbourhood "V" of "y" such that "U" and "V" are disjoint ("U" ∩ "V" = ∅).
"X" is a Hausdorff space if all distinct points in "X" are pairwise neighbourhood-separable. This condition is the third separation axiom (after T0 and T1), which is why Hausdorff spaces are also called T2 spaces. The name "separated space" is also used.
A related, but weaker, notion is that of a preregular space. "X" is a preregular space if any two topologically distinguishable points can be separated by disjoint neighbourhoods. Preregular spaces are also called "R1 spaces".
The relationship between these two conditions is as follows. A topological space is Hausdorff if and only if it is both preregular (i.e. topologically distinguishable points are separated by neighbourhoods) and Kolmogorov (i.e. distinct points are topologically distinguishable). A topological space is preregular if and only if its Kolmogorov quotient is Hausdorff.
For a topological space "X", the following are equivalent: "X" is a Hausdorff space; limits of nets in "X" are unique; limits of filters on "X" are unique; the diagonal Δ = {("x", "x") : "x" ∈ "X"} is closed as a subset of the product space "X" × "X".
Almost all spaces encountered in analysis are Hausdorff; most importantly, the real numbers (under the standard metric topology on real numbers) are a Hausdorff space. More generally, all metric spaces are Hausdorff. In fact, many spaces of use in analysis, such as topological groups and topological manifolds, have the Hausdorff condition explicitly stated in their definitions.
A simple example of a topology that is T1 but is not Hausdorff is the cofinite topology defined on an infinite set.
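To see this (a standard argument, not spelled out in the text): in the cofinite topology the nonempty open sets are exactly the complements of finite sets, so any two of them must intersect.

```latex
% Sketch: the cofinite topology on an infinite set $X$ is T1 but not Hausdorff.
% T1: for $x \neq y$, the open set $X \setminus \{y\}$ contains $x$ but not $y$.
% Not Hausdorff: if $U$ and $V$ are nonempty open sets, their complements are
% finite, hence
\[
X \setminus (U \cap V) \;=\; (X \setminus U) \cup (X \setminus V)
\]
% is finite while $X$ is infinite, so $U \cap V \neq \emptyset$: no two points
% have disjoint neighbourhoods.
```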
Pseudometric spaces typically are not Hausdorff, but they are preregular, and their use in analysis is usually only in the construction of Hausdorff gauge spaces. Indeed, when analysts run across a non-Hausdorff space, it is probably still at least preregular; they then simply replace it with its Kolmogorov quotient, which is Hausdorff.
In contrast, non-preregular spaces are encountered much more frequently in abstract algebra and algebraic geometry, in particular as the Zariski topology on an algebraic variety or the spectrum of a ring. They also arise in the model theory of intuitionistic logic: every complete Heyting algebra is the algebra of open sets of some topological space, but this space need not be preregular, much less Hausdorff, and in fact usually is neither. The related concept of Scott domain also consists of non-preregular spaces.
While the existence of unique limits for convergent nets and filters implies that a space is Hausdorff, there are non-Hausdorff T1 spaces in which every convergent sequence has a unique limit.
Subspaces and products of Hausdorff spaces are Hausdorff, but quotient spaces of Hausdorff spaces need not be Hausdorff. In fact, "every" topological space can be realized as the quotient of some Hausdorff space.
Hausdorff spaces are T1, meaning that all singletons are closed. Similarly, preregular spaces are R0.
Another nice property of Hausdorff spaces is that compact sets are always closed. This may fail in non-Hausdorff spaces such as the Sierpiński space.
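This failure can be checked by brute force on the two-point Sierpiński space. The sketch below (illustrative, not from the text) enumerates open neighbourhoods directly:

```python
# Finite-model sanity check: the Sierpinski space {0, 1} with open sets
# {}, {1}, {0, 1} is not Hausdorff, and its compact subset {1} is not closed.

from itertools import product

def is_hausdorff(points, opens):
    """Brute-force check of the Hausdorff condition on a finite space."""
    for x, y in product(points, repeat=2):
        if x == y:
            continue
        # look for disjoint open neighbourhoods U of x and V of y
        if not any(x in U and y in V and not (U & V)
                   for U in opens for V in opens):
            return False
    return True

points = {0, 1}
opens = [set(), {1}, {0, 1}]          # the Sierpinski topology

print(is_hausdorff(points, opens))    # False
closed_sets = [points - U for U in opens]
print({1} in closed_sets)             # False: {1} is compact but not closed

# the discrete topology on the same points is Hausdorff
discrete = [set(), {0}, {1}, {0, 1}]
print(is_hausdorff(points, discrete)) # True
```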
The definition of a Hausdorff space says that points can be separated by neighborhoods. It turns out that this implies something which is seemingly stronger: in a Hausdorff space every pair of disjoint compact sets can also be separated by neighborhoods, in other words there is a neighborhood of one set and a neighborhood of the other, such that the two neighborhoods are disjoint. This is an example of the general rule that compact sets often behave like points.
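A sketch of the standard argument (not given in the text) runs as follows, first separating a point from a compact set:

```latex
% Let $K$ be compact and $x \notin K$ in a Hausdorff space. For each $y \in K$
% choose disjoint open sets $U_y \ni x$ and $V_y \ni y$. By compactness,
% finitely many $V_{y_1}, \dots, V_{y_n}$ cover $K$; then
\[
U = \bigcap_{i=1}^{n} U_{y_i}, \qquad V = \bigcup_{i=1}^{n} V_{y_i}
\]
% are disjoint open sets with $x \in U$ and $K \subseteq V$. Applying this
% point--set separation once more, with the points of a second disjoint
% compact set in place of $x$, separates the two compact sets. Closedness of
% $K$ also follows: every $x \notin K$ has a neighbourhood $U$ missing $K$.
```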
Compactness conditions together with preregularity often imply stronger separation axioms. For example, any locally compact preregular space is completely regular. Compact preregular spaces are normal, meaning that they satisfy Urysohn's lemma and the Tietze extension theorem and have partitions of unity subordinate to locally finite open covers. The Hausdorff versions of these statements are: every locally compact Hausdorff space is Tychonoff, and every compact Hausdorff space is normal Hausdorff.
The following results are some technical properties regarding maps (continuous and otherwise) to and from Hausdorff spaces.
Let "f" : "X" → "Y" be a continuous function and suppose "Y" is Hausdorff. Then the graph of "f", {("x", "f"("x")) : "x" ∈ "X"}, is a closed subset of "X" × "Y".
Let "f" : "X" → "Y" be a function and let ker("f") = {("x", "x"′) : "f"("x") = "f"("x"′)} be its kernel regarded as a subspace of "X" × "X".
If "f", "g" : "X" → "Y" are continuous maps and "Y" is Hausdorff then the equalizer eq("f", "g") = {"x" ∈ "X" : "f"("x") = "g"("x")} is closed in "X". It follows that if "Y" is Hausdorff and "f" and "g" agree on a dense subset of "X" then "f" = "g". In other words, continuous functions into Hausdorff spaces are determined by their values on dense subsets.
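A proof sketch of the standard argument, under the stated hypotheses:

```latex
% The map $h = (f, g) \colon X \to Y \times Y$, $h(x) = (f(x), g(x))$, is
% continuous, and in a Hausdorff space $Y$ the diagonal
% $\Delta = \{(y, y) : y \in Y\}$ is closed in $Y \times Y$. Hence
\[
\operatorname{eq}(f, g) \;=\; \{x \in X : f(x) = g(x)\} \;=\; h^{-1}(\Delta)
\]
% is closed, being the preimage of a closed set. If $f = g$ on a dense set
% $D$, then $D \subseteq \operatorname{eq}(f, g)$, and since
% $\operatorname{eq}(f, g)$ is closed it contains the closure of $D$, which
% is all of $X$; so $f = g$.
```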
Let "f" : "X" → "Y" be a closed surjection such that "f"−1("y") is compact for all "y" ∈ "Y". Then if "X" is Hausdorff so is "Y".
Let "f" : "X" → "Y" be a quotient map with "X" a compact Hausdorff space. Then the following are equivalent: "Y" is Hausdorff; "f" is a closed map; ker("f") is closed.
All regular spaces are preregular, as are all Hausdorff spaces. There are many results for topological spaces that hold for both regular and Hausdorff spaces.
Most of the time, these results hold for all preregular spaces; they were listed for regular and Hausdorff spaces separately because the idea of preregular spaces came later.
On the other hand, those results that are truly about regularity generally do not also apply to nonregular Hausdorff spaces.
There are many situations where another condition of topological spaces (such as paracompactness or local compactness) will imply regularity if preregularity is satisfied.
Such conditions often come in two versions: a regular version and a Hausdorff version.
Although Hausdorff spaces are not, in general, regular, a Hausdorff space that is also (say) locally compact will be regular, because any Hausdorff space is preregular.
Thus from a certain point of view, it is really preregularity, rather than regularity, that matters in these situations.
However, definitions are usually still phrased in terms of regularity, since this condition is better known than preregularity.
See History of the separation axioms for more on this issue.
The terms "Hausdorff", "separated", and "preregular" can also be applied to such variants on topological spaces as uniform spaces, Cauchy spaces, and convergence spaces.
The characteristic that unites the concept in all of these examples is that limits of nets and filters (when they exist) are unique (for separated spaces) or unique up to topological indistinguishability (for preregular spaces).
As it turns out, uniform spaces, and more generally Cauchy spaces, are always preregular, so the Hausdorff condition in these cases reduces to the T0 condition.
These are also the spaces in which completeness makes sense, and Hausdorffness is a natural companion to completeness in these cases.
Specifically, a space is complete if and only if every Cauchy net has at "least" one limit, while a space is Hausdorff if and only if every Cauchy net has at "most" one limit (since only Cauchy nets can have limits in the first place).
The algebra of continuous (real or complex) functions on a compact Hausdorff space is a commutative C*-algebra, and conversely by the Banach–Stone theorem one can recover the topology of the space from the algebraic properties of its algebra of continuous functions. This leads to noncommutative geometry, where one considers noncommutative C*-algebras as representing algebras of functions on a noncommutative space.
Hawkwind
Hawkwind are an English rock band known as one of the earliest space rock groups. Since their formation in November 1969, Hawkwind have gone through many incarnations and have incorporated many different styles into their music, including hard rock, progressive rock and psychedelic rock. They are also regarded as an influential proto-punk band. Their lyrics favour urban and science fiction themes.
Many musicians, dancers and writers have worked with the band since their inception. Notable musicians who have performed in Hawkwind include Lemmy, Ginger Baker, Robert Calvert, Nik Turner and Huw Lloyd-Langton. However, the band are most closely associated with their founder, singer, songwriter and guitarist Dave Brock, who is the only remaining original member.
Hawkwind are best known for the song "Silver Machine", which became a number three UK hit single in 1972, but they scored further hit singles with "Urban Guerrilla" (another Top 40 hit) and "Shot Down in the Night". The band had a run of twenty-two of their albums charting in the UK from 1971 to 1993.
Dave Brock and Mick Slattery had been in the London-based psychedelic band Famous Cure, and a meeting with bassist John Harrison revealed a mutual interest in electronic music which led the trio to embark upon a new musical venture together. Seventeen-year-old drummer Terry Ollis replied to an advert in a music weekly, while Nik Turner and Michael "Dik Mik" Davies, old acquaintances of Brock, offered help with transport and gear, but were soon pulled into the band.
Gatecrashing a local talent night at the All Saints Hall, Notting Hill, they were so disorganised that they had neither a name, opting for "Group X" at the last minute, nor any songs, choosing to play an extended 20-minute jam on the Byrds' "Eight Miles High". BBC Radio 1 DJ John Peel was in the audience and was impressed enough to tell the event organiser, Douglas Smith, to keep an eye on them. Smith signed them up and got them a deal with Liberty Records on the back of a deal he was setting up for Cochise.
The band settled on the name "Hawkwind" after briefly being billed as "Group X" and "Hawkwind Zoo".
An Abbey Road session took place recording demos of "Hurry on Sundown" and others (included on the remasters version of "Hawkwind"), after which Slattery left to be replaced by Huw Lloyd-Langton, who had known Brock from his days working in a music shop selling guitar strings to Brock, then a busker.
Pretty Things guitarist Dick Taylor was brought in to produce the 1970 debut album "Hawkwind". Although it was not a commercial success, it did bring them to the attention of the UK underground scene, which found them playing free concerts, benefit gigs, and festivals. Playing free outside the Bath Festival, they encountered another Ladbroke Grove based band, the Pink Fairies, who shared similar interests in music and recreational activities; a friendship developed which led to the two bands becoming running partners and performing as "Pinkwind". Their use of drugs, however, led to the departure of Harrison, who did not partake, to be replaced briefly by Thomas Crimble (about July 1970 – March 1971). Crimble played on a few BBC sessions before leaving to help organise the Glastonbury Free Festival 1971; he sat in during the band's performance there. Lloyd-Langton also quit, after a bad LSD trip at the Isle of Wight Festival led to a nervous breakdown.
Their follow-up album, 1971's "In Search of Space", brought greater commercial success, reaching number 18 on the UK album charts. This album offered a refinement of the band's image and philosophy courtesy of graphic artist Barney Bubbles and underground press writer Robert Calvert, as depicted in the accompanying "Hawklog" booklet, which would be further developed into the "Space Ritual" stage show. Science fiction author Michael Moorcock and dancer Stacia also started contributing to the band. Dik Mik had left the band, replaced by sound engineer Del Dettmar, but chose to return for this album giving the band two electronics players. Bass player Dave Anderson, who had been in the German band Amon Düül II, had also joined and played on the album but departed before its release because of personal tensions with some other members of the band. Anderson and Lloyd-Langton then formed the short-lived band Amon Din. Meanwhile, Ollis quit, unhappy with the commercial direction the band were heading in.
The addition of bassist Ian "Lemmy" Kilmister and drummer Simon King propelled the band to greater heights. One of the early gigs the band played was a benefit for the Greasy Truckers at The Roundhouse on 13 February 1972. A live album of the concert, "Greasy Truckers Party", was released, and after re-recording the vocal, a single, "Silver Machine", was also released, reaching number three in the UK charts. This generated sufficient funds for the Space Ritual tour that accompanied the subsequent album "Doremi Fasol Latido". The show featured dancers Stacia and Miss Renee typically performing either topless or wearing only body paint, mime artist Tony Carrera and a light show by Liquid Len, and was recorded on the elaborate package "Space Ritual". At the height of their success, in 1973, the band released the single "Urban Guerrilla", which coincided with an IRA bombing campaign in London, so the BBC refused to play it and the band's management reluctantly decided to withdraw it, fearing accusations of opportunism, despite the disc having already climbed to number 39 in the UK chart.
Dik Mik departed during 1973, and Calvert ended his association with the band to concentrate on solo projects. Dettmar also indicated that he was to leave the band, so Simon House was recruited as keyboardist and violinist, playing live shows and a North American tour and recording the 1974 album "Hall of the Mountain Grill". Dettmar left after a European tour and emigrated to Canada, whilst Alan Powell deputised for an incapacitated King on that European tour and then remained, giving the band two drummers.
At the beginning of 1975, the band recorded the album "Warrior on the Edge of Time" in collaboration with Michael Moorcock, loosely based on his Eternal Champion figure. However, during a North American tour in May, Lemmy was caught in possession of amphetamine crossing the border from the US into Canada. The border police mistook the powder for cocaine and he was jailed, forcing the band to cancel some shows. Fed up with his erratic behaviour, the band dismissed the bass player, replacing him with their long-standing friend and former Pink Fairies guitarist Paul Rudolph. Lemmy then teamed up with another Pink Fairies guitarist, Larry Wallis, to form Motörhead, named after the last song he had written for Hawkwind.
Calvert made a guest appearance with the band for their headline set at the Reading Festival in August 1975, after which he chose to rejoin the band as a full-time lead vocalist. Stacia chose to relinquish her dancing duties and settle down to family life. The band changed record company to Tony Stratton-Smith's Charisma Records and, on Stratton-Smith's suggestion, band management from Douglas Smith to Tony Howard.
"Astounding Sounds, Amazing Music" is the first album of this era. On the eve of recording the follow-up "Back on the Streets" single, Turner was dismissed for his erratic live playing and Powell was deemed surplus to requirements. After a tour to promote the single and during the recording of the next album, Rudolph was also dismissed, for allegedly trying to steer the band into a musical direction at odds with Calvert and Brock's vision.
Adrian "Ade" Shaw, who, as bass player for Magic Muscle, had supported Hawkwind on the "Space Ritual" tour, came in for the 1977 album "Quark, Strangeness and Charm". The band continued to enjoy moderate commercial success, but Calvert's mental illness often caused problems. A manic phase saw the band abandon a European tour in France, while a depression phase during a 1978 North American tour convinced Brock to disband the group. In between these two tours, the band had recorded the album "PXR5" in January 1978, but its release was delayed until 1979.
On 23 December 1977 in Barnstaple, Brock and Calvert had performed a one-off gig with Devon band Ark as the Sonic Assassins, and looking for a new project in 1978, bassist Harvey Bainbridge and drummer Martin Griffin were recruited from this event. Steve Swindells was recruited as keyboard player. The band was named Hawklords (probably for legal reasons, the band having recently split from their management), and recording took place on a farm in Devon using a mobile studio, resulting in the album "25 Years On". King had originally been the drummer for the project but quit during recording sessions to return to London, while House, who had temporarily left the band to join a David Bowie tour, elected to remain with Bowie full-time, but nevertheless contributed violin to these sessions. At the end of the band's UK tour, Calvert, wanting King back in the band, dismissed Griffin, then promptly resigned himself, choosing to pursue a career in literature. Swindells left to record a solo album after an offer had been made to him by the record company ATCO.
In late 1979, Hawkwind reformed with Brock, Bainbridge and King being joined by Huw Lloyd-Langton (who had played on the debut album) and Tim Blake (formerly of Gong), embarking upon a UK tour despite not having a record deal or any product to promote. Some shows were recorded and a deal was made with Bronze Records, resulting in the "Live Seventy Nine" album, quickly followed by the studio album "Levitation". However, during the recording of "Levitation" King quit and Ginger Baker was drafted in for the sessions, but he chose to stay with the band for the tour, during which Blake left to be replaced by Keith Hale.
In 1981 Baker and Hale left after their insistence that Bainbridge should be dismissed was ignored, and Brock and Bainbridge elected to handle synthesisers and sequencers themselves, with drummer Griffin from the Hawklords rejoining. Three albums, which again saw Moorcock contributing lyrics and vocals, were recorded for RCA/Active: "Sonic Attack", the electronic "Church of Hawkwind" and "Choose Your Masques". This band headlined the 1981 Glastonbury Festival and made an appearance at the 1982 Donington Monsters of Rock Festival, as well as continuing to play the summer solstice at Stonehenge Free Festival.
In the early 1980s, Brock had started using drum machines for his home demos and became increasingly frustrated at the inability of drummers to keep perfect time, leading to a succession of drummers coming and going. First, Griffin was ousted and the band tried King again, but, unhappy with his playing at that time, he was rejected. Andy Anderson briefly joined while he was also playing for The Cure, and Robert Heaton also filled the spot briefly prior to the rise of New Model Army. Lloyd Langton Group drummer John Clark did some recording sessions, and in late 1983 Rick Martinez joined the band to play drums on the "Earth Ritual" tour in February and March 1984, later replaced by Clive Deamer.
Turner had returned as a guest for the 1982 "Choose Your Masques" tour and was invited back permanently. Further tours ensued with Phil "Dead Fred" Reeves augmenting the line-up on keyboards and violin, but neither Turner nor Reeves would appear on the only recording of 1983–84, "The Earth Ritual Preview", however there was a guest spot for Lemmy. The "Earth Ritual" tour was filmed for Hawkwind's first video release, "Night of the Hawk".
Alan Davey was a young fan of the band who had sent a tape of his playing to Brock, and Brock chose to oust Reeves, moving Bainbridge from bass to keyboards to accommodate Davey. This experimental line-up played at the Stonehenge Free Festival in 1984, which was filmed and released as "Stonehenge 84". Subsequent personal and professional tensions between Brock and Turner led to the latter's expulsion at the beginning of 1985. Clive Deamer, who was deemed "too professional" for the band, was eventually replaced in 1985 by Danny Thompson Jr (son of folk-rock bassist Danny Thompson), a friend of Alan Davey, who remained with the band almost to the end of the decade.
Hawkwind's association with Moorcock climaxed in their most ambitious project, "The Chronicle of the Black Sword", based loosely around the Elric series of books and theatrically staged with Tony Crerar as the central character. Moorcock contributed lyrics, but only performed some spoken pieces on some live dates. The tour was recorded and issued as an album "Live Chronicles" and video "The Chronicle of the Black Sword". The band also performed at the Worldcon (World Science Fiction Convention) in Brighton.
In August 1985, the band performed at Crystal Palace Bowl, with several other rock bands, at a benefit concert for Pete Townshend's Double-O anti-heroin charity. Lemmy and Stacia were reunited with the band for this event. Vera Lynn closed the show.
A headline appearance at the 1986 Reading Festival was followed by a UK tour to promote the "Live Chronicles" album which was filmed and released as "Chaos". In 1988 the band recorded the album "The Xenon Codex" with Guy Bidmead, but all was not well in the band and soon after, both Lloyd-Langton and Thompson departed.
Drummer Richard Chadwick, who joined in the summer of 1988, had been playing in small alternative free-festival bands, most notably Bath's Smart Pils, for a decade and had frequently crossed paths with Hawkwind and Brock. He was initially invited simply to play with the band, but eventually replaced stand-in drummer Mick Kirton and has remained the band's drummer to the present day.
To fill the gap in lead sound left by Lloyd-Langton's departure, violinist House was reinstated into the line-up in 1989 (having previously been a member from 1974 until 1978), and, notably, Hawkwind embarked on their first North American visit in eleven years (since the somewhat disastrous 1978 tour), in which House did not take part. The successfully received tour was the first of several over the coming years, as the band sought to re-introduce themselves to the American market.
Bridget Wishart, an associate of Chadwick's from the festival circuit, also joined, becoming the band's one and only singing front-woman; the band had been fronted in earlier days by Stacia, but only as a dancer. This line-up produced two albums, 1990's "Space Bandits" and 1991's "Palace Springs", and also filmed a one-hour appearance for the "Bedrock TV" series with dancer Julie Murray-Anderson, who performed with Hawkwind between 1988 and 1991.
1990 saw Hawkwind tour North America again, the second instalment in a series of American visits made at around this time in an effort to re-establish the Hawkwind brand in America. The original business plan was to hold three consecutive US tours, annually, from 1989–1991, with the first losing money, the second breaking even, and the third turning a profit, ultimately bringing Hawkwind back into recognition across the Atlantic. Progress, however, was somewhat stunted because ex-member Nik Turner was touring the United States with his own band at the time, whose shows were often marketed as Hawkwind.
Still supporting Space Bandits, 1991 commenced with perhaps the most surprising Hawkwind tour in the band's history, without Dave Brock. Brock's temporary replacement was former Smart Pils guitarist Steve Bemand (who had played with Chadwick and Wishart in the Demented Stoats). The tour began in Amsterdam on 12 March and took in Germany, Greece, Italy and France before wrapping up in Belgium on 10 April after 24 dates.
In 1991 Bainbridge, House and Wishart departed and the band continued as a three-piece, relying heavily on synthesisers and sequencers to create a wall of sound. The 1992 album "Electric Tepee" combined hard rock and light ambient pieces, while "It is the Business of the Future to be Dangerous" is almost devoid of the rock leanings. "The Business Trip" is a record of the previous album's tour, but rockier as would be expected from a live outing. The "White Zone" album was released under the alias Psychedelic Warriors to distance itself entirely from the rock expectancy of Hawkwind.
A general criticism of techno music at that time was its facelessness and lack of personality, which the band were coming to feel also plagued them. Ron Tree had known the band from the festival circuit and offered his services as a front-man; the band duly employed him for the album "Alien 4" and its accompanying tour, which resulted in the live album "Love in Space" and an accompanying video.
In 1996, unhappy with the musical direction of the band, bassist Davey left, forming his own Middle-Eastern flavoured hard-rock group Bedouin and a Motörhead tribute act named Ace of Spades. His bass playing role was reluctantly picked up by singer Tree and the band were joined full-time by lead guitarist Jerry Richards (another stalwart of the festival scene, playing for Tubilah Dog who had merged with Brock's Agents of Chaos during 1988) for the albums "Distant Horizons" and "In Your Area". Rasta chanter Captain Rizz also joined the band for guest spots during live shows.
Hawkestra — a re-union event featuring appearances from past and present members — had originally been intended to coincide with the band's 30th anniversary and the release of the career-spanning "Epocheclipse – 30 Year Anthology" set, but logistical problems delayed it until 21 October 2000. It took place at the Brixton Academy with about 20 members taking part in a more than three-hour set, which was filmed and recorded. Guests included Samantha Fox, who sang "Master of the Universe". However, arguments and disputes over financial recompense and musical input made the prospect of the event being re-staged unlikely, and any album or DVD release was indefinitely shelved.
The Hawkestra had set a template for Brock to assemble a core band of Tree, Brock, Richards, Davey, Chadwick and for the use of former members as guests on live shows and studio recordings. The 2000 Christmas Astoria show was recorded with contributions from House, Blake, Rizz, Moorcock, Jez Huggett and Keith Kniveton and released as "Yule Ritual" the following year. In 2001, Davey agreed to rejoin the band permanently, but only after the departure of Tree and Richards.
Meanwhile, having rekindled relationships with old friends at the Hawkestra, Turner organised further Hawkestra gigs resulting in the formation of xhawkwind.com, a band consisting mainly of ex-Hawkwind members and playing old Hawkwind songs. An appearance at Guilfest in 2002 led to confusion as to whether this actually was Hawkwind, sufficiently irking Brock into taking legal action to prohibit Turner from trading under the name Hawkwind. Turner lost the case and the band began performing as Space Ritual.
An appearance at the Canterbury Sound Festival in August 2001, resulting in another live album "Canterbury Fayre 2001", saw guest appearances from Lloyd-Langton, House, Kniveton with Arthur Brown on "Silver Machine". The band organised the first of their own weekend festivals, named Hawkfest, in Devon in the summer of 2002. Brown joined the band in 2002 for a Winter tour which featured some Kingdom Come songs and saw appearances from Blake and Lloyd-Langton, the Newcastle show being released on DVD as "Out of the Shadows" and the London show on CD as "Spaced Out in London".
In 2005 a new album "Take Me to Your Leader" was released. Recorded by the core band of Brock/Davey/Chadwick, contributors included new keyboardist Jason Stuart, Arthur Brown, tabloid writer and TV personality Matthew Wright, 1970s New Wave singer Lene Lovich, Simon House and Jez Huggett. This was followed in 2006 by the CD/DVD "Take Me to Your Future".
The band were the subject of an hour-long television documentary entitled "Hawkwind: Do Not Panic" that aired on BBC Four as part of the "Originals" series. It was broadcast on 30 March 2007 and repeated on 10 August 2007. Although Brock participated in its making, he did not appear in the programme; it is alleged that he requested all footage of himself be removed after he was denied any artistic control over the documentary. In one of the documentary's opening narratives regarding Brock, it is stated that he declined to be interviewed for the programme because of Nik Turner's involvement, indicating that the two men had still not reconciled over the xhawkwind.com incident.
December 2006 saw the official departure of Alan Davey, who left to perform and record with two new bands: Gunslinger and Thunor. He was replaced by Mr Dibs, a long-standing member of the road crew. The band performed at their annual Hawkfest festival, headlined the US festival Nearfest and played gigs in Pennsylvania and New York. At the end of 2007, Tim Blake once again joined the band, filling the lead role playing keyboards and theremin. The band played five Christmas dates, the London show being released as an audio CD and video DVD under the title "Knights of Space".
In January 2008 the band reversed its anti-taping policy, long a sore-point with many fans, announcing that it would allow audio recording and non-commercial distribution of such recordings, provided there was no competing official release. At the end of 2008, Atomhenge Records (a subsidiary of Cherry Red Records) commenced the re-issuing of Hawkwind's back catalogue from the years 1976 through to 1997 with the release of two triple CD anthologies "Spirit of the Age (anthology 1976–84)" and "The Dream Goes On (anthology 1985–97)".
On 8 September 2008 keyboard player Jason Stuart died due to a brain haemorrhage. In October 2008, Niall Hone (former Tribe of Cro) joined Hawkwind for their Winter 2008 tour playing guitar, along with returning synth/theremin player Tim Blake. In this period, Hone also occasionally played bass guitar alongside Mr Dibs and used laptops for live electronic improvisation.
In 2009, the band began occasionally featuring Jon Sevink from The Levellers as guest violinist at some shows. Later that year, Hawkwind embarked on a winter tour to celebrate the band's 40th anniversary, including two gigs on 28 and 29 August marking the anniversary of their first live performances. In 2010, Hawkwind held their annual Hawkfest at the site of the original Isle of Wight Festival, marking the 40th anniversary of their appearance there.
On 21 June 2010, Hawkwind released a studio album entitled "Blood of the Earth" on Eastworld Records. During and since the "Blood of the Earth" support tours, Hone's primary on-stage responsibility shifted to bass, while Mr. Dibs moved to a more traditional lead singer/front man role.
In 2011, Hawkwind toured Australia for the second time.
April 2012 saw the release of a new album, "Onward", again on Eastworld. Keyboardist Dead Fred rejoined Hawkwind for the 2012 tour in support of "Onward" and has since remained with the band. In November 2012, Brock, Chadwick and Hone — credited as "Hawkwind Light Orchestra" — released "Stellar Variations" on Esoteric Recordings.
2013 marked the first Hawkeaster, a two-day festival held in Seaton, Devon during the Easter weekend. A US tour was booked for October 2013, but due to health issues, was postponed and later cancelled.
In February 2014, as part of a one-off Space Ritual performance, Hawkwind performed at the O2 Shepherd's Bush Empire featuring an appearance by Brian Blessed for the spoken word element of Sonic Attack; a studio recording of this performance was released as a single in September 2014. Later in the year, former Soft Machine guitarist John Etheridge joined the live line-up of the band, though he had departed again prior to early 2015 dates.
Following Hawkeaster 2015, Hawkwind made their debut visit to Japan, playing two sold-out shows in Tokyo. Hawkwind performed two Solstice Ritual shows in December 2015, with Steve Hillage guesting, and Haz Wheaton joining Hawkwind on bass guitar. Wheaton is a former member of the band's road crew who had previously appeared with Technicians of Spaceship Hawkwind, a "skeleton crew" spin off live band. Additionally, he had guested on bass for Dave Brock's solo album "Brockworld" released earlier in the year.
The band released "The Machine Stops" on 15 April 2016. The album marked Wheaton's first appearance on a Hawkwind studio album, and the first album without Tim Blake's involvement since he had rejoined the band in 2010 and appeared on "Blood of the Earth". His departure was offset by increased synthesiser work by Hone and Brock.
Dead Fred's last live appearance with Hawkwind was at the Eastbourne Winter Gardens on 1 April 2016. Hone took over keyboard and synth duties live until Blake returned for shows in summer 2016.
It was announced in November 2016 that Hawkwind were recording a new studio album, entitled "Into The Woods". Keyboardist-guitarist Magnus Martin replaced both Hone and Blake in the lineup for the new album, leaving the 2017 core band composed of Brock, Chadwick, Mr Dibs, Wheaton and Martin.
In 2018, Hawkwind recorded an acoustic album, "The Road to Utopia", consisting primarily of cover versions of their 1970s songs with production, arrangement and additional orchestrations by Mike Batt and a guest appearance from Eric Clapton. Batt was scheduled to conduct a series of concerts of Hawkwind songs featuring the band and orchestra in October and November.
In May 2018, Haz Wheaton left and later joined Electric Wizard; Niall Hone returned on bass. Mr Dibs left on 22 August, citing "irreconcilable differences" in a statement on the Hawkwind fans Facebook page.
In October 2019, Hawkwind released "All Aboard the Skylark," marketed as a return to their space rock roots. This was the first album with the line-up of Brock, Chadwick, Hone, and Martin. Accompanying the CD version, and sold as a separate vinyl LP, was "Acoustic Daze." This recording included tracks from the 2018 album "The Road to Utopia", minus the additions by Batt and Clapton.
Hawkwind have been cited as an influence by artists such as Al Jourgensen of Ministry, Monster Magnet, the Sex Pistols (who covered "Silver Machine"), Henry Rollins and Dez Cadena of Black Flag, Siobhan Fahey, Ty Segall, The Mekano Set, and Ozric Tentacles.
Hard rock musician Lemmy of the band Motörhead gained a lot from his tenure in Hawkwind. He has remarked, "I really found myself as an instrumentalist in Hawkwind. Before that I was just a guitar player who was pretending to be good, when actually I was no good at all. In Hawkwind I became a good bass player. It was where I learned I was good at something."
Current members
Horse
The horse ("Equus ferus caballus") is one of two extant subspecies of "Equus ferus". It is an odd-toed ungulate mammal belonging to the taxonomic family Equidae. The horse has evolved over the past 45 to 55 million years from a small multi-toed creature, "Eohippus", into the large, single-toed animal of today. Humans began domesticating horses around 4000 BC, and their domestication is believed to have been widespread by 3000 BC. Horses in the subspecies "caballus" are domesticated, although some domesticated populations live in the wild as feral horses. These feral populations are not true wild horses, as this term is used to describe horses that have never been domesticated, such as the endangered Przewalski's horse, a separate subspecies, and the only remaining true wild horse. There is an extensive, specialized vocabulary used to describe equine-related concepts, covering everything from anatomy to life stages, size, colors, markings, breeds, locomotion, and behavior.
Horses are adapted to run, allowing them to quickly escape predators, possessing an excellent sense of balance and a strong fight-or-flight response. Related to this need to flee from predators in the wild is an unusual trait: horses are able to sleep both standing up and lying down, with younger horses tending to sleep significantly more than adults. Female horses, called mares, carry their young for approximately 11 months, and a young horse, called a foal, can stand and run shortly following birth. Most domesticated horses begin training under a saddle or in a harness between the ages of two and four. They reach full adult development by age five, and have an average lifespan of between 25 and 30 years.
Horse breeds are loosely divided into three categories based on general temperament: spirited "hot bloods" with speed and endurance; "cold bloods", such as draft horses and some ponies, suitable for slow, heavy work; and "warmbloods", developed from crosses between hot bloods and cold bloods, often focusing on creating breeds for specific riding purposes, particularly in Europe. There are more than 300 breeds of horse in the world today, developed for many different uses.
Horses and humans interact in a wide variety of sport competitions and non-competitive recreational pursuits, as well as in working activities such as police work, agriculture, entertainment, and therapy. Horses were historically used in warfare, from which a wide variety of riding and driving techniques developed, using many different styles of equipment and methods of control. Many products are derived from horses, including meat, milk, hide, hair, bone, and pharmaceuticals extracted from the urine of pregnant mares. Humans provide domesticated horses with food, water, and shelter, as well as attention from specialists such as veterinarians and farriers.
Specific terms and specialized language are used to describe equine anatomy, different life stages, and colors and breeds.
Depending on breed, management and environment, the modern domestic horse has a life expectancy of 25 to 30 years. Uncommonly, a few animals live into their 40s and, occasionally, beyond. The oldest verifiable record was "Old Billy", a 19th-century horse that lived to the age of 62. In modern times, Sugar Puff, who had been listed in "Guinness World Records" as the world's oldest living pony, died in 2007 at age 56.
Regardless of a horse or pony's actual birth date, for most competition purposes a year is added to its age on January 1 of each year in the Northern Hemisphere and on August 1 in the Southern Hemisphere. The exception is endurance riding, where the minimum age to compete is based on the animal's actual calendar age.
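The rollover convention above is simple to implement. The sketch below (the function name is illustrative, not taken from any equestrian body's software) derives a horse's competition age from its foaling date:

```python
from datetime import date

def competition_age(foaled: date, on: date, southern_hemisphere: bool = False) -> int:
    """Competition age: a year is added on 1 January (Northern Hemisphere)
    or 1 August (Southern Hemisphere), regardless of actual birth date."""
    def season(d: date) -> int:
        # A "season" runs from one rollover date to the next.
        if southern_hemisphere:
            return d.year if d.month >= 8 else d.year - 1
        return d.year
    return season(on) - season(foaled)

# A foal born in May 2010 turns "1" on 1 January 2011 in the North,
# even though it is only about eight calendar months old.
```

Endurance riding, as noted above, would instead compare actual calendar dates.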
The following terminology is used to describe horses of various ages:
In horse racing, these definitions may differ: For example, in the British Isles, Thoroughbred horse racing defines colts and fillies as less than five years old. However, Australian Thoroughbred racing defines colts and fillies as less than four years old.
The height of horses is measured at the highest point of the withers, where the neck meets the back. This point is used because it is a stable point of the anatomy, unlike the head or neck, which move up and down in relation to the body of the horse.
In English-speaking countries, the height of horses is often stated in units of hands and inches: one hand is equal to 4 inches (10.16 cm). The height is expressed as the number of full hands, followed by a point, then the number of additional inches, and ending with the abbreviation "h" or "hh" (for "hands high"). Thus, a horse described as "15.2 h" is 15 hands plus 2 inches, for a total of 62 inches (157.5 cm) in height.
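Because the digit after the point is a count of extra inches (0–3) rather than a decimal fraction, the notation is easy to misread. A short sketch (hypothetical helper names) converting a hands figure such as "15.2" into inches and centimeters:

```python
def hands_to_inches(h: str) -> int:
    """Parse a height like "15.2": the digit after the point is extra
    inches (0-3), not a decimal fraction. One hand = 4 inches."""
    hands_part, _, inches_part = h.partition(".")
    extra = int(inches_part) if inches_part else 0
    if not 0 <= extra <= 3:
        raise ValueError("inches after the point must be 0-3")
    return int(hands_part) * 4 + extra

def inches_to_cm(inches: float) -> float:
    return inches * 2.54

# "15.2 h" is 15 * 4 + 2 = 62 inches, about 157.5 cm.
```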
The size of horses varies by breed, but also is influenced by nutrition. Light riding horses usually range in height from and can weigh from . Larger riding horses usually start at about and often are as tall as , weighing from . Heavy or draft horses are usually at least high and can be as tall as high. They can weigh from about .
The largest horse in recorded history was probably a Shire horse named Mammoth, who was born in 1848. He stood high and his peak weight was estimated at . The current record holder for the world's smallest horse is Thumbelina, a fully mature miniature horse affected by dwarfism. She is tall and weighs .
Ponies are taxonomically the same animals as horses. The distinction between a horse and pony is commonly drawn on the basis of height, especially for competition purposes. However, height alone is not dispositive; the difference between horses and ponies may also include aspects of phenotype, including conformation and temperament.
The traditional standard for height of a horse or a pony at maturity is 14.2 hands. An animal 14.2 h or over is usually considered to be a horse and one less than 14.2 h a pony, but there are many exceptions to the traditional standard. In Australia, ponies are considered to be those under . For competition in the Western division of the United States Equestrian Federation, the cutoff is . The International Federation for Equestrian Sports, the world governing body for horse sport, uses metric measurements and defines a pony as being any horse measuring less than at the withers without shoes, which is just over 14.2 h, and , or just over 14.2 h, with shoes.
Height is not the sole criterion for distinguishing horses from ponies. Breed registries for horses that typically produce individuals both under and over 14.2 h consider all animals of that breed to be horses regardless of their height. Conversely, some pony breeds may have features in common with horses, and individual animals may occasionally mature at over 14.2 h, but are still considered to be ponies.
Ponies often exhibit thicker manes, tails, and overall coat. They also have proportionally shorter legs, wider barrels, heavier bone, shorter and thicker necks, and short heads with broad foreheads. They may have calmer temperaments than horses and also a high level of intelligence that may or may not be used to cooperate with human handlers. Small size, by itself, is not an exclusive determinant. For example, the Shetland pony, which averages , is considered a pony. Conversely, breeds such as the Falabella and other miniature horses, which can be no taller than , are classified by their registries as very small horses, not ponies.
Horses have 64 chromosomes. The horse genome was sequenced in 2007. It contains 2.7 billion DNA base pairs, which is larger than the dog genome, but smaller than the human genome or the bovine genome. The map is available to researchers.
Horses exhibit a diverse array of coat colors and distinctive markings, described by a specialized vocabulary. Often, a horse is classified first by its coat color, before breed or sex. Horses of the same color may be distinguished from one another by white markings, which, along with various spotting patterns, are inherited separately from coat color.
Many genes that create horse coat colors and patterns have been identified. Current genetic tests can identify at least 13 different alleles influencing coat color, and research continues to discover new genes linked to specific traits. The basic coat colors of chestnut and black are determined by the gene controlled by the Melanocortin 1 receptor, also known as the "extension gene" or "red factor," as its recessive form is "red" (chestnut) and its dominant form is black. Additional genes control suppression of black color to point coloration that results in a bay, spotting patterns such as pinto or leopard, dilution genes such as palomino or dun, as well as graying, and all the other factors that create the many possible coat colors found in horses.
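The dominant/recessive behavior of the extension ("red factor") locus described above follows ordinary Mendelian logic, which can be sketched as a Punnett-square calculation. This is a simplified illustration for a single locus only — as the text notes, many other genes (agouti, dilutions, spotting, graying) modify the final coat color — and the function names are invented for the example:

```python
from itertools import product
from collections import Counter

def base_color(genotype: str) -> str:
    # Dominant E gives a black-based coat; homozygous ee gives chestnut.
    return "black-based" if "E" in genotype else "chestnut"

def offspring_colors(parent1: str, parent2: str) -> dict:
    """Punnett-square probabilities for the extension (MC1R) locus only.
    Parents are genotypes such as "EE", "Ee", or "ee"."""
    crosses = [a + b for a, b in product(parent1, parent2)]
    counts = Counter(base_color(g) for g in crosses)
    return {color: n / len(crosses) for color, n in counts.items()}

# Two heterozygous (Ee) parents: 3/4 black-based, 1/4 chestnut offspring.
```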
Horses that have a white coat color are often mislabeled; a horse that looks "white" is usually a middle-aged or older gray. Grays are born a darker shade, get lighter as they age, but usually keep black skin underneath their white hair coat (with the exception of pink skin under white markings). The only horses properly called white are born with a predominantly white hair coat and pink skin, a fairly rare occurrence. Different and unrelated genetic factors can produce white coat colors in horses, including several different alleles of dominant white and the sabino-1 gene. However, there are no "albino" horses, defined as having both pink skin and red eyes.
Gestation lasts approximately 340 days, with a normal range of 320–370 days, and usually results in one foal; twins are rare. Horses are a precocial species, and foals are capable of standing and running within a short time following birth. Foals are usually born in the spring. The estrous cycle of a mare occurs roughly every 19–22 days and occurs from early spring into autumn. Most mares enter an "anestrus" period during the winter and thus do not cycle in this period. Foals are generally weaned from their mothers between four and six months of age.
Horses, particularly colts, sometimes are physically capable of reproduction at about 18 months, but domesticated horses are rarely allowed to breed before the age of three, especially females. Horses four years old are considered mature, although the skeleton normally continues to develop until the age of six; maturation also depends on the horse's size, breed, sex, and quality of care. Larger horses have larger bones; therefore, not only do the bones take longer to form bone tissue, but the epiphyseal plates are larger and take longer to convert from cartilage to bone. These plates convert after the other parts of the bones, and are crucial to development.
Depending on maturity, breed, and work expected, horses are usually put under saddle and trained to be ridden between the ages of two and four. Although Thoroughbred race horses are put on the track as young as the age of two in some countries, horses specifically bred for sports such as dressage are generally not put under saddle until they are three or four years old, because their bones and muscles are not solidly developed. For endurance riding competition, horses are not deemed mature enough to compete until they are a full 60 calendar months (five years) old.
The horse skeleton averages 205 bones. A significant difference between the horse skeleton and that of a human is the lack of a collarbone—the horse's forelimbs are attached to the spinal column by a powerful set of muscles, tendons, and ligaments that attach the shoulder blade to the torso. The horse's four legs and hooves are also unique structures. Their leg bones are proportioned differently from those of a human. For example, the body part that is called a horse's "knee" is actually made up of the carpal bones that correspond to the human wrist. Similarly, the hock contains bones equivalent to those in the human ankle and heel. The lower leg bones of a horse correspond to the bones of the human hand or foot, and the fetlock (incorrectly called the "ankle") is actually the proximal sesamoid bones between the cannon bones (a single equivalent to the human metacarpal or metatarsal bones) and the proximal phalanges, located where one finds the "knuckles" of a human. A horse also has no muscles in its legs below the knees and hocks, only skin, hair, bone, tendons, ligaments, cartilage, and the assorted specialized tissues that make up the hoof.
The critical importance of the feet and legs is summed up by the traditional adage, "no foot, no horse". The horse hoof begins with the distal phalanges, the equivalent of the human fingertip or tip of the toe, surrounded by cartilage and other specialized, blood-rich soft tissues such as the laminae. The exterior hoof wall and horn of the sole is made of keratin, the same material as a human fingernail. The end result is that a horse, weighing on average , travels on the same bones as would a human on tiptoe. For the protection of the hoof under certain conditions, some horses have horseshoes placed on their feet by a professional farrier. The hoof continually grows, and in most domesticated horses needs to be trimmed (and horseshoes reset, if used) every five to eight weeks, though the hooves of horses in the wild wear down and regrow at a rate suitable for their terrain.
Horses are adapted to grazing. In an adult horse, there are 12 incisors at the front of the mouth, adapted to biting off the grass or other vegetation. There are 24 teeth adapted for chewing, the premolars and molars, at the back of the mouth. Stallions and geldings have four additional teeth just behind the incisors, a type of canine teeth called "tushes". Some horses, both male and female, will also develop one to four very small vestigial teeth in front of the molars, known as "wolf" teeth, which are generally removed because they can interfere with the bit. There is an empty interdental space between the incisors and the molars where the bit rests directly on the gums, or "bars" of the horse's mouth when the horse is bridled.
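The tooth counts above can be tallied in a few lines. This is a simplified sketch: counts vary between individuals, wolf teeth (0–4) are often removed, and mares occasionally develop canines as well.

```python
# Adult tooth counts as described above.
INCISORS = 12               # front of the mouth, for biting off grass
PREMOLARS_AND_MOLARS = 24   # chewing teeth at the back
CANINES = 4                 # "tushes", typical of stallions and geldings

def adult_tooth_count(male: bool, wolf_teeth: int = 0) -> int:
    """Approximate adult tooth count; wolf teeth are vestigial (0-4)."""
    assert 0 <= wolf_teeth <= 4
    return INCISORS + PREMOLARS_AND_MOLARS + (CANINES if male else 0) + wolf_teeth

# A stallion or gelding with no wolf teeth: 40 teeth; a typical mare: 36.
```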
An estimate of a horse's age can be made from looking at its teeth. The teeth continue to erupt throughout life and are worn down by grazing. Therefore, the incisors show changes as the horse ages; they develop a distinct wear pattern, changes in tooth shape, and changes in the angle at which the chewing surfaces meet. This allows a very rough estimate of a horse's age, although diet and veterinary care can also affect the rate of tooth wear.
Horses are herbivores with a digestive system adapted to a forage diet of grasses and other plant material, consumed steadily throughout the day. Therefore, compared to humans, they have a relatively small stomach but very long intestines to facilitate a steady flow of nutrients. A horse will eat of food per day and, under normal use, drink of water. Horses are not ruminants; they have only one stomach, like humans, but unlike humans, they can utilize cellulose, a major component of grass. Horses are hindgut fermenters: cellulose fermentation by symbiotic bacteria occurs in the cecum, or "water gut", which food passes through before reaching the large intestine. Horses cannot vomit, so digestion problems can quickly cause colic, a leading cause of death.
Horses' senses reflect their status as prey animals, which must be aware of their surroundings at all times. They have the largest eyes of any land mammal, and are lateral-eyed, meaning that their eyes are positioned on the sides of their heads. This means that horses have a range of vision of more than 350°, with approximately 65° of this being binocular vision and the remaining 285° monocular vision. Horses have excellent day and night vision, but they have two-color, or dichromatic, vision; their color vision is somewhat like red-green color blindness in humans, in which certain colors, especially red and related colors, appear as a shade of green.
Their sense of smell, while much better than that of humans, is not quite as good as that of a dog. It is believed to play a key role in the social interactions of horses as well as in detecting other key scents in the environment. Horses have two olfactory centers. The first is in the nostrils and nasal cavity, which analyze a wide range of odors. The second, located under the nasal cavity, comprises the vomeronasal organs, also called Jacobson's organs. These have a separate nerve pathway to the brain and appear to primarily analyze pheromones.
A horse's hearing is good, and the pinna of each ear can rotate up to 180°, giving the potential for 360° hearing without having to move the head. Noise impacts the behavior of horses and certain kinds of noise may contribute to stress: A 2013 study in the UK indicated that stabled horses were calmest in a quiet setting, or if listening to country or classical music, but displayed signs of nervousness when listening to jazz or rock music. This study also recommended keeping music under a volume of 21 decibels. An Australian study found that stabled racehorses listening to talk radio had a higher rate of gastric ulcers than horses listening to music, and racehorses stabled where a radio was played had a higher overall rate of ulceration than horses stabled where there was no radio playing.
Horses have a great sense of balance, due partly to their ability to feel their footing and partly to highly developed proprioception—the unconscious sense of where the body and limbs are at all times. A horse's sense of touch is well-developed. The most sensitive areas are around the eyes, ears, and nose. Horses are able to sense contact as subtle as an insect landing anywhere on the body.
Horses have an advanced sense of taste, which allows them to sort through fodder and choose what they would most like to eat, and their prehensile lips can easily sort even small grains. Horses generally will not eat poisonous plants; however, there are exceptions: horses will occasionally eat toxic amounts of poisonous plants even when there is adequate healthy food.
All horses move naturally with four basic gaits: the four-beat walk, which averages ; the two-beat trot or jog at (faster for harness racing horses); the canter or lope, a three-beat gait that is ; and the gallop. The gallop averages , but the world record for a horse galloping over a short, sprint distance is . Besides these basic gaits, some horses perform a two-beat pace, instead of the trot. There also are several four-beat "ambling" gaits that are approximately the speed of a trot or pace, though smoother to ride. These include the lateral rack, running walk, and tölt as well as the diagonal fox trot. Ambling gaits are often genetic in some breeds, known collectively as gaited horses. Often, gaited horses replace the trot with one of the ambling gaits.
Horses are prey animals with a strong fight-or-flight response. Their first reaction to a threat is to startle and usually flee, although they will stand their ground and defend themselves when flight is impossible or if their young are threatened. They also tend to be curious; when startled, they will often hesitate an instant to ascertain the cause of their fright, and may not always flee from something that they perceive as non-threatening. Most light horse riding breeds were developed for speed, agility, alertness and endurance: natural qualities inherited from their wild ancestors. However, through selective breeding, some breeds of horses are quite docile, particularly certain draft horses.
Horses are herd animals, with a clear hierarchy of rank, led by a dominant individual, usually a mare. They are also social creatures that are able to form companionship attachments to their own species and to other animals, including humans. They communicate in various ways, including vocalizations such as nickering or whinnying, mutual grooming, and body language. Many horses will become difficult to manage if they are isolated, but with training, horses can learn to accept a human as a companion, and thus be comfortable away from other horses. However, when confined with insufficient companionship, exercise, or stimulation, individuals may develop stable vices, an assortment of bad habits, mostly stereotypies of psychological origin, that include wood chewing, wall kicking, "weaving" (rocking back and forth), and other problems.
Studies have indicated that horses perform a number of cognitive tasks on a daily basis, meeting mental challenges that include food procurement and identification of individuals within a social system. They also have good spatial discrimination abilities. They are naturally curious and apt to investigate things they have not seen before. Studies have assessed equine intelligence in areas such as problem solving, speed of learning, and memory. Horses excel at simple learning, but also are able to use more advanced cognitive abilities that involve categorization and concept learning. They can learn using habituation, desensitization, classical conditioning, and operant conditioning, and positive and negative reinforcement. One study has indicated that horses can differentiate between "more or less" if the quantity involved is less than four.
Domesticated horses may face greater mental challenges than wild horses, because they live in artificial environments that prevent instinctive behavior whilst also learning tasks that are not natural. Horses are animals of habit that respond well to regimentation, and respond best when the same routines and techniques are used consistently. One trainer believes that "intelligent" horses are reflections of intelligent trainers who effectively use response conditioning techniques and positive reinforcement to train in the style that best fits with an individual animal's natural inclinations.
Horses are mammals, and as such are warm-blooded, or endothermic creatures, as opposed to cold-blooded, or poikilothermic animals. However, these words have developed a separate meaning in the context of equine terminology, used to describe temperament, not body temperature. For example, the "hot-bloods", such as many race horses, exhibit more sensitivity and energy, while the "cold-bloods", such as most draft breeds, are quieter and calmer. Sometimes "hot-bloods" are classified as "light horses" or "riding horses", with the "cold-bloods" classified as "draft horses" or "work horses".
"Hot blooded" breeds include "oriental horses" such as the Akhal-Teke, Arabian horse, Barb and now-extinct Turkoman horse, as well as the Thoroughbred, a breed developed in England from the older oriental breeds. Hot bloods tend to be spirited, bold, and learn quickly. They are bred for agility and speed. They tend to be physically refined—thin-skinned, slim, and long-legged. The original oriental breeds were brought to Europe from the Middle East and North Africa when European breeders wished to infuse these traits into racing and light cavalry horses.
Muscular, heavy draft horses are known as "cold bloods", as they are bred not only for strength, but also to have the calm, patient temperament needed to pull a plow or a heavy carriage full of people. They are sometimes nicknamed "gentle giants". Well-known draft breeds include the Belgian and the Clydesdale. Some, like the Percheron, are lighter and livelier, developed to pull carriages or to plow large fields in drier climates. Others, such as the Shire, are slower and more powerful, bred to plow fields with heavy, clay-based soils. The cold-blooded group also includes some pony breeds.
"Warmblood" breeds, such as the Trakehner or Hanoverian, developed when European carriage and war horses were crossed with Arabians or Thoroughbreds, producing a riding horse with more refinement than a draft horse, but greater size and milder temperament than a lighter breed. Certain pony breeds with warmblood characteristics have been developed for smaller riders. Warmbloods are considered a "light horse" or "riding horse".
Today, the term "Warmblood" refers to a specific subset of sport horse breeds that are used for competition in dressage and show jumping. Strictly speaking, the term "warm blood" refers to any cross between cold-blooded and hot-blooded breeds. Examples include breeds such as the Irish Draught or the Cleveland Bay. The term was once used to refer to breeds of light riding horse other than Thoroughbreds or Arabians, such as the Morgan horse.
Horses are able to sleep both standing up and lying down. In an adaptation from life in the wild, horses are able to enter light sleep by using a "stay apparatus" in their legs, allowing them to doze without collapsing. Horses sleep better when in groups because some animals will sleep while others stand guard to watch for predators. A horse kept alone will not sleep well because its instincts are to keep a constant eye out for danger.
Unlike humans, horses do not sleep in a solid, unbroken period of time, but take many short periods of rest. Horses spend four to fifteen hours a day in standing rest, and from a few minutes to several hours lying down. Total sleep time in a 24-hour period may range from several minutes to a couple of hours, mostly in short intervals of about 15 minutes each. The average sleep time of a domestic horse is said to be 2.9 hours per day.
Horses must lie down to reach REM sleep. They only have to lie down for an hour or two every few days to meet their minimum REM sleep requirements. However, if a horse is never allowed to lie down, after several days it will become sleep-deprived, and in rare cases may suddenly collapse as it involuntarily slips into REM sleep while still standing. This condition differs from narcolepsy, although horses may also suffer from that disorder.
The horse adapted to survive in areas of wide-open terrain with sparse vegetation, surviving in an ecosystem where other large grazing animals, especially ruminants, could not. Horses and other equids are odd-toed ungulates of the order Perissodactyla, a group of mammals that was dominant during the Tertiary period. In the past, this order contained 14 families, but only three—Equidae (the horse and related species), Tapiridae (the tapir), and Rhinocerotidae (the rhinoceroses)—have survived to the present day.
The earliest known member of the family Equidae was the "Hyracotherium", which lived between 45 and 55 million years ago, during the Eocene epoch. It had four toes on each front foot and three toes on each back foot. The extra toe on the front feet soon disappeared with the "Mesohippus", which lived 32 to 37 million years ago. Over time, the extra side toes shrank in size until they vanished. All that remains of them in modern horses is a set of small vestigial bones on the leg below the knee, known informally as splint bones. Their legs also lengthened as their toes disappeared, until they were hooved animals capable of running at great speed. By about 5 million years ago, the modern "Equus" had evolved. Equid teeth also evolved from browsing on soft, tropical plants to adapt to browsing of drier plant material, then to grazing of tougher plains grasses. Thus proto-horses changed from leaf-eating forest-dwellers to grass-eating inhabitants of semi-arid regions worldwide, including the steppes of Eurasia and the Great Plains of North America.
By about 15,000 years ago, "Equus ferus" was a widespread holarctic species. Horse bones from this time period, the late Pleistocene, are found in Europe, Eurasia, Beringia, and North America. Yet between 10,000 and 7,600 years ago, the horse became extinct in North America and rare elsewhere. The reasons for this extinction are not fully known, but one theory notes that extinction in North America paralleled human arrival. Another theory points to climate change, noting that approximately 12,500 years ago, the grasses characteristic of a steppe ecosystem gave way to shrub tundra, which was covered with unpalatable plants.
A truly wild horse is a species or subspecies with no ancestors that were ever domesticated. Therefore, most "wild" horses today are actually feral horses, animals that escaped or were turned loose from domestic herds and the descendants of those animals. Only two never-domesticated subspecies, the tarpan and the Przewalski's horse, survived into recorded history and only the latter survives today.
The Przewalski's horse ("Equus ferus przewalskii"), named after the Russian explorer Nikolai Przhevalsky, is a rare Asian animal. It is also known as the Mongolian wild horse; Mongolian people know it as the "taki", and the Kyrgyz people call it a "kirtag". The subspecies was presumed extinct in the wild between 1969 and 1992, while a small breeding population survived in zoos around the world. In 1992, it was reestablished in the wild due to the conservation efforts of numerous zoos. Today, a small wild breeding population exists in Mongolia. There are additional animals still maintained at zoos throughout the world.
The tarpan or European wild horse ("Equus ferus ferus") was found in Europe and much of Asia. It survived into the historical era but became extinct in 1909, when the last captive individual died in a Russian zoo. Thus, the genetic line was lost. Attempts have been made to recreate the tarpan; these have resulted in horses with outward physical similarities, but ones nonetheless descended from domesticated ancestors and not true wild horses.
Periodically, populations of horses in isolated areas are speculated to be relict populations of wild horses, but generally have been proven to be feral or domestic. For example, the Riwoche horse of Tibet was proposed as such, but testing did not reveal genetic differences from domesticated horses. Similarly, the Sorraia of Portugal was proposed as a direct descendant of the Tarpan based on shared characteristics, but genetic studies have shown that the Sorraia is more closely related to other horse breeds and that the outward similarity is an unreliable measure of relatedness.
Besides the horse, there are six other species of genus "Equus" in the Equidae family. These are the ass or donkey, "Equus asinus"; the mountain zebra, "Equus zebra"; plains zebra, "Equus quagga"; Grévy's zebra, "Equus grevyi"; the kiang, "Equus kiang"; and the onager, "Equus hemionus".
Horses can crossbreed with other members of their genus. The most common hybrid is the mule, a cross between a "jack" (male donkey) and a mare. A related hybrid, a hinny, is a cross between a stallion and a jenny (female donkey). Other hybrids include the zorse, a cross between a zebra and a horse. With rare exceptions, most hybrids are sterile and cannot reproduce.
Domestication of the horse most likely took place in central Asia prior to 3500 BC. Two major sources of information are used to determine where and when the horse was first domesticated and how the domesticated horse spread around the world. The first source is based on palaeological and archaeological discoveries; the second source is a comparison of DNA obtained from modern horses to that from bones and teeth of ancient horse remains.
The earliest archaeological evidence for the domestication of the horse comes from sites in Ukraine and Kazakhstan, dating to approximately 3500–4000 BC. By 3000 BC, the horse was completely domesticated, and by 2000 BC there was a sharp increase in the number of horse bones found in human settlements in northwestern Europe, indicating the spread of domesticated horses throughout the continent. The most recent, and most conclusive, evidence of domestication comes from sites where horse remains were interred with chariots in graves of the Sintashta and Petrovka cultures c. 2100 BC.
Domestication is also studied by using the genetic material of present-day horses and comparing it with the genetic material present in the bones and teeth of horse remains found in archaeological and palaeological excavations. The variation in the genetic material shows that very few wild stallions contributed to the domestic horse, while many mares were part of early domesticated herds. This is reflected in the difference in genetic variation between the DNA that is passed on along the paternal, or sire line (Y-chromosome) versus that passed on along the maternal, or dam line (mitochondrial DNA). There are very low levels of Y-chromosome variability, but a great deal of genetic variation in mitochondrial DNA. There is also regional variation in mitochondrial DNA due to the inclusion of wild mares in domestic herds. Another characteristic of domestication is an increase in coat color variation. In horses, this increased dramatically between 5000 and 3000 BC.
Before the availability of DNA techniques to resolve the questions related to the domestication of the horse, various hypotheses were proposed. One classification was based on body types and conformation, suggesting the presence of four basic prototypes that had adapted to their environment prior to domestication. Another hypothesis held that the four prototypes originated from a single wild species and that all different body types were entirely a result of selective breeding after domestication. However, the lack of a detectable substructure in the horse has resulted in a rejection of both hypotheses.
Feral horses are born and live in the wild, but are descended from domesticated animals. Many populations of feral horses exist throughout the world. Studies of feral herds have provided useful insights into the behavior of prehistoric horses, as well as greater understanding of the instincts and behaviors that drive horses that live in domesticated conditions.
There are also semi-feral horses in many parts of the world, such as Dartmoor and the New Forest in the UK, where the animals are all privately owned but live for significant amounts of time in "wild" conditions on undeveloped, often public, lands. Owners of such animals often pay a fee for grazing rights.
The concept of purebred bloodstock and a controlled, written breed registry has come to be particularly significant in modern times. Sometimes purebred horses are incorrectly called "thoroughbreds"; Thoroughbred is a specific breed of horse, while a "purebred" is a horse (or any other animal) with a defined pedigree recognized by a breed registry. Horse breeds are groups of horses with distinctive characteristics that are transmitted consistently to their offspring, such as conformation, color, performance ability, or disposition. These inherited traits result from a combination of natural crosses and artificial selection methods. Horses have been selectively bred since their domestication. An early example of people who practiced selective horse breeding were the Bedouin, who had a reputation for careful practices, keeping extensive pedigrees of their Arabian horses and placing great value upon pure bloodlines. These pedigrees were originally transmitted via an oral tradition. In the 14th century, Carthusian monks of southern Spain kept meticulous pedigrees of bloodstock lineages still found today in the Andalusian horse.
Breeds developed due to a need for "form to function", the necessity to develop certain characteristics in order to perform a particular type of work. Thus, a powerful but refined breed such as the Andalusian developed as a riding horse with an aptitude for dressage. Heavy draft horses were developed out of a need to perform demanding farm work and pull heavy wagons. Other horse breeds were developed specifically for light agricultural work, carriage and road work, various sport disciplines, or simply as pets. Some breeds developed through centuries of crossing other breeds, while others descended from a single foundation sire, or other limited or restricted foundation bloodstock. One of the earliest formal registries was the General Stud Book for Thoroughbreds, which began in 1791 and traced back to the foundation bloodstock for the breed. There are more than 300 horse breeds in the world today.
Worldwide, horses play a role within human cultures and have done so for millennia. Horses are used for leisure activities, sports, and working purposes. The Food and Agriculture Organization (FAO) estimates that in 2008 there were almost 59,000,000 horses in the world, with around 33,500,000 in the Americas, 13,800,000 in Asia, 6,300,000 in Europe, and smaller numbers in Africa and Oceania. There are estimated to be 9,500,000 horses in the United States alone. The American Horse Council estimates that horse-related activities have a direct impact on the economy of the United States of over $39 billion, and when indirect spending is considered, the impact is over $102 billion. In a 2004 poll conducted by Animal Planet, more than 50,000 viewers from 73 countries voted for the horse as the world's fourth favorite animal.
Communication between human and horse is paramount in any equestrian activity; to aid this process horses are usually ridden with a saddle on their backs to assist the rider with balance and positioning, and a bridle or related headgear to assist the rider in maintaining control. Sometimes horses are ridden without a saddle, and occasionally, horses are trained to perform without a bridle or other headgear. Many horses are also driven, which requires a harness, bridle, and some type of vehicle.
Historically, equestrians honed their skills through games and races. Equestrian sports provided entertainment for crowds and honed the excellent horsemanship that was needed in battle. Many sports, such as dressage, eventing and show jumping, have origins in military training, which were focused on control and balance of both horse and rider. Other sports, such as rodeo, developed from practical skills such as those needed on working ranches and stations. Sport hunting from horseback evolved from earlier practical hunting techniques. Horse racing of all types evolved from impromptu competitions between riders or drivers. All forms of competition, requiring demanding and specialized skills from both horse and rider, resulted in the systematic development of specialized breeds and equipment for each sport. The popularity of equestrian sports through the centuries has resulted in the preservation of skills that would otherwise have disappeared after horses stopped being used in combat.
Horses are trained to be ridden or driven in a variety of sporting competitions. Examples include show jumping, dressage, three-day eventing, competitive driving, endurance riding, gymkhana, rodeos, and fox hunting. Horse shows, which have their origins in medieval European fairs, are held around the world. They host a huge range of classes, covering all of the mounted and harness disciplines, as well as "In-hand" classes where the horses are led, rather than ridden, to be evaluated on their conformation. The method of judging varies with the discipline, but winning usually depends on style and ability of both horse and rider.
Sports such as polo do not judge the horse itself, but rather use the horse as a partner for human competitors as a necessary part of the game. Although the horse requires specialized training to participate, the details of its performance are not judged, only the result of the rider's actions—be it getting a ball through a goal or some other task. Examples of these sports of partnership between human and horse include jousting, in which the main goal is for one rider to unseat the other, and buzkashi, a team game played throughout Central Asia, the aim being to capture a goat carcass while on horseback.
Horse racing is an equestrian sport and major international industry, watched in almost every nation of the world. There are three types: "flat" racing; steeplechasing, i.e. racing over jumps; and harness racing, where horses trot or pace while pulling a driver in a small, light cart known as a sulky. A major part of horse racing's economic importance lies in the gambling associated with it.
There are certain jobs that horses do very well, and no technology has yet developed to fully replace them. For example, mounted police horses are still effective for certain types of patrol duties and crowd control. Cattle ranches still require riders on horseback to round up cattle that are scattered across remote, rugged terrain. Search and rescue organizations in some countries depend upon mounted teams to locate people, particularly hikers and children, and to provide disaster relief assistance. Horses can also be used in areas where it is necessary to avoid vehicular disruption to delicate soil, such as nature reserves. They may also be the only form of transport allowed in wilderness areas. Horses are quieter than motorized vehicles. Law enforcement officers such as park rangers or game wardens may use horses for patrols, and horses or mules may also be used for clearing trails or other work in areas of rough terrain where vehicles are less effective.
Although machinery has replaced horses in many parts of the world, an estimated 100 million horses, donkeys and mules are still used for agriculture and transportation in less developed areas. This number includes around 27 million working animals in Africa alone. Some land management practices such as cultivating and logging can be efficiently performed with horses. In agriculture, less fossil fuel is used and increased environmental conservation occurs over time with the use of draft animals such as horses. Logging with horses can result in reduced damage to soil structure and less damage to trees due to more selective logging.
Horses have been used in warfare for most of recorded history. The first archaeological evidence of horses used in warfare dates to between 4000 and 3000 BC, and the use of horses in warfare was widespread by the end of the Bronze Age. Although mechanization has largely replaced the horse as a weapon of war, horses are still seen today in limited military uses, mostly for ceremonial purposes, or for reconnaissance and transport activities in areas of rough terrain where motorized vehicles are ineffective. Horses have been used in the 21st century by the Janjaweed militias in the War in Darfur.
Modern horses are often used to reenact many of their historical work purposes. Horses are used, complete with equipment that is authentic or a meticulously recreated replica, in various live action historical reenactments of specific periods of history, especially recreations of famous battles. Horses are also used to preserve cultural traditions and for ceremonial purposes. Countries such as the United Kingdom still use horse-drawn carriages to convey royalty and other VIPs to and from certain culturally significant events. Public exhibitions are another example, such as the Budweiser Clydesdales, seen in parades and other public settings, a team of draft horses that pull a beer wagon similar to that used before the invention of the modern motorized truck.
Horses are frequently used in television, films and literature. They are sometimes featured as a major character in films about particular animals, but also used as visual elements that assure the accuracy of historical stories. Both live horses and iconic images of horses are used in advertising to promote a variety of products. The horse frequently appears in coats of arms in heraldry, in a variety of poses and equipment. The mythologies of many cultures, including Greco-Roman, Hindu, Islamic, and Norse, include references to both normal horses and those with wings or additional limbs, and multiple myths also call upon the horse to draw the chariots of the Moon and Sun. The horse also appears in the 12-year cycle of animals in the Chinese zodiac related to the Chinese calendar.
People of all ages with physical and mental disabilities obtain beneficial results from an association with horses. Therapeutic riding is used to mentally and physically stimulate disabled persons and help them improve their lives through improved balance and coordination, increased self-confidence, and a greater feeling of freedom and independence. The benefits of equestrian activity for people with disabilities have also been recognized with the addition of equestrian events to the Paralympic Games and recognition of para-equestrian events by the International Federation for Equestrian Sports (FEI). Hippotherapy and therapeutic horseback riding are names for different physical, occupational, and speech therapy treatment strategies that utilize equine movement. In hippotherapy, a therapist uses the horse's movement to improve a patient's cognitive, coordination, balance, and fine motor skills, whereas therapeutic horseback riding uses specific riding skills.
Horses also provide psychological benefits to people whether they actually ride or not. "Equine-assisted" or "equine-facilitated" therapy is a form of experiential psychotherapy that uses horses as companion animals to assist people with mental illness, including anxiety disorders, psychotic disorders, mood disorders, behavioral difficulties, and those who are going through major life changes. There are also experimental programs using horses in prison settings. Exposure to horses appears to improve the behavior of inmates and help reduce recidivism when they leave.
Throughout history, horses have been a source of raw material for many products made by humans, including byproducts from the slaughter of horses as well as materials collected from living horses.
Products collected from living horses include mare's milk, used by people with large horse herds, such as the Mongols, who let it ferment to produce kumis. Horse blood was once used as food by the Mongols and other nomadic tribes, who found it a convenient source of nutrition when traveling. Drinking their own horses' blood allowed the Mongols to ride for extended periods of time without stopping to eat. The drug Premarin is a mixture of estrogens extracted from the urine of pregnant mares (pregnant mares' urine), and was previously a widely used drug for hormone replacement therapy. The tail hair of horses can be used for making bows for string instruments such as the violin, viola, cello, and double bass.
Horse meat has been used as food for humans and carnivorous animals throughout the ages. Approximately 5 million horses are slaughtered each year for meat worldwide. It is eaten in many parts of the world, though consumption is taboo in some cultures, and a subject of political controversy in others. Horsehide leather has been used for boots, gloves, jackets, baseballs, and baseball gloves. Horse hooves can also be used to produce animal glue. Horse bones can be used to make implements. Specifically, in Italian cuisine, the horse tibia is sharpened into a probe called a "spinto", which is used to test the readiness of a (pig) ham as it cures. In Asia, the saba is a horsehide vessel used in the production of kumis.
Horses are grazing animals, and their major source of nutrients is good-quality forage from hay or pasture. They can consume approximately 2% to 2.5% of their body weight in dry feed each day. Therefore, an adult horse could eat up to of food. Sometimes, concentrated feed such as grain is fed in addition to pasture or hay, especially when the animal is very active. When grain is fed, equine nutritionists recommend that 50% or more of the animal's diet by weight should still be forage.
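To make the 2%–2.5% feed guideline above concrete, here is a minimal worked example in Python. The 450 kg body weight is an illustrative assumption, not a figure from the text.

```python
def daily_feed_range(body_weight_kg: float) -> tuple[float, float]:
    """Return the (low, high) daily dry-feed intake in kilograms,
    using the guideline that a horse eats roughly 2% to 2.5% of
    its body weight in dry feed each day."""
    return (body_weight_kg * 0.02, body_weight_kg * 0.025)

# Hypothetical 450 kg adult horse: roughly 9 to 11.25 kg of dry feed per day.
low, high = daily_feed_range(450)
```

Under the same guideline, at least half of that daily intake by weight should still come from forage even when grain is fed.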
Horses require a plentiful supply of clean water, a minimum of to per day. Although horses are adapted to live outside, they require shelter from the wind and precipitation, which can range from a simple shed or shelter to an elaborate stable.
Horses require routine hoof care from a farrier, as well as vaccinations to protect against various diseases, and dental examinations from a veterinarian or a specialized equine dentist. If horses are kept inside in a barn, they require regular daily exercise for their physical health and mental well-being. When turned outside, they require well-maintained, sturdy fences to be safely contained. Regular grooming is also helpful to help the horse maintain good health of the hair coat and underlying skin.
Gerald Gardner (Wiccan)
Gerald Brosseau Gardner (1884–1964), also known by the craft name Scire, was an English Wiccan, as well as an author and an amateur anthropologist and archaeologist. He was instrumental in bringing the Contemporary Pagan religion of Wicca to public attention, writing some of its definitive religious texts and founding the tradition of Gardnerian Wicca.
Born into an upper-middle-class family in Blundellsands, Lancashire, Gardner spent much of his childhood abroad in Madeira. In 1900, he moved to colonial Ceylon, and then in 1911 to Malaya, where he worked as a civil servant, independently developing an interest in the native peoples and writing papers and a book about their magical practices. After his retirement in 1936, he travelled to Cyprus, penning the novel "A Goddess Arrives" before returning to England. Settling down near the New Forest, he joined an occult group, the Rosicrucian Order Crotona Fellowship, through which he said he had encountered the New Forest coven into which he was initiated in 1939. Believing the coven to be a survival of the pre-Christian witch-cult discussed in the works of Margaret Murray, he decided to revive the faith, supplementing the coven's rituals with ideas borrowed from Freemasonry, ceremonial magic and the writings of Aleister Crowley to form the Gardnerian tradition of Wicca.
Moving to London in 1945, he became intent on propagating this religion, attracting media attention and writing about it in "High Magic's Aid" (1949), "Witchcraft Today" (1954) and "The Meaning of Witchcraft" (1959). Founding a Wiccan group known as the Bricket Wood coven, he introduced a string of High Priestesses into the religion, including Doreen Valiente, Lois Bourne, Patricia Crowther and Eleanor Bone, through whom the Gardnerian community spread throughout Britain and subsequently into Australia and the United States in the late 1950s and early 1960s. Involved for a time with Cecil Williamson, Gardner also became director of the Museum of Magic and Witchcraft on the Isle of Man, which he ran until his death.
Gardner is internationally recognised as the "Father of Wicca" among the Pagan and occult communities. His claims regarding the New Forest coven have been widely scrutinised by historians and biographers, including Aidan Kelly, Ronald Hutton and Philip Heselton.
Gardner's family was wealthy and upper middle class, running a family firm, Joseph Gardner and Sons, which described itself as "the oldest private company in the timber trade within the British Empire." Specialising in the import of hardwood, the company had been founded in the mid-18th century by Edmund Gardner (b. 1721), an entrepreneur who would subsequently become a Freeman of Liverpool. Gerald's father, William Robert Gardner (1844–1935) had been the youngest son of Joseph Gardner (b. 1791), after whom the firm had been renamed, and who with his wife Maria had had five sons and three daughters. In 1867, William had been sent to New York City, in order to further the interests of the family firm. Here, he had met an American, Louise Burguelew Ennis, the daughter of a wholesale stationer; entering a relationship, they were married in Manhattan on 25 November 1868. After a visit to England, the couple returned to the US, where they settled in Mott Haven, Morrisania in New York State. It was here that their first child, Harold Ennis Gardner, was born in 1870. At some point in the next two years they moved back to England, by 1873 settling into The Glen, a large Victorian house in Blundellsands in Lancashire, north-west England, which was developing into a wealthy suburb of Liverpool. It was here that their second child, Robert "Bob" Marshall Gardner, was born in 1874.
In 1876 the family moved into one of the neighbouring houses, Ingle Lodge, and it was here that the couple's third son, Gerald Brosseau Gardner, was born on Friday 13 June 1884. A fourth child, Francis Douglas Gardner, was then born in 1886. Gerald would rarely see Harold, who went on to study Law at the University of Oxford, but saw more of Bob, who drew pictures for him, and Douglas, with whom he shared his nursery. The Gardners employed an Irish nursemaid named Josephine "Com" McCombie, who was entrusted with taking care of the young Gerald; she would subsequently become the dominant figure of his childhood, spending far more time with him than his parents. Gardner suffered from asthma from a young age, having particular difficulty in the cold Lancashire winters. His nursemaid offered to take him to warmer climates abroad at his father's expense in the hope that his condition would improve. Subsequently, in summer 1888, Gerald and Com travelled via London to Nice in the south of France. After several more years spent in the Mediterranean, in 1891 they went to the Canary Islands, and it was here that Gardner first developed his lifelong interest in weaponry. From there, they then went on to Accra in the Gold Coast (modern Ghana). Accra was followed by a visit to Funchal on the Portuguese colony of Madeira; they would spend most of the next nine years on the island, only returning to England for three or four months in the summer.
According to Gardner's first biographer, Jack Bracelin, Com was very flirtatious and "clearly looked on these trips as mainly manhunts", viewing Gardner as a nuisance. As a result, he was largely left to his own devices, and spent his time going out, meeting new people and learning about foreign cultures. In Madeira, he also began collecting weapons, many of which were remnants from the Napoleonic Wars, displaying them on the wall of his hotel room. As a result of his illness and these foreign trips, Gardner ultimately never attended school, or gained any formal education. He taught himself to read by looking at copies of "The Strand Magazine", but his writing betrayed his poor education all his life, with highly eccentric spelling and grammar. A voracious reader, one of the books that most influenced him at the time was Florence Marryat's "There Is No Death" (1891), a discussion of spiritualism, from which he gained a firm belief in the existence of an afterlife.
In 1900, Com married David Elkington, one of her many suitors who owned a tea plantation in the British colony of Ceylon (modern Sri Lanka). It was agreed with the Gardners that Gerald would live with her on a tea plantation named Ladbroke Estate in Maskeliya district, where he could learn the tea trade. In 1901 Gardner and the Elkingtons lived briefly in a bungalow in Kandy, where a neighbouring bungalow had just been vacated by the occultists Aleister Crowley and Charles Henry Allan Bennett. At his father's expense, Gardner trained as a "creeper", or trainee planter, learning all about the growing of tea; although he disliked the "dreary endlessness" of the work, he enjoyed being outdoors and near to the forests. He lived with the Elkingtons until 1904, when he moved into his own bungalow and began earning a living working on the Non Pareil tea estate below the Horton Plains. He spent much of his spare time hunting deer and trekking through the local forests, becoming acquainted with the Singhalese natives and taking a great interest in their Buddhist beliefs. In December 1904, his parents and younger brother visited, with his father asking him to invest in a pioneering rubber plantation which Gardner was to manage; located near the village of Belihil Oya, it was known as the Atlanta Estate, but allowed him a great deal of leisure time. Exploring his interest in weaponry, in 1907 Gardner joined the Ceylon Planters Rifle Corps, a local volunteer force composed of European tea and rubber planters intent on protecting their interests from foreign aggression or domestic insurrection.
In 1907 Gardner returned to Britain for several months' leave, spending time with his family and joining the Legion of Frontiersmen, a militia founded to repel the threat of German invasion. During his visit, Gardner spent a lot of time with family relations known as the Surgenesons. Gardner became very friendly with this side of his family, whom his Anglican parents avoided because they were Methodists. According to Gardner, the Surgenesons readily talked about the paranormal with him; the patriarch of the family, Ted Surgeneson, believed that fairies were living in his garden and would say "I can often feel they're there, and sometimes I've seen them", though he readily admitted the possibility that it was all in his imagination. It was from the Surgenesons that Gardner claimed to have discovered a family rumour that his grandfather, Joseph, had been a practising witch, after being converted to the practice by his mistress. Another unconfirmed family belief repeated by Gardner was that a Scottish ancestor, Grissell Gairdner, had been burned as a witch in Newburgh in 1610.
Gardner returned to Ceylon in late 1907 and settled down to the routine of managing the rubber plantation. In 1910 he was initiated as an Apprentice Freemason into the Sphinx Lodge No. 107 in Colombo, affiliated with the Irish Grand Lodge. Gardner placed great importance on this new activity; in order to attend masonic meetings, he had to arrange a weekend's leave, walk 15 miles to the nearest railway station in Haputale, and then catch a train to the city. He entered into the second and third degrees of Freemasonry within the next month, but this enthusiasm seems soon to have waned, and he resigned the next year, probably because he intended to leave Ceylon. The experiment with rubber growing at the Atlanta Estate had proved relatively unsuccessful, and Gardner's father decided to sell the property in 1911, leaving Gerald unemployed.
That year, Gardner moved to British North Borneo, gaining employment as a rubber planter at the Mawo Estate at Membuket. However, he did not get on well with the plantation's manager, a racist named R. J. Graham who had wanted to deforest the entire local area. Instead Gardner became friendly with many of the locals, including the Dyak and Dusun people. An amateur anthropologist, Gardner was fascinated by the indigenous way of life, particularly the local forms of weaponry such as the "sumpitan". He was intrigued by the tattoos of the Dayaks and pictures of him in later life show large snake or dragon tattoos on his forearms, presumably obtained at this time. Taking a great interest in indigenous religious beliefs, Gardner told his first biographer that he had attended Dusun séances or healing rituals. He was unhappy with the working conditions and the racist attitudes of his colleagues, and when he developed malaria he felt that this was the last straw; he left Borneo and moved to Singapore, in what was then known as the Straits Settlements, part of British Malaya.
Arriving in Singapore, he initially planned to return to Ceylon, but was offered a job working as an assistant on a rubber plantation in Perak, northern Malaya, and decided to take it, working for the Borneo Company. Arriving in the area, he decided to supplement this income by purchasing his own estate, Bukit Katho, on which he could grow rubber; initially sized at 450 acres, Gardner purchased various pieces of adjacent land until it covered 600 acres. Here, Gardner made friends with an American man known as Cornwall, who had converted to Islam and married a local Malay woman. Through Cornwall, Gardner was introduced to many locals, whom he soon befriended, including members of the Senoi and Malay peoples. Cornwall invited Gardner to make the "Shahada", the Muslim confession of faith, which he did; it allowed him to gain the trust of locals, although he would never become a practising Muslim. Cornwall was however an unorthodox Muslim, and his interest in local peoples included their magical and spiritual beliefs, to which he also introduced Gardner, who took a particular interest in the "kris", a ritual knife with magical uses.
In 1915, Gardner again joined a local volunteer militia, the Malay States Volunteer Rifles. Although between 1914 and 1918 World War I was raging in Europe, its effects were little felt in Malaya, apart from the 1915 Singapore Mutiny. Gardner was keen to do more towards the war effort and in 1916 once again returned to Britain. He attempted to join the British Navy, but was turned down due to ill health. Unable to fight on the front lines, he began working as an orderly in the Voluntary Aid Detachment (VAD) in the First Western General Hospital, Fazakerley, located on the outskirts of Liverpool. He was working in the VAD when casualties came back from the Battle of the Somme and he was engaged in looking after patients and assisting in changing wound dressings. He soon had to give this up when his malaria returned, and so decided to return to Malaya in October 1916 because of the warmer climate.
He continued to manage the rubber plantation but after the end of the war, commodity prices dropped and by 1921 it was difficult to make a profit. He returned again to Britain, in what later biographer Philip Heselton speculated might have been an unsuccessful attempt to ask his father for money. Returning to Malaya, Gardner found that the Borneo Company had sacked him, and he was forced to find work with the Public Works Department. In September 1923 he successfully applied to the Office of Customs to become a government-inspector of rubber plantations, a job that involved a great amount of travelling around the country, something he enjoyed. After a brief but serious illness, the Johore government reassigned Gardner to an office in the Lands Office while he recovered, eventually being promoted to Principal Officer of Customs. In this capacity, he was made an Inspector of Rubber Shops, overseeing the regulation and sale of rubber in the country. In 1926 he was placed in charge of monitoring shops selling opium, noting frequent irregularities and a thriving illegal trade in the controlled substance; believing opium to be essentially harmless, Gardner probably took many bribes in this position, and there is evidence that he earned himself a small fortune by doing so.
Gardner's mother had died in 1920, but he had not returned to Britain on that occasion. However, in 1927 his father became very ill with dementia, and Gardner decided to visit him. On his return to Britain, Gardner began to investigate spiritualism and mediumship. He soon had several encounters which he attributed to spirits of deceased family members. Continuing to visit Spiritualist churches and séances, he was highly critical of much of what he saw, although he encountered several mediums he considered genuine. One medium apparently made contact with a deceased cousin of Gardner's, an event which impressed him greatly. His first biographer Jack Bracelin reports that this was a watershed in Gardner's life, and that a previous academic interest in spiritualism and life after death thereafter became a matter of firm personal belief for him. The very same evening (28 July 1927) after Gardner had met this medium, he met the woman he was to marry; Dorothea Frances Rosedale, known as Donna, a relation of his sister-in-law Edith. He asked her to marry him the next day and she agreed. Because his leave was coming to an end very soon, they married quickly on 16 August at St Jude's Church, Kensington, and then honeymooned in Ryde on the Isle of Wight, before heading via France to Malaya.
Arriving in the country, the couple settled into a bungalow at Bukit Japon in Johor Bahru. Here, he once more became involved in Freemasonry, joining the Johore Royal Lodge No. 3946, but had retired from it by April 1931. Gardner also returned to his old interests in the anthropology of Malaya, witnessing the magical practices performed by the locals, and he readily accepted a belief in magic. During his time in Malaya, Gardner became increasingly interested in local customs, particularly those involved in folk magic and weapons. Gardner was not only interested in the anthropology of Malaya, but also in its archaeology. He began excavations at the city of Johore Lama, alone and in secret, as the local Sultan considered archaeologists little better than grave-robbers. Prior to Gardner's investigations, no serious archaeological excavation had occurred at the city, though he himself soon unearthed four miles of earthworks, and uncovered finds that included tombs, pottery, and porcelain dating from Ming China. He went on to begin further excavations at the royal cemetery of Kota Tinggi, and the jungle city of Syong Penang. His finds were displayed as an exhibit on the "Early History of Johore" at the National Museum of Singapore, and several beads that he had discovered suggested that trade went on between the Roman Empire and the Malays, presumably, Gardner thought, via India. He also found gold coins originating from Johore and he published academic papers on both the beads and the coins.
By the early 1930s Gardner's activities had moved from those exclusively of a civil servant, and he began to think of himself more as a folklorist, archaeologist and anthropologist. He was encouraged in this by the director of the Raffles Museum (now the National Museum of Singapore) and by his election to Fellowship of the Royal Anthropological Institute in 1936. En route back to London in 1932 Gardner stopped off in Egypt and, armed with a letter of introduction, joined Sir Flinders Petrie who was excavating the site of Tall al-Ajjul in Palestine. Arriving in London in August 1932 he attended a conference on prehistory and protohistory at King's College London, attending at least two lectures which described the cult of the Mother Goddess. He also befriended the archaeologist and practising Pagan Alexander Keiller, known for his excavations at Avebury, who would encourage Gardner to join in with the excavations at Hembury Hill in Devon, also attended by Aileen Fox and Mary Leakey.
Returning to East Asia, he took a ship from Singapore to Saigon in French Indo-China, from where he travelled to Phnom Penh, visiting the Silver Pagoda. He then took a train to Hangzhou in China, before continuing on to Shanghai; because of the ongoing Chinese Civil War, the train did not stop throughout the entire journey, something that annoyed the passengers. In 1935, Gardner attended the Second Congress for Prehistoric Research in the Far East in Manila, Philippines, acquainting himself with several experts in the field. His main research interest lay in the Malay "kris" blade, which he unusually chose to spell "keris"; he eventually collected 400 examples and talked to natives about their magico-religious uses. Deciding to author a book on the subject, he wrote "Keris and Other Malay Weapons", being encouraged to do so by anthropologist friends; it would subsequently be edited into a readable form by Betty Lumsden Milne and published by the Singapore-based Progressive Publishing Company in 1936. It was well received by literary and academic circles in Malaya. In 1935, Gardner heard that his father had died, leaving him a bequest of £3,000. This assurance of financial independence may have led him to consider retirement, and as he was due for a long leave in 1936 the Johore Civil Service allowed him to retire slightly early, in January 1936. Gardner wanted to stay in Malaya, but he conceded to his wife Donna, who insisted that they return to England.
In 1936, Gardner and Donna left Malaya and headed for Europe. She proceeded straight to London, renting them a flat at 26 Charing Cross Road. Gardner visited Palestine, becoming involved in the archaeological excavations run by J.L. Starkey at Lachish. Here he grew particularly interested in a temple containing statues to both the male deity of Judeo-Christian theology and the pagan goddess Ashtoreth. From Palestine, Gardner went to Turkey, Greece, Hungary, and Germany. He eventually reached England, but soon went on a visit to Denmark to attend a conference on weaponry at the Christiansborg Palace, Copenhagen, during which he gave a talk on the "kris".
Returning to Britain, he found that the climate made him sick, leading him to register with a doctor, Edward A. Gregg, who recommended that he try nudism. Hesitant at first, Gardner first attended an indoor nudist club, the Lotus League in Finchley, North London, where he made several new friends and felt that the nudity cured his ailment. When summer came, he decided to visit an outdoor nudist club, that of Fouracres near the town of Bricket Wood in Hertfordshire, which he soon began to frequent. Through nudism, Gardner made a number of notable friends, including James Laver (1899–1975), who became the Keeper of Prints and Drawings at the Victoria and Albert Museum, and Cottie Arthur Burland (1905–1983), who was the Curator of the Department of Ethnography at the British Museum. Biographer Philip Heselton suggested that through the nudist scene Gardner may have also met Dion Byngham (1896–1990), a senior member of the Order of Woodcraft Chivalry who propounded a Contemporary Pagan religion known as Dionysianism. By the end of 1936, Gardner was finding his Charing Cross Road flat to be cramped, and moved into the block of flats at 32a Buckingham Palace Mansions.
Fearing the cold of the English winter, Gardner decided to sail to Cyprus in late 1936, remaining there into the following year. Visiting the Museum in Nicosia, he studied the Bronze Age swords of the island, successfully hafting one of them, on the basis of which he wrote a paper entitled "The Problem of the Cypriot Bronze Dagger Hilt", which would subsequently be translated into both French and Danish, being published in the journals of the Société Préhistorique Française and the Vaabenhistorisk Selskab respectively. Back in London, in September 1937, Gardner applied for and received a Doctorate of Philosophy from the Meta Collegiate Extension of the National Electronic Institute, a Nevada-based organisation widely regarded as a diploma mill, which sold academic degrees by post for a fee. He would subsequently style himself as "Dr. Gardner", despite the fact that academic institutions did not recognise his qualifications.
Planning to return to the Palestinian excavations the following winter, he was prevented from doing so when Starkey was murdered. Instead he decided to return to Cyprus. A believer in reincarnation, Gardner came to believe that he had lived on the island once before, in a previous life, subsequently buying a plot of land in Famagusta, planning to build a house on it, although this never came about. Influenced by his dreams, he wrote his first novel, "A Goddess Arrives", over the next few years. Revolving around an Englishman living in 1930s London named Robert Denvers who has recollections of a previous life as a Bronze Age Cypriot – an allusion to Gardner himself – the primary plot of "A Goddess Arrives" is set in ancient Cyprus and features a queen, Dayonis, who practices sorcery in an attempt to help her people defend themselves from invading Egyptians. The book was published in late 1939; biographer Philip Heselton called it "a very competent first work of fiction", with strong allusions to the build-up that preceded World War II. Returning to London, he helped to dig shelter trenches in Hyde Park as a part of the build-up to the war, also volunteering for the Air Raid Wardens' Service. Fearing the bombing of the city, Gardner and his wife soon moved to Highcliffe, just south of the New Forest in Hampshire. Here, they purchased a house built in 1923 named Southridge, situated on the corner of Highland Avenue and Elphinstone Road.
In Highcliffe, Gardner came across a building describing itself as the "First Rosicrucian Theatre in England". Having an interest in Rosicrucianism, a prominent magico-religious tradition within Western esotericism, Gardner decided to attend one of the plays performed by the group; in August 1939, Gardner took his wife to a theatrical performance based on the life of Pythagoras. An amateur thespian, she hated the performance, thinking the quality of both actors and script terrible, and she refused to go again. Unperturbed and hoping to learn more of Rosicrucianism, Gardner joined the group in charge of running the theatre, the Rosicrucian Order Crotona Fellowship, and began attending meetings held in their local "ashram". Founded in 1920 by George Alexander Sullivan, the Fellowship had been based upon a blend of Rosicrucianism, Theosophy, Freemasonry and his own personal innovation, and had moved to Christchurch in 1930.
As time went by, Gardner became critical of many of the Rosicrucian Order's practices; Sullivan's followers claimed that he was immortal, having formerly been the famous historical figures Pythagoras, Cornelius Agrippa and Francis Bacon. Gardner facetiously asked if he was also the Wandering Jew, much to the annoyance of Sullivan himself. Another belief held by the group that Gardner found amusing was that a lamp hanging from one of the ceilings was the disguised holy grail of Arthurian legend. Gardner's dissatisfaction with the group grew, particularly when in 1939, one of the group's leaders sent a letter out to all members in which she stated that war would not come. The very next day, Britain declared war on Germany, greatly unimpressing the increasingly cynical Gardner.
Alongside Rosicrucianism, Gardner had also been pursuing other interests. In 1939, Gardner joined the Folk-Lore Society; his first contribution to its journal "Folk-Lore" appeared in the June 1939 issue and described a box of witchcraft relics that he believed had belonged to the 17th-century "Witch-Finder General", Matthew Hopkins. Subsequently, in 1946 he would go on to become a member of the society's governing council, although most other members of the society were wary of him and his academic credentials. Gardner would also join the Historical Association, being elected Co-President of its Bournemouth and Christchurch branch in June 1944, following which he became a vocal supporter of the construction of a local museum for the Christchurch borough. He also involved himself in preparations for the impending war, joining the Air Raid Precautions (ARP) as a warden, where he soon rose to a position of local seniority, with his own house being assigned as the ARP post. In 1940, following the outbreak of conflict, he also tried to sign up for the Local Defence Volunteers, or "Home Guard", but was turned away because he was already an ARP warden. He managed to circumvent this restriction by joining his local Home Guard in the capacity of armourer, which was officially classified as technical staff. Gardner took a strong interest in the Home Guard, helping to arm his fellows from his own personal weaponry collection and personally manufacturing Molotov cocktails.
Although sceptical of the Rosicrucian Order, Gardner got on well with a group of individuals inside it who were "rather brow-beaten by the others, kept themselves to themselves." Gardner's biographer Philip Heselton theorised that this group consisted of Edith Woodford-Grimes (1887–1975), Susie Mason, her brother Ernie Mason, and their sister Rosetta Fudge, all of whom had originally come from Southampton before moving to the area around Highcliffe, where they joined the Order. According to Gardner, "unlike many of the others [in the Order], [they] had to earn their livings, were cheerful and optimistic and had a real interest in the occult". Gardner became "really very fond of them", remarking that he "would have gone through hell and high water even then for any of them." In particular he grew close to Woodford-Grimes, being invited over to her home to meet her daughter, and the two helped each other with their writing, Woodford-Grimes probably assisting Gardner in editing "A Goddess Arrives" prior to publication. Gardner would subsequently give her the nickname "Dafo", by which she would become better known.
According to Gardner's later account, one night in September 1939 they took him to a large house owned by "Old Dorothy" Clutterbuck, a wealthy local woman, where he was made to strip naked and taken through an initiation ceremony. Halfway through the ceremony, he heard the word "Wica", and he recognised it as an Old English word for "witch". He was already acquainted with Margaret Murray's theory of the Witch-cult, and later recalled: "I then knew that which I had thought burnt out hundreds of years ago still survived." This group, he claimed, were the New Forest coven, and he believed them to be one of the few surviving covens of the ancient, pre-Christian Witch-Cult religion. Subsequent research by the likes of Hutton and Heselton has shown that in fact the New Forest coven was probably only formed in the mid-1930s, based upon such sources as folk magic and the theories of Margaret Murray.
Gardner only ever described one of their rituals in depth, and this was an event that he termed "Operation Cone of Power". According to his own account, it took place in 1940 in a part of the New Forest and was designed to ward off the Nazis from invading Britain by magical means. Gardner claimed that a "Great Circle" was erected at night, with a "great cone of power" – a form of magical energy – being raised and sent to Berlin with the command of "you cannot cross the sea, you cannot cross the sea, you cannot come, you cannot come".
Throughout his time in the New Forest, Gardner had regularly travelled to London, keeping his flat at Buckingham Palace Mansions until mid-1939 and regularly visiting the Spielplatz nudist club. At Spielplatz he befriended Ross Nichols, whom he would later introduce to the Pagan religion of Druidry; Nichols would become enamoured with this faith, eventually founding the Order of Bards, Ovates and Druids. However, following the war, Gardner decided to return to London, moving into 47 Ridgemount Gardens, Bloomsbury in late 1944 or early 1945. Continuing his interest in nudism, in 1945 he purchased a plot of land in Fouracres, a nudist colony near the village of Bricket Wood in Hertfordshire that would soon be renamed Five Acres. As a result, he would become one of the major shareholders at the club, exercising significant power over administrative decisions, and was involved in a recruitment drive to obtain more members.
Between 1936 and 1939, Gardner befriended the Christian mystic J.S.M. Ward, proprietor of the Abbey Folk Park, Britain's oldest open-air museum. One of the exhibits was a 16th-century cottage that Ward had found near Ledbury, Herefordshire and had transported to his park, where he exhibited it as a "witch's cottage". Gardner made a deal with Ward, exchanging the cottage for Gardner's piece of land near Famagusta in Cyprus. The witch's cottage was dismantled and the parts transported to Bricket Wood, where they were reassembled on Gardner's land at Five Acres. In Midsummer 1947 he held a ceremony in the cottage as a form of house-warming, which Heselton speculated was probably based upon the ceremonial magic rites featured in "The Key of Solomon" grimoire.
Furthering his interest in esoteric Christianity, in August 1946 Gardner was ordained as a priest in the Ancient British Church, a fellowship open to anyone who considered themselves a monotheist. Gardner also took an interest in Druidry, joining the Ancient Druid Order (ADO) and attending its annual Midsummer rituals at Stonehenge. He also joined the Folk-Lore Society, being elected to their council in 1946, and that same year giving a talk on "Art Magic and Talismans". Nevertheless, many fellows – including Katherine Briggs – were dismissive of Gardner's ideas and his fraudulent academic credentials. In 1946 he also joined the Society for Psychical Research.
On May Day 1947, Gardner's friend Arnold Crowther introduced him to Aleister Crowley, the ceremonial magician who had founded the religion of Thelema in 1904. Shortly before his death, Crowley elevated Gardner to the IV° of Ordo Templi Orientis (O.T.O.) and issued a charter decreeing that Gardner could admit people into its Minerval degree. The charter itself was written in Gardner's handwriting and only signed by Crowley. From November 1947 to March 1948, Gardner and his wife toured the United States visiting relatives in Memphis, also visiting New Orleans, where Gardner hoped to learn about Voodoo. While Gardner was abroad, Crowley died, and as a result Gardner considered himself the head of the O.T.O. in Europe (a position accepted by Lady Frieda Harris). He met Crowley's successor, Karl Germer, in New York, though Gardner would soon lose interest in leading the O.T.O., and in 1951 he was replaced by Frederic Mellinger as the O.T.O.'s European representative.
Gardner hoped to spread Wicca, and described some of its practices in fictional form in "High Magic's Aid". Set in the twelfth century, the novel included scenes of ceremonial magic based on "The Key of Solomon". Published by the Atlantis Bookshop in July 1949, Gardner's manuscript had been edited into a publishable form by astrologer Madeline Montalban. Privately, he had also begun work on a scrapbook known as "Ye Bok of Ye Art Magical", in which he wrote down a number of Wiccan rituals and spells. This would prove to be the prototype for what he later termed a Book of Shadows. He also gained some of his first initiates, Barbara and Gilbert Vickers, who were initiated at some point between autumn 1949 and autumn 1950.
Gardner also came into contact with Cecil Williamson, who was intent on opening his own museum devoted to witchcraft; the result would be the Folk-lore Centre of Superstition and Witchcraft, opened in Castletown on the Isle of Man in 1951. Gardner and his wife moved to the island, where he took up the position of "resident witch". On 29 July, the "Sunday Pictorial" published an article about the museum in which Gardner declared "Of course I'm a witch. And I get great fun out of it." The museum was not a financial success, and the relationship between Gardner and Williamson deteriorated. In 1954, Gardner bought the museum from Williamson, who returned to England to found the rival Museum of Witchcraft, eventually settling it in Boscastle, Cornwall. Gardner renamed his exhibition the Museum of Magic and Witchcraft and continued running it up until his death. He also acquired a flat at 145 Holland Road, near Shepherd's Bush in West London, but nevertheless fled to warmer climates during the winter, where his asthma would not be so badly affected, for instance spending time in France, Italy, and the Gold Coast. From his base in London, he would frequent Atlantis bookshop, thereby encountering a number of other occultists, including Austin Osman Spare and Kenneth Grant, and he also continued his communication with Karl Germer until 1956.
In 1952, Gardner had begun to correspond with a young woman named Doreen Valiente. She eventually requested initiation into the Craft, and though Gardner was hesitant at first, he agreed that they could meet during the winter at the home of Edith Woodford-Grimes. Valiente got on well with both Gardner and Woodford-Grimes, and having no objections to either ritual nudity or scourging (which she had read about in a copy of Gardner's novel "High Magic's Aid" that he had given to her), she was initiated by Gardner into Wicca on Midsummer 1953. Valiente went on to join the Bricket Wood Coven. She soon rose to become the High Priestess of the coven and helped Gardner to revise his Book of Shadows, cutting out most of Crowley's influence.
In 1954, Gardner published a non-fiction book, "Witchcraft Today", containing a preface by Margaret Murray, who had published her theory of a surviving Witch-Cult in her 1921 book, "The Witch-Cult in Western Europe". In his book, Gardner not only espoused the survival of the Witch-Cult, but also his theory that the belief in faeries in Europe was due to a secretive pygmy race that lived alongside other communities, and that the Knights Templar had been initiates of the Craft. Alongside this book, Gardner began to increasingly court publicity, going so far as to invite the press to write articles about the religion. Many of these turned out very negatively for the religion; one declared "Witches Devil-Worship in London!", and another accused him of whitewashing witchcraft in his luring of people into covens. Gardner continued courting publicity despite the negative articles that many tabloids were producing, believing that only through publicity could more people become interested in witchcraft, thus preventing the "Old Religion", as he called it, from dying out.
In 1960, Gardner's official biography, entitled "Gerald Gardner: Witch", was published. It was written by a friend of his, the Sufi mystic Idries Shah, but used the name of one of Gardner's High Priests, Jack L. Bracelin, because Shah was wary about being associated with Witchcraft. In May of that year, Gardner travelled to Buckingham Palace, where he enjoyed a garden party in recognition of his years of service to the Empire in the Far East. Soon after his trip, Gardner's wife Donna died, and Gardner himself once again began to suffer badly from asthma. The following year he, along with Shah and Lois Bourne, travelled to the island of Majorca to holiday with the poet Robert Graves, whose "The White Goddess" would play a significant part in the burgeoning Wiccan religion. In 1963, Gardner decided to go to Lebanon over the winter. Whilst returning home aboard the ship "The Scottish Prince" on 12 February 1964, he suffered a fatal heart attack at the breakfast table. He was 79 years old. He was buried in Tunisia, the ship's next port of call, and his funeral was attended only by the ship's captain.
Gardner bequeathed the museum, all his artefacts, and the copyright to his books to one of his High Priestesses, Monique Wilson; several years later, she and her husband sold off the artefact collection to the American Ripley's Believe It or Not! organisation. Ripley's took the collection to America, where it was displayed in two museums before being sold off during the 1980s. Gardner had also left parts of his inheritance to Patricia Crowther, Doreen Valiente, Lois Bourne and Jack Bracelin, the latter inheriting the Fiveacres Nudist Club and taking over as full-time High Priest of the Bricket Wood coven.
Several years after Gardner's death, the Wiccan High Priestess Eleanor Bone visited North Africa and went looking for Gardner's grave. She discovered that the cemetery he was interred in was to be redeveloped, and so she raised enough money for his body to be moved to another cemetery in Tunis, where it currently remains. In 2007, a new plaque was attached to his grave, describing him as being "Father of Modern Wicca. Beloved of the Great Goddess".
Gardner only married once, to Donna, and several who knew him claimed that he was devoted to her. Indeed, after her death in 1960, he again began to suffer serious asthma attacks. Despite this, as many coven members slept over at his cottage due to living too far away to travel home safely, he was known to cuddle up to his young High Priestess, Dayonis, after rituals. The author Philip Heselton, who has extensively researched Wicca's origins, came to the conclusion that Gardner had held a long-term affair with Dafo, a theory expanded upon by Adrian Bott. Those who knew him within the modern witchcraft movement recalled how he was a firm believer in the therapeutic benefits of sunbathing. He also had several tattoos on his body, depicting magical symbols such as a snake, dragon, anchor and dagger. In his later life he wore a "heavy bronze bracelet... denoting the three degrees... of witchcraft" as well as a "large silver ring with... signs on it, which... represented his witch-name 'Scire', in the letters of the magical Theban alphabet."
According to Bricket Wood coven member Frederic Lamond, Gardner also used to comb his beard into a narrow barbiche and his hair into two horn-like peaks, giving him "a somewhat demonic appearance". Lamond thought that Gardner was "surprisingly lacking in charisma" for someone at the forefront of a religious movement.
Gardner was a supporter of the right wing Conservative Party, and for several years had been a member of the Highcliffe Conservative Association, as well as being an avid reader of the pro-Conservative newspaper, "The Daily Telegraph".
In a 1951 interview with a journalist from the "Sunday Pictorial" newspaper, Gardner claimed to hold a doctorate in philosophy from Singapore and a doctorate in literature from Toulouse. Later investigation by Doreen Valiente suggested that these claims were false: the University of Singapore did not exist at that time, and the University of Toulouse had no record of his receiving a doctorate. Valiente suggests that these claims may have been a form of compensation for his lack of formal education.
Valiente further criticises Gardner for his publicity-seeking – or at least his indiscretion. After a series of tabloid exposés, some members of his coven proposed some rules limiting what members of the Craft should say to non-members. Valiente reports that Gardner responded with a set of Wiccan laws of his own, which he claimed were original but others suspected he had made up on the spot. This led to a split in the coven, with Valiente and others leaving.
Commenting on Gardner, Pagan studies scholar Ethan Doyle White stated that "There are few figures in esoteric history who can rival him for his dominating place in the pantheon of Pagan pioneers."
In 2012, Philip Heselton published a two-volume biography of Gardner, titled "Witchfather". The biography was reviewed by Pagan studies scholar Ethan Doyle White in "The Pomegranate" journal, where he commented that it was "more exhaustive with greater detail" than Heselton's prior tomes and was "excellent in most respects". | https://en.wikipedia.org/wiki?curid=12789 |
Gavin MacLeod
Gavin MacLeod (born Allan George See; February 28, 1931) is an American film and television character actor, and Christian activist and author, whose career spans six decades. He has also appeared as a guest on several talk, variety, and religious programs.
MacLeod's film career began in 1957. He went on to play opposite Peter Mann in "The Sword of Ali Baba" (1965), opposite Anthony Franciosa in "A Man Called Gannon" (1968), opposite Christopher George in "The Thousand Plane Raid", and opposite Clint Eastwood, Telly Savalas, and Carroll O'Connor in "Kelly's Heroes" (1970).
MacLeod achieved continuing television success co-starring opposite Ernest Borgnine on "McHale's Navy" (1962–1964), as Joseph "Happy" Haines, and on "The Mary Tyler Moore Show" (1970–1977) as Murray Slaughter. He is best known for his starring role on ABC's "The Love Boat" (1977–1986), in which he was cast as ship's Captain Merrill Stubing.
MacLeod was born Allan George See in Mount Kisco, New York, the elder of two children. His mother, Margaret (née Shea) See (1906–2004), a middle school dropout, worked for "Reader's Digest". His father, George See (1906–1945), an electrician, was part Chippewa (Ojibwa). He grew up in Pleasantville, New York, and studied acting at Ithaca College, from which he graduated in 1952 with a Bachelor's degree in fine arts.
After serving in the United States Air Force, he moved to New York City and worked at Radio City Music Hall while looking for acting work. At about this time he changed his name, drawing "Gavin" from a physically disabled victim in a television drama, and "MacLeod" from his Ithaca drama coach, Beatrice MacLeod. MacLeod said of his stage name in a 2013 interview with "Parade" that he "felt as if my name was getting in the way of my success." Allan, he wrote, "just wasn't strong enough," and See was "too confusing."
MacLeod made his television debut in 1957 on "The Walter Winchell File" at the age of 26. His first movie appearance was a small, uncredited role in "The True Story of Lynn Stuart" in 1958. Soon thereafter, he landed a credited role in "I Want to Live!", a 1958 prison drama starring Susan Hayward. He was soon noticed by Blake Edwards, who in 1958 cast him in the pilot episode of his NBC series "Peter Gunn", in two guest roles on the Edwards CBS series "Mr. Lucky" in 1959, and as a nervous, harried Navy yeoman in "Operation Petticoat", with Cary Grant and Tony Curtis. "Operation Petticoat" proved to be a breakout role for MacLeod, and he was soon cast in two other Edwards comedies, "High Time", with Bing Crosby, and "The Party", with Peter Sellers.
Between 1957 and 1961, MacLeod made several television appearances. He was cast as the devious Dandy Martin in the 1960 episode "Yankee Confederate" of the syndicated anthology series "Death Valley Days", hosted by Stanley Andrews; he starred alongside Tod Andrews and Elaine Devry.
In December 1961, he landed a guest role on "The Dick Van Dyke Show" as Mel's cousin Maxwell Cooley, a wholesale jeweler. This was his first time working with Mary Tyler Moore. MacLeod had three guest appearances on "Perry Mason": in 1961 he played Lawrence Comminger in "The Case of the Grumbling Grandfather", and in 1965 he played Mortimer Hershey in "The Case of the Grinning Gorilla", and Dan Platte in "The Case of the Runaway Racer". He played the role of a drug pusher, "Big Chicken", in two episodes of the first season of "Hawaii Five-O". Other guest roles include "The Untouchables", "Dr. Kildare", "Rawhide", "Gomer Pyle, U.S.M.C.", "The Man from U.N.C.L.E.", "My Favorite Martian", "Hogan's Heroes", "Combat!", "The Big Valley", "The Andy Griffith Show", "It Takes a Thief", "The Flying Nun", "The King of Queens", and "That '70s Show."
His first regular television role began in 1962 as Joseph "Happy" Haines on "McHale's Navy"; he left this role after two seasons to appear in the motion picture "The Sand Pebbles" with Steve McQueen. Between 1965 and 1969, MacLeod appeared in multiple roles across many episodes of the TV series "Hogan's Heroes", including Major Zolle (season 1, episode 19), General Metzger (season 3, episode 27), Major Kiegel (season 4, episode 1) and General von Rauscher (season 4, episode 23). Each role was usually a stern and discerning officer of the Schutzstaffel (SS), Luftwaffe or Geheime Staatspolizei (Gestapo), vastly different from the lovable characters he portrayed in his subsequent TV roles.
MacLeod's breakout role as Murray Slaughter on CBS' "The Mary Tyler Moore Show" won him lasting fame and two Golden Globe nominations. His starring role as Captain Stubing on "The Love Boat", his next TV series, ran for nine seasons between 1977 and 1986 and was broadcast in 90 countries worldwide. His work on that show earned him three Golden Globe nominations. Co-starring with him were the well-known actor and close friend Bernie Kopell as Dr. Adam Bricker and the then little-known actor and close friend Ted Lange as bartender Isaac Washington. Lange said of MacLeod in a 2017 interview with The Wiseguyz Show: "Oh yeah, sure, Gavin was wonderful. Gavin lives down here in Palm Springs and we're still tight, all of us, Gavin and Bernie and Jill; we still see each other. Fred lives in a different state, we're still close, we're still good friends."
MacLeod became the global ambassador for Princess Cruises in 1986. He has played a role in ceremonies launching many of the line's new ships. In 1997, MacLeod joined the "Love Boat" cast on "The Oprah Winfrey Show". After "The Love Boat", MacLeod toured with Michael Learned of "The Waltons" in "Love Letters". He made several appearances in musicals such as "Gigi" and "Copacabana" between 1997 and 2003. In December 2008, he appeared with the Colorado Symphony in Denver.
MacLeod and his wife have been hosts on the Trinity Broadcasting Network for 17 years, primarily hosting a show about marriage called "Back on Course". MacLeod appeared in Rich Christiano's "Time Changer", a movie about time travel and how the morals of society have moved away from the Bible. He also played the lead role in Christiano's 2009 film "The Secrets of Jonathan Sperry".
In April 2010, the entire cast of "The Love Boat" attended the TV Land Awards with the exception of MacLeod, due to a back operation to repair a couple of injured discs. Former co-star and long-term friend Ted Lange contacted him and received word MacLeod was doing well. In December, MacLeod appeared as a guest narrator with the Florida Orchestra and Master Chorale of Tampa Bay for three concerts.
MacLeod served as the honorary Mayor of Pacific Palisades for five years, until Sugar Ray Leonard succeeded him in 2011. On February 28, 2011, MacLeod celebrated his 80th birthday aboard "Golden Princess" on Princess Cruises in Los Angeles, California. His friends and family wished him a happy birthday and presented him with a five-foot-long 3D cake replica of "Pacific Princess", the original "Love Boat".
MacLeod appeared on the special for Betty White's 90th birthday on January 17, 2012. He reunited with White to film "Safety Old School Style", an in-flight safety video for Air New Zealand in 2013. By January 2013, the video had been viewed two million times on YouTube. In October 2013, MacLeod appeared on "Today" to begin the promotional tour for his new book "This Is Your Captain Speaking: My Fantastic Voyage Through Hollywood, Faith & Life". This appearance included a special set change to honor MacLeod's appearance on the show. In addition to television appearances, his book tour continued in New York, Los Angeles, and Central Florida. Loretta Swit and Ted Lange were both present at MacLeod's first Barnes & Noble book signing in New York City. This signing was the largest such event held at that particular location in three years. He continued his book tour throughout 2014.
On November 5, 2013, MacLeod joined his "Love Boat" cast mates live on the CBS daytime show "The Talk". A full one-hour episode was dedicated to the cast reunion. The "Talk" co-hosts dressed in costumes to commemorate their special guests' arrivals. Spanish-American actress Charo also appeared on the reunion show. Charo guest starred in eight episodes of "The Love Boat". Jack Jones performed the "Love Boat" theme song, which he introduced in 1977.
In December 2013, MacLeod appeared on "The 700 Club" to discuss his life and career.
On February 1, 2014, MacLeod was honored with a star on the Palm Springs Walk of Stars in downtown Palm Springs, California.
In January 2015, MacLeod appeared in the Rose Parade along with several members of the original cast of "The Love Boat".
In 2017, MacLeod starred in the play "Happy Hour" at the Coachella Valley Repertory Theatre (CVRep) in Rancho Mirage, California. He received a best acting award for his work on the project.
In 1987, following MacLeod's conversion and remarriage, he and his wife, Patti, wrote about struggles with divorce and alcoholism in "Back On Course: The Remarkable Story of a Divorce That Ended in Remarriage".
In 2013, MacLeod released his memoir, "This Is Your Captain Speaking: My Fantastic Voyage Through Hollywood, Faith & Life". The book recalls his upbringing in upstate New York during the Great Depression, as well as his life after more than fifty years in Hollywood. He said, "...all my living has been based on what other people have written...I hope it can help others, how I overcame and never gave up. There are so many lessons in life." In the book, MacLeod recounts his stories as a young actor trying to make a name for himself in Hollywood, the lifelong friends he has made, his bout with alcoholism and divorce and his journey through faith and Christianity.
MacLeod married his current wife Patti in 1974. Both were previously divorced. The couple divorced in 1982, but remarried in 1985. During the mid-1980s, MacLeod and Patti became Evangelical Protestants and credit their faith for bringing them back together.
During his time as the Captain on "The Love Boat", MacLeod "very selfishly" (his words) divorced his wife Patti. She spent the next three years seeking help from psychiatrists on both the West and East coasts. Then one day, Patti received a telephone call from Patti Lewis, first wife of Jerry Lewis, inviting her to a Christian prayer group that included a number of famous actresses, who started to pray for Gavin. MacLeod later said, "From that day, I started to think about her. Something told me to call Patti. I called Patti. I went back to see her the following Monday and things haven't been the same since." MacLeod asked her what had happened, and she explained everything to him, including that she had given her life to Jesus Christ.
On September 20, 2009, MacLeod discussed his conversion to Christianity at The Rock Church in Anaheim, California with further guest appearances in 2012. | https://en.wikipedia.org/wiki?curid=12792 |
Gopher (protocol)
The Gopher protocol is a communications protocol designed for distributing, searching, and retrieving documents in Internet Protocol networks. The Gopher protocol's design and user interface are menu-driven; it presented an alternative to the World Wide Web in its early stages, but ultimately fell into disfavor, yielding to the Hypertext Transfer Protocol (HTTP). The Gopher ecosystem is often regarded as the effective predecessor of the World Wide Web.
The protocol was invented by a team led by Mark P. McCahill at the University of Minnesota. It offers some features not natively supported by the Web and imposes a much stronger hierarchy on the documents it stores. Its text menu interface is well-suited to computing environments that rely heavily on remote text-oriented computer terminals, which were still common at the time of its creation in 1991, and the simplicity of its protocol facilitated a wide variety of client implementations. More recent Gopher revisions and graphical clients added support for multimedia. Gopher was preferred by many network administrators for using fewer network resources than Web services.
Gopher's hierarchical structure provided a platform for the first large-scale electronic library connections. The Gopher protocol is still in use by enthusiasts, and although it has been almost entirely supplanted by the Web, a small population of actively-maintained servers remains.
The Gopher system was released in mid-1991 by Mark P. McCahill, Farhad Anklesaria, Paul Lindner, Daniel Torrey, and Bob Alberti of the University of Minnesota in the United States. Its central goals are stated in RFC 1436.
Gopher combines document hierarchies with collections of services, including WAIS, the Archie and Veronica search engines, and gateways to other information systems such as File Transfer Protocol (FTP) and Usenet.
The general interest in campus-wide information systems (CWISs) in higher education at the time, and the ease of setup of Gopher servers to create an instant CWIS with links to other sites' online directories and resources were the factors contributing to Gopher's rapid adoption.
The name was coined by Anklesaria as a play on several meanings of the word "gopher". The University of Minnesota mascot is the gopher, a gofer is an assistant who "goes for" things, and a gopher burrows through the ground to reach a desired location.
The World Wide Web was in its infancy in 1991, and Gopher services quickly became established. By the late 1990s, Gopher had ceased expanding; several factors contributed to its stagnation.
Gopher remains in active use by its enthusiasts, and there have been attempts to revive Gopher on modern platforms and mobile devices. One attempt is The Overbite Project, which hosts various browser extensions and modern clients.
The conceptualization of knowledge in "Gopher space" or a "cloud" as specific information in a particular file, and the prominence of FTP, influenced the technology and the resulting functionality of Gopher.
Gopher is designed to function and to appear much like a mountable read-only global network file system (and software, such as gopherfs, is available that can actually mount a Gopher server as a FUSE resource). At a minimum, whatever a person can do with data files on a CD-ROM, one can do on Gopher.
A Gopher system consists of a series of hierarchical hyperlinkable menus. The choice of menu items and titles is controlled by the administrator of the server.
Similar to a file on a Web server, a file on a Gopher server can be linked to as a menu item from any other Gopher server. Many servers take advantage of this inter-server linking to provide a directory of other servers that the user can access.
The Gopher protocol was first described in RFC 1436. IANA has assigned TCP port 70 to the Gopher protocol.
The protocol is simple to negotiate, making it possible to browse without using a client. A standard gopher session may therefore appear as follows:
/Reference
1CIA World Factbook /Archives/mirrors/textfiles.com/politics/CIA gopher.quux.org 70
0Jargon 4.2.0 /Reference/Jargon 4.2.0 gopher.quux.org 70 +
1Online Libraries /Reference/Online Libraries gopher.quux.org 70 +
1RFCs: Internet Standards /Computers/Standards and Specs/RFC gopher.quux.org 70
1U.S. Gazetteer /Reference/U.S. Gazetteer gopher.quux.org 70 +
iThis file contains information on United States fake (NULL) 0
icities, counties, and geographical areas. It has fake (NULL) 0
ilatitude/longitude, population, land and water area, fake (NULL) 0
iand ZIP codes. fake (NULL) 0
i fake (NULL) 0
iTo search for a city, enter the city's name. To search fake (NULL) 0
ifor a county, use the name plus County -- for instance, fake (NULL) 0
iDallas County. fake (NULL) 0
Here, the client has established a TCP connection with the server on port 70, the standard gopher port. The client then sends a string followed by a carriage return followed by a line feed (a "CR + LF" sequence). This is the selector, which identifies the document to be retrieved. If the item selector were an empty line, the default directory would be selected. The server then replies with the requested item and closes the connection. According to the protocol, before the connection is closed, the server should send a full-stop (i.e., a period character) on a line by itself. However, as is the case here, not all servers conform to this part of the protocol and the server may close the connection without returning the final full-stop.
In this example, the item sent back is a gopher menu, a directory consisting of a sequence of lines each of which describes an item that can be retrieved. Most clients will display these as hypertext links, and so allow the user to navigate through gopherspace by following the links.
All lines in a gopher menu are terminated by "CR + LF", and consist of five fields: the "item type" as the very first character (see below), the "display string" (i.e., the description text to display), a "selector" (i.e., a file-system pathname), "host name" (i.e., the domain name of the server on which the item resides), and "port" (i.e., the port number used by that server). The item type and display string are joined without a space; the other fields are separated by the tab character.
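The five-field menu-line format described above is simple enough to split mechanically. The sketch below is a minimal illustration in Python (the function and field names are this example's own, not part of any Gopher library):

```python
def parse_menu_line(line: str) -> dict:
    """Split one Gopher menu line into its five fields.

    The item type is the first character of the first tab-separated
    field; the rest of that field is the display string. Any extra
    fields (such as a Gopher+ "+" flag) are ignored.
    """
    fields = line.rstrip("\r\n").split("\t")
    type_and_display = fields[0]
    return {
        "type": type_and_display[:1],
        "display": type_and_display[1:],
        "selector": fields[1] if len(fields) > 1 else "",
        "host": fields[2] if len(fields) > 2 else "",
        "port": int(fields[3]) if len(fields) > 3 else 70,
    }
```

Applied to a line from the session above, `parse_menu_line("1Online Libraries\t/Reference/Online Libraries\tgopher.quux.org\t70")` yields item type "1", the display string "Online Libraries", and the selector, host, and port needed for the next request.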
Because of the simplicity of the Gopher protocol, tools such as netcat make it possible to download Gopher content easily from the command line.
The protocol is also supported by cURL as of 7.21.2-DEV.
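For illustration, the same exchange that netcat or cURL performs can be written with Python's standard socket module. This is a minimal sketch rather than a full client (the helper names are invented for this example, and error handling is omitted):

```python
import socket

def gopher_request(host: str, selector: str = "", port: int = 70) -> bytes:
    """Send one Gopher request and return the raw response.

    The request is just the selector followed by CR+LF; the server
    sends the item and then closes the connection.
    """
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(selector.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

def strip_terminator(raw: bytes) -> bytes:
    """Drop the optional trailing lone '.' line from a menu reply."""
    lines = raw.split(b"\r\n")
    if lines and lines[-1] == b"":
        lines.pop()  # trailing CRLF
    if lines and lines[-1] == b".":
        lines.pop()  # the full-stop terminator
    return b"\r\n".join(lines)
```

Because, as noted above, not all servers send the final full-stop line, `strip_terminator` treats it as optional rather than required.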
The selector string in the request can optionally be followed by a tab character and a search string. This is used by item type 7.
Gopher menu items are defined by lines of tab-separated values in a text file. This file is sometimes called a "gophermap". As the source code to a gopher menu, a gophermap is roughly analogous to an HTML file for a web page. Each tab-separated line (called a "selector line") gives the client software a description of the menu item: what it is, what it's called, and where it leads. The client displays the menu items in the order that they appear in the gophermap.
The first character in a selector line indicates the "item type", which tells the client what kind of file or protocol the menu item points to. This helps the client decide what to do with it. Gopher's item types are a more basic precursor to the media type system used by the Web and email attachments.
The item type is followed by the "user display string" (a description or label that represents the item in the menu); the selector (a path or other string for the resource on the server); the "hostname" (the domain name or IP address of the server), and the network port.
For example, the following selector line generates a link to the "/home" directory at the subdomain gopher.floodgap.com, on port 70:

1Floodgap Home	/home	gopher.floodgap.com	70

The item type of 1 indicates that the resource is a Gopher menu. The string "Floodgap Home" is what the user sees in the menu.
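Conversely, a gophermap selector line can be assembled from its parts. A minimal sketch in Python (the helper name is invented for this example):

```python
def make_selector_line(item_type: str, display: str, selector: str,
                       host: str, port: int = 70) -> str:
    """Join the gophermap fields: the item type is prefixed directly
    to the display string, and the remaining fields are tab-separated.
    """
    return f"{item_type}{display}\t{selector}\t{host}\t{port}"
```

For instance, `make_selector_line("1", "Floodgap Home", "/home", "gopher.floodgap.com")` reproduces a menu link of the kind described above.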
In addition to selector lines, a gophermap may contain "comment lines". Comment lines are not for code comments; rather, they are lines of text sent to the client to display alongside the menu items, such as for a menu description or welcome message. A comment line contains no tab characters.
In a Gopher menu's source code, a one-character code indicates what kind of content the client should expect. This code may either be a digit or a letter of the alphabet; letters are case-sensitive.
The technical specification for Gopher, RFC 1436, defines 14 item types. A one-character code indicates what kind of content the client should expect. Item type 3 is an error code for exception handling. Gopher client authors improvised item types h (HTML), i (informational message), and s (sound file) after the publication of RFC 1436. Browsers like Netscape Navigator and early versions of Microsoft Internet Explorer would prepend the item type code to the selector as described in RFC 4266, so that the type of the gopher item could be determined by the URL itself. Most gopher browsers still available use these prefixes in their URLs.
Historically, to create a link to a Web server, "GET /" was used as a pseudo-selector to emulate an HTTP GET request. John Goerzen created an addition to the Gopher protocol, commonly referred to as "URL links", that allows links to any protocol that supports URLs. For example, to create a link to http://gopher.quux.org/, the item type is h, the display string is the title of the link, the item selector is "URL:http://gopher.quux.org/", and the domain and port are that of the originating Gopher server (so that clients that do not support URL links will query the server and receive an HTML redirection page).
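Under this convention, a URL link is just an ordinary selector line whose selector field carries the full URL. A minimal sketch in Python (the helper name is invented for this example):

```python
def make_url_link(title: str, url: str, origin_host: str,
                  origin_port: int = 70) -> str:
    """Build an 'h' (HTML) item whose selector is "URL:<url>".

    The host and port are those of the originating Gopher server, so
    clients without URL-link support fall back to querying it for an
    HTML redirection page.
    """
    return f"h{title}\tURL:{url}\t{origin_host}\t{origin_port}"
```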
The master Gopherspace search engine is Veronica. Veronica offers a keyword search of all the public Internet Gopher server menu titles. A Veronica search produces a menu of Gopher items, each of which is a direct pointer to a Gopher data source. Individual Gopher servers may also use localized search engines specific to their content such as Jughead and Jugtail.
GopherVR is a 3D virtual reality variant of the original Gopher system.
Browsers that do not natively support Gopher can still access servers using one of the available Gopher to HTTP gateways.
Gopher support was disabled in Internet Explorer versions 5.x and 6 for Windows in August 2002 by a patch, included in IE6 SP1, that was meant to fix a security vulnerability in the browser's Gopher protocol handler and reduce the attack surface; however, it can be re-enabled by editing the Windows registry. In Internet Explorer 7, Gopher support was removed at the WinINET level.
For Mozilla Firefox and SeaMonkey, Overbite extensions extend Gopher browsing and support the current versions of the browsers (Firefox Quantum v ≥57 and equivalent versions of SeaMonkey):
OverbiteWX includes support for accessing Gopher servers not on port 70 using a whitelist and for CSO/ph queries. OverbiteFF always uses port 70.
For Chromium and Google Chrome, Burrow is available. It redirects gopher:// URLs to a proxy. In the past an Overbite proxy-based extension for these browsers was available but is no longer maintained and does not work with the current (>23) releases.
For Konqueror, Kio gopher is available.
Some have suggested that the bandwidth-sparing simple interface of Gopher would be a good match for mobile phones and personal digital assistants (PDAs), but so far, mobile adaptations of HTML and XML and other simplified content have proven more popular. The PyGopherd server provides a built-in WML front-end to Gopher sites served with it.
The early 2010s saw a renewed interest in native Gopher clients for popular smartphones: Overbite, an open source client for Android 1.5+ was released in alpha stage in 2010. PocketGopher was also released in 2010, along with its source code, for several Java ME compatible devices. Gopher Client was released in 2016 as a proprietary client for iPhone and iPad devices and is currently maintained.
Gopher popularity was at its height at a time when there were still many equally competing computer architectures and operating systems. As a result, there are several Gopher clients available for Acorn RISC OS, AmigaOS, Atari MiNT, CMS, DOS, classic Mac OS, MVS, NeXT, OS/2 Warp, most UNIX-like operating systems, VMS, Windows 3.x, and Windows 9x. GopherVR was a client designed for 3D visualization, and there is even a Gopher client in MOO. The majority of these clients are hard-coded to work on TCP port 70.
Users of Web browsers that have incomplete or no support for Gopher can access content on Gopher servers via a server gateway or proxy server that converts Gopher menus into HTML; known proxies are the Floodgap Public Gopher proxy and Gopher Proxy. Similarly, certain server packages such as GN and PyGopherd have built-in Gopher to HTTP interfaces. Squid Proxy software gateways any gopher:// URL to HTTP content, enabling any browser or web agent to access gopher content easily.
Because the protocol is trivial to implement in a basic fashion, there are many server packages still available, and some are still maintained.
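To illustrate just how little the protocol requires, here is a minimal sketch in Python: a request is simply the selector string followed by CRLF on TCP port 70, and a menu reply is tab-separated lines per RFC 1436. The function names and the example menu line are illustrative, not from any particular implementation.

```python
import socket
from dataclasses import dataclass


def gopher_fetch(host: str, selector: str = "", port: int = 70, timeout: float = 10.0) -> bytes:
    """A Gopher request is just the selector plus CRLF; the server
    sends the document and closes the connection."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(selector.encode("ascii") + b"\r\n")
        chunks = []
        while (data := s.recv(4096)):
            chunks.append(data)
    return b"".join(chunks)


@dataclass
class MenuItem:
    item_type: str   # '0' text file, '1' submenu, '7' search, 'i' info line, ...
    display: str     # human-readable text shown in the menu
    selector: str    # opaque string sent back to the server to fetch the item
    host: str
    port: int


def parse_menu_line(line: str) -> MenuItem:
    """Parse one tab-separated line of a Gopher menu (RFC 1436):
    <type><display>TAB<selector>TAB<host>TAB<port>"""
    display_part, selector, host, port = line.rstrip("\r\n").split("\t")[:4]
    return MenuItem(display_part[0], display_part[1:], selector, host, int(port))
```

Note that the item type is the first character of the display field, which is why clients can render a menu without any negotiation with the server.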
Genotype
A genotype is an organism’s complete set of heritable genes, or genes that can be passed down from parents to offspring. These genes help encode the characteristics that are physically expressed (phenotype) in an organism, such as hair color, height, etc. The term was coined by the Danish botanist, plant physiologist and geneticist Wilhelm Johannsen in 1903.
The genotype is one of three factors that determine phenotype, along with inherited epigenetic factors and non-inherited environmental factors. Not all organisms with the same genotype look or act the same way because appearance and behavior are modified by environmental and growing conditions. Likewise, not all organisms that look alike necessarily have the same genotype.
One's genotype differs subtly from one's genomic sequence, because it refers to how an individual "differs" or is specialized within a group of individuals or a species. So, typically, one refers to an individual's genotype with regard to a particular gene of interest and the combination of alleles the individual carries (see homozygous, heterozygous). Genotypes are often denoted with letters, for example "Bb", where "B" stands for one allele and "b" for another.
Somatic mutations that are acquired rather than inherited, such as those in cancers, are not part of the individual's genotype. Hence, scientists and physicians sometimes speak of the genotype of a particular cancer, that is, of the disease as distinct from that of the diseased individual.
An example of a characteristic determined by a genotype is the petal color in a pea plant. The collection of all genetic possibilities for a single trait are called alleles; two alleles for petal color are purple and white.
Any given gene will usually cause an observable change in an organism, known as the phenotype. The terms genotype and phenotype are distinct for at least two reasons:
A simple example to illustrate genotype as distinct from phenotype is the flower colour in pea plants (see Gregor Mendel). There are three available genotypes: PP (homozygous dominant), Pp (heterozygous), and pp (homozygous recessive). All three have different genotypes, but the first two have the same phenotype (purple), as distinct from the third (white).
A more technical example to illustrate genotype is the single-nucleotide polymorphism or SNP. A SNP occurs when corresponding sequences of DNA from different individuals differ at one DNA base, for example where the sequence AAGCCTA changes to AAGCTTA. This contains two alleles: C and T. SNPs typically have three genotypes, denoted generically AA, Aa, and aa. In the example above, the three genotypes would be CC, CT and TT. Other types of genetic marker, such as microsatellites, can have more than two alleles, and thus many different genotypes.
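The counting can be made concrete: the unordered diploid genotypes at a locus are just the pairs (with repetition) drawn from its alleles, so n alleles give n(n+1)/2 genotypes. A minimal Python sketch (the function name is illustrative):

```python
from itertools import combinations_with_replacement


def possible_genotypes(alleles):
    """Enumerate the unordered diploid genotypes for one locus:
    every pair of alleles, allowing both copies to be the same."""
    return ["".join(pair) for pair in combinations_with_replacement(sorted(alleles), 2)]
```

For the two-allele SNP above this yields CC, CT and TT; a microsatellite with three alleles would have six possible genotypes.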
Penetrance is the proportion of individuals showing a specified genotype in their phenotype under a given set of environmental conditions.
The distinction between genotype and phenotype is commonly experienced when studying family patterns for certain hereditary diseases or conditions, for example, hemophilia. Humans and most animals are diploid; thus there are two alleles for any given gene. These alleles can be the same (homozygous) or different (heterozygous), depending on the individual (see zygote). With a dominant allele, the offspring is guaranteed to inherit the trait in question irrespective of the second allele.
In the case of a recessive condition such as albinism, only the homozygous recessive genotype (aa) shows the trait; whether an individual is affected depends on the combination of alleles (Aa, aA, aa or AA). When an affected person (aa) mates with a heterozygous carrier (Aa or aA), there is a 50-50 chance the offspring will show the albino phenotype. If a heterozygote mates with another heterozygote, there is a 75% chance of passing the gene on and only a 25% chance that the trait will be displayed. A homozygous dominant (AA) individual has a normal phenotype and no risk of abnormal offspring. A homozygous recessive individual has an abnormal phenotype and is guaranteed to pass the abnormal gene on to offspring.
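The inheritance odds quoted above follow from a Punnett square: each parent contributes one of its two alleles with equal probability. A small Python sketch (the helper name is hypothetical) computes the genotype distribution for one gene:

```python
from collections import Counter
from fractions import Fraction


def offspring_genotypes(parent1: str, parent2: str) -> dict:
    """Punnett-square probabilities for a single gene.
    Each parent is a two-letter genotype, e.g. 'Aa'; each of its
    alleles is passed on with equal chance. Genotypes are written
    in canonical order (uppercase allele first)."""
    counts = Counter(
        "".join(sorted((a, b)))  # 'aA' and 'Aa' are the same genotype
        for a in parent1
        for b in parent2
    )
    total = sum(counts.values())
    return {g: Fraction(n, total) for g, n in counts.items()}
```

A carrier-by-carrier cross (Aa x Aa) gives aa a probability of 1/4, matching the 25% chance of the trait being displayed, while 3/4 of offspring carry at least one recessive allele; an affected-by-carrier cross (aa x Aa) gives the 50-50 split described above.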
Hemophilia is sex-linked: the gene is carried on the X chromosome, so only females can be carriers, in whom the abnormality is not displayed. A carrier has a normal phenotype but runs a 50-50 chance, with an unaffected partner, of passing her abnormal gene on to her offspring. If she mated with a man with hemophilia, there would be a 75% chance of passing on the gene.
"Genotyping" is the process of elucidating the genotype of an individual with a biological assay. Also known as a "genotypic assay", techniques include PCR, DNA fragment analysis, allele specific oligonucleotide (ASO) probes, DNA sequencing, and nucleic acid hybridization to DNA microarrays or beads. Several common genotyping techniques include restriction fragment length polymorphism ("RFLP"), terminal restriction fragment length polymorphism ("t-RFLP"), amplified fragment length polymorphism ("AFLP"), and multiplex ligation-dependent probe amplification ("MLPA").
DNA fragment analysis can also be used to determine such disease-causing genetic aberrations as microsatellite instability ("MSI"), "trisomy" or aneuploidy, and loss of heterozygosity ("LOH"). MSI and LOH in particular have been associated with cancer cell genotypes in colon, breast and cervical cancer.
The most common chromosomal aneuploidy is a trisomy of chromosome 21, which manifests itself as Down syndrome. Current technological limitations typically allow only a fraction of an individual's genotype to be determined efficiently.
Graphic design
Graphic design is the process of visual communication and problem-solving through the use of typography, photography, iconography and illustration. The field is considered a subset of visual communication and communication design, but sometimes the term "graphic design" is used synonymously. Graphic designers create and combine symbols, images and text to form visual representations of ideas and messages.
They use typography, visual arts, and page layout techniques to create visual compositions. Common applications of graphic design include corporate design (logos and branding), editorial design (magazines, newspapers and books), wayfinding or environmental design, advertising, web design, communication design, product packaging, and signage.
The origins of graphic design can be traced from the origins of human existence, from the caves of Lascaux, to Rome's Trajan's Column, to the illuminated manuscripts of the Middle Ages, to the neon lights of Ginza, Tokyo. In Babylon, "artisans pressed cuneiform inscriptions into clay bricks or tablets which were used for construction. The bricks gave information such as the name of the reigning monarch, the builder, or some other dignitary". This was the first known road sign announcing the name of the governor of a state or mayor of the city. The Egyptians developed communication by hieroglyphics that used picture symbols dating as far back as 136 B.C., found on the Rosetta Stone. "The Rosetta stone, found by one of Napoleon's engineers, was an advertisement for the Egyptian ruler, Ptolemy, as the 'true Son of the Sun, the Father of the Moon, and the Keeper of the Happiness of Men'." The Egyptians also invented papyrus, paper made from reeds found along the Nile, on which they transcribed advertisements more common among their people at the time. During the "Dark Ages", from 500 AD to 1450 AD, monks created elaborate, illustrated manuscripts.
In both its lengthy history and in the relatively recent explosion of visual communication in the 20th and 21st centuries, the distinction between advertising, art, graphic design and fine art has disappeared. They share many elements, theories, principles, practices, languages and sometimes the same benefactor or client. In advertising, the ultimate objective is the sale of goods and services. In graphic design, "the essence is to give order to information, form to ideas, expression, and feeling to artifacts that document human experience."
Graphic design in the United States began with Benjamin Franklin, who used his newspaper, "The Pennsylvania Gazette", to master the art of publicity, to promote his own books, and to influence the masses. "Benjamin Franklin's ingenuity gained in strength as did his cunning and in 1737 he had replaced his counterpart in Pennsylvania, Andrew Bradford as postmaster and printer after a competition he instituted and won. He showed his prowess by running an ad in his "General Magazine and the Historical Chronicle of British Plantations in America" (the precursor to the "Saturday Evening Post") that stressed the benefits offered by a stove he invented, named the 'Pennsylvania Fireplace'." His invention is still sold today and is known as the Franklin stove.
American advertising initially imitated British newspapers and magazines. Advertisements were printed in scrambled type and uneven lines that made them difficult to read. Franklin better organized this by adding a 14-point type for the first line of the advertisement, although he later shortened and centered it, making "headlines". Franklin added illustrations, something that London printers had not attempted. Franklin was the first to utilize logos, which were early symbols that announced such services as opticians by displaying golden spectacles. Franklin taught advertisers that the use of detail was important in marketing their products. Some advertisements ran for 10-20 lines, including color, names, varieties, and sizes of the goods that were offered.
During the Tang Dynasty (618–907) wood blocks were cut to print on textiles and later to reproduce Buddhist texts. A Buddhist scripture printed in 868 is the earliest known printed book. Beginning in the 11th century, longer scrolls and books were produced using movable type printing, making books widely available during the Song dynasty (960–1279).
During the 17th and 18th centuries, movable type was used for handbills or trade cards, which were printed from wood or copper engravings. These documents announced a business and its location. English painter William Hogarth, who used his skill in engraving, was one of the first to design for business trade.
In Mainz, Germany, in 1448, Johann Gutenberg introduced movable type using a new metal alloy for use in a printing press and opened a new era of commerce. This made graphics more readily available since mass printing dropped the price of printing material significantly. Previously, most advertising was word of mouth. In France and England, for example, criers announced products for sale just as ancient Romans had done.
The printing press made books more widely available. Aldus Manutius developed the book structure that became the foundation of western publication design. This era of graphic design is called Humanist or Old Style. Additionally, William Caxton, England's first printer, produced religious books but had trouble selling them. He discovered the use of leftover pages and used them to announce the books and post them on church doors. This practice was termed "siquis" or "pin up" posters, in approximately 1612, becoming the first form of print advertising in Europe. The term "siquis" came from the Roman era, when public notices were posted stating "if anybody...", which in Latin is "si quis". These printed announcements were followed by later public registers of "wants" called "want ads", and in some areas, such as the first periodical in Paris, advertising was termed "advices". The "advices" were what we know today as want-ad media or advice columns.
In 1638 Harvard University received a printing press from England. More than 52 years passed before London bookseller Benjamin Harris received another printing press in Boston. Harris published a newspaper in serial form, "Publick Occurrences Both Foreign and Domestick". It was four pages long and suppressed by the government after its first edition.
John Campbell is credited with the first newspaper, the "Boston News-Letter", which appeared in 1704. The paper was known during the revolution as "Weeklies". The name came from the 13 hours required for the ink to dry on each side of the paper. The solution was to first print the ads and then print the news on the other side the day before publication. The paper was four pages long, with ads on at least 20%-30% of the total paper (pages one and four); the hot news was located on the inside. The initial use of the "Boston News-Letter" carried Campbell's own solicitations for advertising from his readers. Campbell's first paid advertisement was in his third edition, May 7 or 8th, 1704. Two of the first ads were for stolen anvils. The third was for real estate in Oyster Bay, owned by William Bradford, a pioneer printer in New York, and the first to sell something of value. Bradford published his first newspaper in 1725, New York's first, the "New-York Gazette". Bradford's son preceded him in Philadelphia, publishing the "American Weekly Mercury" in 1719; William Brooker's "Massachusetts Gazette" was first published a day earlier.
In 1849, Henry Cole became one of the major forces in design education in Great Britain, informing the government of the importance of design in his "Journal of Design and Manufactures". He organized the Great Exhibition as a celebration of modern industrial technology and Victorian design.
From 1891 to 1896, William Morris' Kelmscott Press published some of the most significant of the graphic design products of the Arts and Crafts movement, and made a lucrative business of creating and selling stylish books. Morris created a market for works of graphic design in their own right and a profession for this new type of art. The Kelmscott Press is characterized by an obsession with historical styles. This historicism was the first significant reaction to the state of nineteenth-century graphic design. Morris' work, along with the rest of the Private Press movement, directly influenced Art Nouveau.
In 1917, Frederick H. Meyer, director and instructor at the California School of Arts and Crafts, taught a class entitled "Graphic Design and Lettering". Raffe's "Graphic Design", published in 1927, was the first book to use "Graphic Design" in its title.
The signage in the London Underground is a classic design example of the modern era. Frank Pick led the Underground Group design and publicity movement, even though he lacked artistic training. The first Underground station signs were introduced in 1908 with a design of a solid red disk with a blue bar in the center and the name of the station. The station name was in white sans-serif letters. It was in 1916 when Pick used the expertise of Edward Johnston to design a new typeface for the Underground. Johnston redesigned the Underground sign and logo to include his typeface on the blue bar in the center of a red circle.
In the 1920s, Soviet constructivism applied 'intellectual production' in different spheres of production. The movement saw individualistic art as useless in revolutionary Russia and thus moved towards creating objects for utilitarian purposes. They designed buildings, theater sets, posters, fabrics, clothing, furniture, logos, menus, etc.
Jan Tschichold codified the principles of modern typography in his 1928 book, "New Typography". He later repudiated the philosophy he espoused in this book as fascistic, but it remained influential. Tschichold, Bauhaus typographers such as Herbert Bayer and László Moholy-Nagy and El Lissitzky greatly influenced graphic design. They pioneered production techniques and stylistic devices used throughout the twentieth century. The following years saw graphic design in the modern style gain widespread acceptance and application.
The post-World War II American economy revealed a greater need for graphic design, mainly in advertising and packaging. The spread of the German Bauhaus school of design to Chicago in 1937 brought a "mass-produced" minimalism to America; sparking "modern" architecture and design. Notable names in mid-century modern design include Adrian Frutiger, designer of the typefaces Univers and Frutiger; Paul Rand, who took the principles of the Bauhaus and applied them to popular advertising and logo design, helping to create a uniquely American approach to European minimalism while becoming one of the principal pioneers of corporate identity, a subset of graphic design. Alex Steinweiss is credited with the invention of the album cover; and Josef Müller-Brockmann, who designed posters in a severe yet accessible manner typical of the 1950s and 1970s era.
The professional graphic design industry grew in parallel with consumerism. This raised concerns and criticisms, notably from within the graphic design community with the First Things First manifesto. First launched by Ken Garland in 1964, it was re-published as the First Things First 2000 manifesto in 1999 in the magazine "Emigre" 51 stating "We propose a reversal of priorities in favor of more useful, lasting and democratic forms of communication - a mindshift away from product marketing and toward the exploration and production of a new kind of meaning. The scope of debate is shrinking; it must expand. Consumerism is running uncontested; it must be challenged by other perspectives expressed, in part, through the visual languages and resources of design." Both editions attracted signatures from practitioners and thinkers such as Rudy VanderLans, Erik Spiekermann, Ellen Lupton and Rick Poynor. The 2000 manifesto was also published in Adbusters, known for its strong critiques of visual culture.
Graphic design is applied to everything visual, from road signs to technical schematics, from interoffice memorandums to reference manuals.
Design can aid in selling a product or idea. It is applied to products and elements of company identity such as logos, colors, packaging and text as part of branding (see also advertising). Branding has become increasingly important in the range of services offered by graphic designers. Graphic designers often form part of a branding team.
Graphic design is applied in the entertainment industry in decoration, scenery and visual storytelling. Other examples of design for entertainment purposes include novels, vinyl album covers, comic books, DVD covers, opening credits and closing credits in filmmaking, and programs and props on stage. This could also include artwork used for T-shirts and other items screenprinted for sale.
From scientific journals to news reporting, the presentation of opinion and facts is often improved with graphics and thoughtful compositions of visual information - known as information design. Newspapers, magazines, blogs, television and film documentaries may use graphic design. With the advent of the web, information designers with experience in interactive tools are increasingly used to illustrate the background to news stories. Information design can include data visualization, which involves using programs to interpret and form data into a visually compelling presentation, and can be tied in with information graphics.
A graphic design project may involve the stylization and presentation of existing text and either preexisting imagery or images developed by the graphic designer. Elements can be incorporated in both traditional and digital form, which involves the use of visual arts, typography, and page layout techniques. Graphic designers organize pages and optionally add graphic elements. Graphic designers can commission photographers or illustrators to create original pieces. Designers use digital tools, often referred to as interactive design, or multimedia design. Designers need communication skills to convince an audience and sell their designs.
The "process school" is concerned with communication; it highlights the channels and media through which messages are transmitted and by which senders and receivers encode and decode these messages. The semiotic school treats a message as a construction of signs which, through interaction with receivers, produces meaning; communication itself is seen as the agent.
Typography includes type design, modifying type glyphs and arranging type. Type glyphs (characters) are created and modified using illustration techniques. Type arrangement is the selection of typefaces, point size, tracking (the space between all characters used), kerning (the space between two specific characters) and leading (line spacing).
Typography is performed by typesetters, compositors, typographers, graphic artists, art directors, and clerical workers. Until the digital age, typography was a specialized occupation. Certain fonts communicate or resemble stereotypical notions. For example, 1942 Report is a font which types text akin to a typewriter or a vintage report.
Page layout deals with the arrangement of elements (content) on a page, such as image placement, text layout and style. Page design has always been a consideration in printed material and more recently extended to displays such as web pages. Elements typically consist of type (text), images (pictures), and (with print media) occasionally place-holder graphics such as a dieline for elements that are not printed with ink such as die/laser cutting, foil stamping or blind embossing.
Printmaking is the process of making artworks by printing on paper and other materials or surfaces. The process is capable of producing multiples of the same work, each called a print. Each print is an original, technically known as an impression. Prints are created from a single original surface, technically a matrix. Common types of matrices include: plates of metal, usually copper or zinc for engraving or etching; stone, used for lithography; blocks of wood for woodcuts, linoleum for linocuts and fabric plates for screen-printing. Works printed from a single plate create an edition, in modern times usually each signed and numbered to form a limited edition. Prints may be published in book form, as artist's books. A single print could be the product of one or multiple techniques.
Aside from technology, graphic design requires judgment and creativity. Critical, observational, quantitative and analytic thinking are required for design layouts and rendering. If the executor is merely following a solution (e.g. sketch, script or instructions) provided by another designer (such as an art director), then the executor is not usually considered the designer.
Strategy is becoming more and more essential to effective graphic design. The main distinction between graphic design and art is that graphic design solves a problem as well as being aesthetically pleasing. This balance is where strategy comes in. It is important for a graphic designer to understand their clients' needs, as well as the needs of the people who will be interacting with the design. It is the designer's job to combine business and creative objectives to elevate the design beyond purely aesthetic means.
The method of presentation (e.g. Arrangements, style, medium) is important to the design. The development and presentation tools can change how an audience perceives a project. The image or layout is produced using traditional media and guides, or digital image editing tools on computers. Tools in computer graphics often take on traditional names such as "scissors" or "pen". Some graphic design tools such as a grid are used in both traditional and digital form.
In the mid-1980s desktop publishing and graphic art software applications introduced computer image manipulation and creation capabilities that had previously been manually executed. Computers enabled designers to instantly see the effects of layout or typographic changes, and to simulate the effects of traditional media. Traditional tools such as pencils can be useful even when computers are used for finalization; a designer or art director may sketch numerous concepts as part of the creative process. Styluses can be used with tablet computers to capture hand drawings digitally.
Designers disagree over whether computers enhance the creative process. Some argue that computers allow them to explore multiple ideas quickly and in more detail than can be achieved by hand-rendering or paste-up, while others find that the limitless choices of digital design can lead to paralysis or to endless iterations with no clear outcome.
Most designers use a hybrid process that combines traditional and computer-based technologies. First, hand-rendered layouts are used to get approval to execute an idea, then the polished visual product is produced on a computer.
Graphic designers are expected to be proficient in software programs for image-making, typography and layout. Nearly all of the popular and "industry standard" software programs used by graphic designers since the early 1990s are products of Adobe Systems Incorporated. Adobe Photoshop (a raster-based program for photo editing) and Adobe Illustrator (a vector-based program for drawing) are often used in the final stage. Some designers across the world use CorelDraw, a vector graphics editor developed and marketed by Corel Corporation. Inkscape is an open-source vector graphics editor; its primary file format is Scalable Vector Graphics (SVG), and files can be imported from or exported to other vector formats. Designers often use pre-designed raster images and vector graphics in their work from online design databases. Raster images may be edited in Adobe Photoshop, logos and illustrations in Adobe Illustrator and CorelDraw, and the final product assembled in one of the major page layout programs, such as Adobe InDesign, Serif PagePlus and QuarkXPress.
Powerful open-source programs (which are free) are also used by both professionals and casual users for graphic design, these include Inkscape (for vector graphics), GIMP (for photo-editing and image manipulation), Krita (for painting), and Scribus (for page layout).
Since the advent of personal computers, many graphic designers have become involved in interface design, in an environment commonly referred to as a Graphical User Interface (GUI). This has included web design and software design when end user-interactivity is a design consideration of the layout or interface. Combining visual communication skills with an understanding of user interaction and online branding, graphic designers often work with software developers and web developers to create the look and feel of a web site or software application. An important aspect of interface design is icon design.
User experience design (UX) is the study, analysis, and development of creating products that provide meaningful and relevant experiences to users. This involves the creation of the entire process of acquiring and integrating the product, including aspects of branding, design, usability, and function.
Experiential graphic design is the application of communication skills to the built environment. This area of graphic design requires practitioners to understand physical installations that have to be manufactured and withstand the same environmental conditions as buildings. As such, it is a cross-disciplinary collaborative process involving designers, fabricators, city planners, architects, manufacturers and construction teams.
Experiential graphic designers try to solve problems that people encounter while interacting with buildings and space (also called environmental graphic design). Examples of practice areas for environmental graphic designers are wayfinding, placemaking, branded environments, exhibitions and museum displays, public installations and digital environments.
Graphic design career paths cover all parts of the creative spectrum and often overlap. Workers perform specialized tasks, such as design services, publishing, advertising and public relations. As of 2017, median pay was $48,700 per year. The main job titles within the industry are often country specific. They can include graphic designer, art director, creative director, animator and entry level production artist. Depending on the industry served, the responsibilities may have different titles such as "DTP Associate" or "Graphic Artist". The responsibilities may involve specialized skills such as illustration, photography, animation or interactive design.
Employment in design of online projects was expected to increase by 35% by 2026, while employment in traditional media, such as newspaper and book design, was expected to decline by 22%. Graphic designers will be expected to constantly learn new techniques, programs, and methods.
Graphic designers can work within companies devoted specifically to the industry, such as design consultancies or branding agencies, others may work within publishing, marketing or other communications companies. Especially since the introduction of personal computers, many graphic designers work as in-house designers in non-design oriented organizations. Graphic designers may also work freelance, working on their own terms, prices, ideas, etc.
A graphic designer typically reports to the art director, creative director or senior media creative. As a designer becomes more senior, they spend less time designing and more time leading and directing other designers on broader creative activities, such as brand development and corporate identity development. They are often expected to interact more directly with clients, for example taking and interpreting briefs.
Jeff Howe of "Wired" magazine first used the term "crowdsourcing" in his 2006 article, "The Rise of Crowdsourcing". It spans such creative domains as graphic design, architecture, apparel design, writing, and illustration. Tasks may be assigned to individuals or a group and may be categorized as convergent or divergent. An example of a divergent task is generating alternative designs for a poster; an example of a convergent task is selecting one poster design. Companies, startups, small businesses, and entrepreneurs have all benefited from design crowdsourcing, since it helps them source graphic designs at a fraction of their former budgets; obtaining a logo design is one of the most common uses. Major companies operating in the design crowdsourcing space are generally referred to as design contest sites.
Great Rift Valley
The Great Rift Valley is a series of contiguous geographic trenches, approximately 6,000 kilometres (3,700 mi) in total length, that runs from the Beqaa Valley in Lebanon, which is in Asia, to Mozambique in Southeast Africa. While the name continues in some usages, it is rarely used in geology, as it is considered an imprecise merging of separate though related rift and fault systems.
Today, the term is most often used to refer to the valley of the East African Rift, the divergent plate boundary which extends from the Afar Triple Junction southward across eastern Africa, and is in the process of splitting the African Plate into two new separate plates. Geologists generally refer to these incipient plates as the Nubian Plate and the Somali Plate.
Today these rifts and faults are seen as distinct, although connected, but originally, the Great Rift Valley was thought to be a single feature that extended from Lebanon in the north to Mozambique in the south, where it constitutes one of two distinct physiographic provinces of the East African mountains. It included what today is called the Lebanese section of the Dead Sea Transform, the Jordan Rift Valley, Red Sea Rift and the East African Rift. These rifts and faults were formed 35 million years ago.
The northernmost part of the Rift corresponds to the central section of what is today called the Dead Sea Transform (DST) or Rift. This midsection of the DST forms the Beqaa Valley in Lebanon, separating the Lebanon range from the Anti-Lebanon Mountains. Further south it is known as the Hula Valley, separating the Galilee mountains and the Golan Heights.
The Jordan River begins here and flows southward through Lake Hula into the Sea of Galilee in Israel. The Rift then continues south through the Jordan Rift Valley into the Dead Sea on the Israeli-Jordanian border. From the Dead Sea southwards, the Rift is occupied by the Wadi Arabah, then the Gulf of Aqaba, and then the Red Sea.
Off the southern tip of Sinai in the Red Sea, the Dead Sea Transform meets the Red Sea Rift which runs the length of the Red Sea. The Red Sea Rift comes ashore to meet the East African Rift and the Aden Ridge in the Afar Depression of East Africa. The junction of these three rifts is called the Afar Triple Junction.
The East African rift has two branches, the Western Rift Valley and the Eastern Rift Valley.
The Western Rift, also called the Albertine Rift, is bordered by some of the highest mountains in Africa, including the Virunga Mountains, Mitumba Mountains, and Ruwenzori Range. It contains the Rift Valley lakes, which include some of the deepest lakes in the world (up to deep at Lake Tanganyika).
Much of this area lies within the boundaries of national parks such as Virunga National Park in the Democratic Republic of Congo, Rwenzori National Park and Queen Elizabeth National Park in Uganda, and Volcanoes National Park in Rwanda. Lake Victoria is considered to be part of the rift valley system although it actually lies between the two branches. All of the African Great Lakes were formed as the result of the rift, and most lie within its rift valley.
In Kenya, the valley is deepest to the north of Nairobi. As the lakes in the Eastern Rift have no outlet to the sea and tend to be shallow, they have a high mineral content as the evaporation of water leaves the salts behind. For example, Lake Magadi has high concentrations of soda (sodium carbonate) and Lake Elmenteita, Lake Bogoria, and Lake Nakuru are all strongly alkaline, while the freshwater springs supplying Lake Naivasha are essential to support its current biological variety.
The southern section of the Rift Valley includes Lake Malawi, the third-deepest freshwater body in the world, reaching in depth and separating the Nyassa plateau of Northern Mozambique from Malawi; it ends in the Zambezi valley. | https://en.wikipedia.org/wiki?curid=12800 |
Grigori Rasputin
Grigori Yefimovich Rasputin was a Russian mystic and self-proclaimed holy man who befriended the family of Emperor Nicholas II, the last monarch of Russia, and gained considerable influence in late imperial Russia.
Rasputin was born to a peasant family in the Siberian village of Pokrovskoye in the Tyumensky Uyezd of Tobolsk Governorate (now Yarkovsky District of Tyumen Oblast). He had a religious conversion experience after taking a pilgrimage to a monastery in 1897. He has been described as a monk or as a "strannik" (wanderer or pilgrim), though he held no official position in the Russian Orthodox Church. He traveled to St. Petersburg in 1903 or the winter of 1904–05, where he captivated some church and social leaders. He became a society figure and met the tsar and Tsarina Alexandra in November 1905.
In late 1906, Rasputin began acting as a healer for the only son of Tsar Nicholas II, Alexei, who suffered from hemophilia. He was a divisive figure at court, seen by some Russians as a mystic, visionary, and prophet, and by others as a religious charlatan. The high point of Rasputin's power was in 1915 when Nicholas II left St. Petersburg to oversee Russian armies fighting World War I, increasing both Alexandra and Rasputin's influence. Russian defeats mounted during the war, however, and both Rasputin and Alexandra became increasingly unpopular. In the early morning of , Rasputin was assassinated by a group of conservative noblemen who opposed his influence over Alexandra and the tsar.
Historians often suggest that Rasputin's terrible reputation helped discredit the tsarist government and thus helped precipitate the overthrow of the Romanov dynasty which happened a few weeks after he was assassinated. Accounts of his life and influence were often based on hearsay and rumor.
Rasputin was born a peasant in the small village of Pokrovskoye, along the Tura River in the Tobolsk Governorate (now Tyumen Oblast) in Siberia. According to official records, he was born on and christened the following day. He was named for St. Gregory of Nyssa, whose feast was celebrated on 10 January.
There are few records of Rasputin's parents. His father, Yefim, was a peasant farmer and church elder who had been born in Pokrovskoye in 1842, and married Rasputin's mother, Anna Parshukova, in 1863. Yefim also worked as a government courier, ferrying people and goods between Tobolsk and Tyumen. The couple had seven other children, all of whom died in infancy and early childhood; there may have been a ninth child, Feodosiya. According to historian Joseph T. Fuhrmann, Rasputin was certainly close to Feodosiya and was godfather to her children, but "the records that have survived do not permit us to say more than that".
According to historian Douglas Smith, Rasputin's youth and early adulthood are "a black hole about which we know almost nothing", though the lack of reliable sources and information did not stop others from fabricating stories about his parents and his youth after Rasputin's rise to fame. Historians agree, however, that like most Siberian peasants, including his mother and father, Rasputin was not formally educated and remained illiterate well into his early adulthood. Local archival records suggest that he had a somewhat unruly youth – possibly involving drinking, small thefts, and disrespect for local authorities – but contain no evidence of his being charged with stealing horses, blasphemy, or bearing false witness, all major crimes that he was later rumored to have committed as a young man.
In 1886, Rasputin travelled to Abalak, Russia, some 250 km east-northeast of Tyumen and 2,800 km east of Moscow, where he met a peasant girl named Praskovya Dubrovina. After a courtship of several months, they married in February 1887. Praskovya remained in Pokrovskoye throughout Rasputin's later travels and rise to prominence and remained devoted to him until his death. The couple had seven children, though only three survived to adulthood: Dmitry (b. 1895), Maria (b. 1898), and Varvara (b. 1900).
In 1897, Rasputin developed a renewed interest in religion and left Pokrovskoye to go on a pilgrimage. His reasons for doing so are unclear; according to some sources, Rasputin left the village to escape punishment for his role in a horse theft. Other sources suggest that he had a vision – either of the Virgin Mary or of St. Simeon of Verkhoturye – while still others suggest that Rasputin's pilgrimage was inspired by his interactions with a young theological student, Melity Zaborovsky. Whatever his reasons, Rasputin's departure was a radical life change: he was twenty-eight, had been married ten years, and had an infant son with another child on the way. According to Douglas Smith, his decision "could only have been occasioned by some sort of emotional or spiritual crisis".
Rasputin had undertaken earlier, shorter pilgrimages to the Holy Znamensky Monastery at Abalak and to Tobolsk's cathedral, but his visit to the St. Nicholas Monastery at Verkhoturye in 1897 was transformative. There, he met and was "profoundly humbled" by a "starets" (elder) known as Makary. Rasputin may have spent several months at Verkhoturye, and it was perhaps here that he learned to read and write, but he later complained about the monastery, claiming that some of the monks engaged in homosexuality and criticizing monastic life as too coercive. He returned to Pokrovskoye a changed man, looking disheveled and behaving differently than he had before. He became a vegetarian, swore off alcohol, and prayed and sang much more fervently than he had in the past.
Rasputin spent the years that followed living as a "Strannik" (a holy wanderer, or pilgrim), leaving Pokrovskoye for months or even years at a time to wander the country and visit a variety of holy sites. It is possible that Rasputin wandered as far as Athos, Greece – the center of Eastern Orthodox monastic life – in 1900.
By the early 1900s, Rasputin had developed a small circle of acolytes, primarily family members and other local peasants, who prayed with him on Sundays and other holy days when he was in Pokrovskoye. Building a makeshift chapel in Yefim's root cellar – Rasputin was still living within his father's household at the time – the group held secret prayer meetings there. These meetings were the subject of some suspicion and hostility from the village priest and other villagers. It was rumored that female followers were ceremonially washing him before each meeting, that the group sang strange songs that the villagers had not heard before, and even that Rasputin had joined the Khlysty, a religious sect whose ecstatic rituals were rumored to include self-flagellation and sexual orgies. According to historian Joseph Fuhrmann, however, "repeated investigations failed to establish that Rasputin was ever a member of the sect", and rumors that he was a Khlyst appear to have been unfounded.
Word of Rasputin's activity and charisma began to spread in Siberia during the early 1900s. Sometime between 1902 and 1904, he travelled to the city of Kazan on the Volga river, where he acquired a reputation as a wise and perceptive "starets," or holy man, who could help people resolve their spiritual crises and anxieties. Despite rumors that Rasputin was having sex with some of his female followers, he won over the father superior of the Seven Lakes Monastery outside Kazan, as well as local church officials Archimandrite Andrei and Bishop Chrysthanos, who gave him a letter of recommendation to Bishop Sergei, the rector of the St. Petersburg Theological Seminary at the Alexander Nevsky Monastery, and arranged for him to travel to St. Petersburg, either in 1903 or in the winter of 1904–05.
Upon meeting Sergei at the Nevsky Monastery, Rasputin was introduced to church leaders, including Archimandrite Feofan, who was the inspector of the theological seminary, was well-connected in St. Petersburg society, and later served as confessor to the tsar and his wife. Feofan was so impressed with Rasputin that he invited him to stay in his home and became one of Rasputin's most important and influential friends in St. Petersburg.
According to Joseph T. Fuhrmann, Rasputin stayed in St. Petersburg for only a few months on his first visit and returned to Pokrovskoye in the fall of 1903. Historian Douglas Smith, however, argues that it is impossible to know whether Rasputin stayed in St. Petersburg or returned to Pokrovskoye at some point between his first arrival there and 1905. Regardless, by 1905 Rasputin had formed friendships with several members of the aristocracy, including the "Black Princesses", Militsa and Anastasia of Montenegro, who had married the tsar's cousins (Grand Duke Peter Nikolaevich and Prince George Maximilianovich Romanowsky), and were instrumental in introducing Rasputin to the tsar and his family.
Rasputin first met the tsar on 1 November 1905, at the Peterhof Palace. The tsar recorded the event in his diary, writing that he and Alexandra had "made the acquaintance of a man of God – Grigory, from Tobolsk province". Rasputin returned to Pokrovskoye shortly after their first meeting and did not return to St. Petersburg until July 1906. On his return, Rasputin sent Nicholas a telegram asking to present the tsar with an icon of Simeon of Verkhoturye. He met with Nicholas and Alexandra on 18 July and again in October, when he first met their children. At some point, the royal family became convinced that Rasputin possessed the power to heal Alexei, but historians disagree over when: according to Orlando Figes, Rasputin was first introduced to the tsar and tsarina as a healer who could help their son in November 1905, while Joseph Fuhrmann has speculated that it was in October 1906 that Rasputin was first asked to pray for the health of Alexei.
Much of Rasputin's influence with the royal family stemmed from the belief by Alexandra and others that he had eased the pain and stopped the bleeding of the tsarevich – who suffered from hemophilia – on several occasions. According to historian Marc Ferro, the tsarina had a "passionate attachment" to Rasputin as a result of her belief that he could heal her son's affliction. Harold Shukman wrote that Rasputin became "an indispensable member of the royal entourage" as a result. It is unclear when Rasputin first learned of Alexei's hemophilia, or when he first acted as a healer for Alexei. He may have been aware of Alexei's condition as early as October 1906, and was summoned by Alexandra to pray for Alexei when he had an internal hemorrhage in the spring of 1907. Alexei recovered the next morning. Rasputin had been rumored to be capable of faith-healing since his arrival in St. Petersburg, and the tsarina's friend Anna Vyrubova became convinced that Rasputin had miraculous powers shortly thereafter. Vyrubova would become one of Rasputin's most influential advocates.
During the summer of 1912, Alexei developed a hemorrhage in his thigh and groin after a jolting carriage ride near the royal hunting grounds at Spala, which caused a large hematoma. In severe pain and delirious with fever, the tsarevich appeared to be close to death. In desperation, the tsarina asked Vyrubova to send Rasputin (who was in Siberia) a telegram, asking him to pray for Alexei. Rasputin wrote back quickly, telling the tsarina that "God has seen your tears and heard your prayers. Do not grieve. The Little One will not die. Do not allow the doctors to bother him too much." The next morning, Alexei's condition was unchanged, but Alexandra was encouraged by the message and regained some hope that Alexei would survive. Alexei's bleeding stopped the following day.
Historian Robert K. Massie has called Alexei's recovery "one of the most mysterious episodes of the whole Rasputin legend". The cause of his recovery is unclear: Massie speculated that Rasputin's suggestion not to let doctors disturb Alexei had aided his recovery by allowing him to rest and heal, or that his message may have aided Alexei's recovery by calming Alexandra and reducing the emotional stress on Alexei. Alexandra, however, believed that Rasputin had performed a miracle, and concluded that he was essential to Alexei's survival. Some writers and historians, such as Ferro, claim that Rasputin stopped Alexei's bleeding on other occasions through hypnosis.
The royal family's belief that Rasputin possessed the power to heal Alexei brought him considerable status and power at court. The tsar appointed Rasputin his "lampadnik" (lamplighter) who was charged with keeping the lamps lit that burned in front of religious icons in the palace, and he thus had regular access to the palace and royal family. By December 1906, Rasputin had become close enough to the royal family to ask a special favor of the tsar: that he be permitted to change his surname to Rasputin-Novyi (Rasputin-New). Nicholas granted the request and the name change was speedily processed, suggesting that the tsar viewed and treated Rasputin favorably at that time. Rasputin used his status and power to full effect, accepting bribes and sexual favors from admirers and working diligently to expand his influence. He soon became a controversial figure; he was accused by his enemies of religious heresy and rape, was suspected of exerting undue political influence over the tsar, and was even rumored to be having an affair with the tsarina.
Alternative religious movements had become popular among the city's aristocracy before Rasputin's arrival in St. Petersburg in 1903, such as spiritualism and theosophy, and many of the aristocracy were intensely curious about the occult and the supernatural. The Saint Petersburg elite were fascinated by Rasputin but did not widely accept him. He did not fit in with the royal family, and he had a very strained relationship with the Russian Orthodox Church. The Holy Synod frequently attacked Rasputin, accusing him of a variety of immoral or evil practices.
World War I, the disappearance of feudalism, and a meddling government bureaucracy all contributed to the rapid decline of Russia's economy. Many laid the blame with Alexandra and with Rasputin, because of his influence over her. Here is an example:
Vladimir Purishkevich was an outspoken member of the Duma. On 19 November 1916, Purishkevich made a rousing speech in the Duma, in which he stated, "The tsar's ministers who have been turned into marionettes, marionettes whose threads have been taken firmly in hand by Rasputin and the Empress Alexandra Fyodorovna – the evil genius of Russia and the Tsarina… who has remained a German on the Russian throne and alien to the country and its people." Felix Yusupov attended the speech and afterwards contacted Purishkevich, who quickly agreed to participate in the murder of Rasputin.
Rasputin's influence over the royal family was used against him and the Romanovs by politicians and journalists who wanted to weaken the integrity of the dynasty, force the tsar to give up his absolute political power, and separate the Russian Orthodox Church from the state.
On a 33-year-old peasant woman named Chionya Guseva attempted to assassinate Rasputin by stabbing him in the stomach outside his home in Pokrovskoye. Rasputin was seriously wounded, and for a time it was not clear that he would survive. After surgery and some time in a hospital in Tyumen, he recovered.
Guseva was a follower of Iliodor, a former priest who had supported Rasputin before denouncing his sexual escapades and self-aggrandizement in December 1911. A radical conservative and anti-semite, Iliodor had been part of a group of establishment figures who had attempted to drive a wedge between the royal family and Rasputin in 1911. When this effort failed, Iliodor was banished from Saint Petersburg and was ultimately defrocked. Guseva claimed to have acted alone, having read about Rasputin in the newspapers and believing him to be a "false prophet and even an Antichrist". Both the police and Rasputin, however, believed that Iliodor had played some role in the attempt on Rasputin's life. Iliodor fled the country before he could be questioned about the assassination attempt, and Guseva was found to be not responsible for her actions by reason of insanity.
A group of nobles led by Prince Felix Yusupov, Grand Duke Dmitri Pavlovich, and right-wing politician Vladimir Purishkevich decided that Rasputin's influence over the tsarina had made him a threat to the empire, and they concocted a plan in December 1916 to kill him, apparently by luring him to the Yusupovs' Moika Palace.
Rasputin was murdered during the early morning on at the home of Felix Yusupov. He died of three gunshot wounds, one of which was a close-range shot to his forehead. Little is certain about his death beyond this, and the circumstances of his death have been the subject of considerable speculation. According to historian Douglas Smith, "what really happened at the Yusupov home on 17 December will never be known". The story that Yusupov recounted in his memoirs, however, has become the most frequently told version of events.
Yusupov claimed that he invited Rasputin to his home shortly after midnight and ushered him into the basement. Yusupov offered Rasputin tea and cakes which had been laced with cyanide. Rasputin initially refused the cakes but then began to eat them and, to Yusupov's surprise, he did not appear to be affected by the poison. Rasputin then asked for some Madeira wine (which had also been poisoned) and drank three glasses, but still showed no sign of distress. At around 2:30 am, Yusupov excused himself to go upstairs, where his fellow conspirators were waiting. He took a revolver from Dmitry Pavlovich, then returned to the basement and told Rasputin that he'd "better look at the crucifix and say a prayer", referring to a crucifix in the room, then shot him once in the chest. The conspirators then drove to Rasputin's apartment, with fellow conspirator Sukhotin wearing Rasputin's coat and hat in an attempt to make it look as though Rasputin had returned home that night. They then returned to the Moika Palace and Yusupov went back to the basement to ensure that Rasputin was dead. Suddenly, Rasputin leapt up and attacked Yusupov, who freed himself with some effort and fled upstairs. Rasputin followed and made it into the palace's courtyard before being shot by Purishkevich and collapsing into a snowbank. The conspirators then wrapped his body in cloth, drove it to the Petrovsky Bridge, and dropped it into the Malaya Nevka River.
News of Rasputin's murder spread quickly, even before his body was found. According to Douglas Smith, Purishkevich spoke openly about Rasputin's murder to two soldiers and to a policeman who was investigating reports of shots shortly after the event, but he urged them not to tell anyone else. An investigation was launched the next morning. The "Stock Exchange Gazette" ran a report of Rasputin's death "after a party in one of the most aristocratic homes in the center of the city" on the afternoon of .
Two workmen noticed blood on the railing of the Petrovsky Bridge and found a boot on the ice below, and police began searching the area. Rasputin's body was found under the river ice on 1 January (O.S. 19 December) approximately 200 meters downstream from the bridge. Dr. Dmitry Kosorotov, the city's senior autopsy surgeon, conducted an autopsy. Kosorotov's report was lost, but he later stated that Rasputin's body had shown signs of severe trauma, including three gunshot wounds (one at close range to the forehead), a slice wound to his left side, and many other injuries, many of which Kosorotov felt had been sustained post-mortem. Kosorotov found a single bullet in Rasputin's body but stated that it was too badly deformed and of a type too widely used to trace. He found no evidence that Rasputin had been poisoned. According to both Douglas Smith and Joseph Fuhrmann, Kosorotov found no water in Rasputin's lungs, and reports were incorrect that Rasputin had been thrown into the water alive. Some later accounts claimed that Rasputin's penis had been severed, but Kosorotov found his genitals intact.
Rasputin was buried on 2 January (O.S. 21 December) at a small church that Anna Vyrubova had been building at Tsarskoye Selo. The funeral was attended only by the royal family and a few of their intimates. Rasputin's wife, mistress, and children were not invited, although his daughters met with the royal family at Vyrubova's home later that day. His body was exhumed and burned by a detachment of soldiers shortly after the tsar abdicated the throne in March 1917, in order to prevent his burial site from becoming a rallying point for supporters of the old regime.
Some writers have suggested that agents of the British Secret Intelligence Service (BSIS) were involved in Rasputin's assassination. According to this theory, British agents were concerned that Rasputin was urging the tsar to make a separate peace with Germany, which would allow Germany to concentrate its military efforts on the Western Front. There are several variants of this theory, but they generally suggest that British intelligence agents were directly involved in planning and carrying out the assassination under the command of Samuel Hoare and Oswald Rayner, who had attended Oxford University with Yusupov, or that Rayner had personally shot Rasputin. However, historians do not seriously consider this theory. According to historian Douglas Smith, "there is no convincing evidence that places any British agents at the murder scene". Historian Keith Jeffery states that if British Intelligence agents had been involved in the assassination of Rasputin, "I would have expected to find some trace of that" in the Secret Intelligence Service archives, but no such evidence exists.
Rasputin's daughter, Maria Rasputin (born Matryona Rasputina) (1898–1977), emigrated to France after the October Revolution and then to the United States. There, she worked as a dancer and then a lion tamer in a circus. | https://en.wikipedia.org/wiki?curid=12804 |
Gemstone
A gemstone (also called a gem, fine gem, jewel, precious stone, or semi-precious stone) is a piece of mineral crystal which, in cut and polished form, is used to make jewelry or other adornments. However, certain rocks (such as lapis lazuli and opal) and occasionally organic materials that are not minerals (such as amber, jet, and pearl) are also used for jewelry and are therefore often considered to be gemstones as well. Most gemstones are hard, but some soft minerals are used in jewelry because of their luster or other physical properties that have aesthetic value. Rarity is another characteristic that lends value to a gemstone.
Apart from jewelry, from earliest antiquity engraved gems and hardstone carvings, such as cups, were major luxury art forms. A gem maker is called a lapidary or gemcutter; a diamond cutter is called a diamantaire.
The traditional classification in the West, which goes back to the ancient Greeks, begins with a distinction between "precious" and "semi-precious"; similar distinctions are made in other cultures. In modern use the precious stones are diamond, ruby, sapphire and emerald, with all other gemstones being semi-precious. This distinction reflects the rarity of the respective stones in ancient times, as well as their quality: all are translucent with fine color in their purest forms, except for the colorless diamond, and very hard, with hardnesses of 8 to 10 on the Mohs scale. Other stones are classified by their color, translucency and hardness. The traditional distinction does not necessarily reflect modern values; for example, while garnets are relatively inexpensive, a green garnet called tsavorite can be far more valuable than a mid-quality emerald. Another unscientific term for semi-precious gemstones used in art history and archaeology is hardstone. Use of the terms 'precious' and 'semi-precious' in a commercial context is, arguably, misleading in that it deceptively implies certain stones are intrinsically more valuable than others, which is not necessarily the case.
In modern times gemstones are identified by gemologists, who describe gems and their characteristics using technical terminology specific to the field of gemology. The first characteristic a gemologist uses to identify a gemstone is its chemical composition. For example, diamonds are made of carbon (C) and rubies of aluminium oxide (Al2O3). Many gems are crystals which are classified by their crystal system, such as cubic, trigonal, or monoclinic. Another term used is habit, the form the gem is usually found in. For example, diamonds, which have a cubic crystal system, are often found as octahedrons.
Gemstones are classified into different "groups", "species", and "varieties". For example, ruby is the red variety of the species corundum, while any other color of corundum is considered sapphire. Other examples are the emerald (green), aquamarine (blue), red beryl (red), goshenite (colorless), heliodor (yellow) and morganite (pink), which are all varieties of the mineral species beryl.
Gems are characterized in terms of refractive index, dispersion, specific gravity, hardness, cleavage, fracture and luster. They may exhibit pleochroism or double refraction. They may have luminescence and a distinctive absorption spectrum.
Material or flaws within a stone may be present as inclusions.
Gemstones may also be classified in terms of their "water". This is a recognized grading of the gem's luster, transparency, or "brilliance". Very transparent gems are considered "first water", while "second" or "third water" gems are those of a lesser transparency.
Gemstones have no universally accepted grading system. Diamonds are graded using a system developed by the Gemological Institute of America (GIA) in the early 1950s. Historically, all gemstones were graded using the naked eye. The GIA system included a major innovation: the introduction of 10x magnification as the standard for grading clarity. Other gemstones are still graded using the naked eye (assuming 20/20 vision).
A mnemonic device, the "four Cs" (color, cut, clarity, and carats), has been introduced to help the consumer understand the factors used to grade a diamond. With modification, these categories can be useful in understanding the grading of all gemstones. The four criteria carry different weight depending upon whether they are applied to colored gemstones or to colorless diamonds. In diamonds, cut is the primary determinant of value, followed by clarity and color. An ideally cut diamond will sparkle: it breaks light down into its constituent rainbow colors (dispersion), chops it up into bright little pieces (scintillation), and delivers it to the eye (brilliance). In its rough crystalline form, a diamond will do none of these things; it requires proper fashioning, and this is called "cut". In gemstones that have color, including colored diamonds, the purity and beauty of that color is the primary determinant of quality.
Physical characteristics that make a colored stone valuable are color, clarity to a lesser extent (emeralds will always have a number of inclusions), cut, unusual optical phenomena within the stone such as color zoning (the uneven distribution of coloring within a gem) and asteria (star effects). Ancient Greeks, for example, greatly valued asteria gemstones, which they regarded as powerful love charms, and Helen of Troy was known to have worn star-corundum.
Aside from the diamond, the ruby, sapphire, emerald, pearl (not, strictly speaking, a gemstone), and opal have also been considered to be precious. Until the discovery of bulk amethyst deposits in Brazil in the 19th century, amethyst was considered a "precious stone" as well, a status going back to ancient Greece. Even in the last century, certain stones such as aquamarine, peridot and cat's eye (cymophane) have been popular and hence been regarded as precious.
Today the gemstone trade no longer makes such a distinction. Many gemstones are used in even the most expensive jewelry, depending on the brand-name of the designer, fashion trends, market supply, treatments, etc. Nevertheless, diamonds, rubies, sapphires, and emeralds still have a reputation that exceeds those of other gemstones.
Rare or unusual gemstones, generally understood to include those gemstones which occur so infrequently in gem quality that they are scarcely known except to connoisseurs, include andalusite, axinite, cassiterite, clinohumite and red beryl.
Gemstone pricing and value are governed by the quality characteristics of the stone, including clarity, rarity, freedom from defects, and beauty, as well as the demand for such stones. The pricing influencers differ for colored gemstones and for diamonds. The pricing on colored stones is determined by market supply and demand, while diamond pricing is more intricate: diamond value can change based on location, time, and the evaluations of diamond vendors.
Proponents of energy medicine also value gemstones on the basis of alleged healing powers.
There are a number of laboratories which grade and provide reports on gemstones.
Each laboratory has its own methodology to evaluate gemstones. A stone can be called "pink" by one lab while another lab calls it "padparadscha". One lab can conclude a stone is untreated, while another lab might conclude that it is heat-treated. To minimise such differences, seven of the most respected labs, AGTA-GTL (New York), CISGEM (Milano), GAAJ-ZENHOKYO (Tokyo), GIA (Carlsbad), GIT (Bangkok), Gübelin (Lucerne) and SSEF (Basel), have established the Laboratory Manual Harmonisation Committee (LMHC) for the standardization of the wording of reports, the promotion of certain analytical methods, and the interpretation of results. Country of origin has sometimes been difficult to determine, due to the constant discovery of new source locations. Determining a "country of origin" is thus much more difficult than determining other aspects of a gem (such as cut, clarity, etc.).
Gem dealers are aware of the differences between gem laboratories and will make use of the discrepancies to obtain the best possible certificate.
A few gemstones are used as gems in the crystal or other form in which they are found. Most, however, are cut and polished for use in jewelry. The two main classifications are stones cut as smooth, dome-shaped stones called cabochons, and stones which are cut with a faceting machine by polishing small flat windows called facets at regular intervals and at exact angles.
Stones which are opaque or semi-opaque such as opal, turquoise, variscite, etc. are commonly cut as cabochons. These gems are designed to show the stone's color or surface properties as in opal and star sapphires. Grinding wheels and polishing agents are used to grind, shape and polish the smooth dome shape of the stones.
Gems which are transparent are normally faceted, a method which shows the optical properties of the stone's interior to its best advantage by maximizing reflected light which is perceived by the viewer as sparkle. There are many commonly used shapes for faceted stones. The facets must be cut at the proper angles, which vary depending on the optical properties of the gem. If the angles are too steep or too shallow, the light will pass through and not be reflected back toward the viewer. The faceting machine is used to hold the stone onto a flat lap for cutting and polishing the flat facets. Rarely, some cutters use special curved laps to cut and polish curved facets.
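The cutting angles follow from Snell's law: light striking a facet from inside the stone is totally internally reflected only when its angle to the facet normal exceeds the critical angle, sin θc = 1/n. A minimal sketch of that calculation, assuming typical published refractive indices for each gem:

```python
import math

def critical_angle_deg(refractive_index: float) -> float:
    """Angle from the facet normal beyond which light inside the stone
    is totally internally reflected instead of leaking out."""
    return math.degrees(math.asin(1.0 / refractive_index))

# Approximate refractive indices; a higher index means a smaller critical
# angle, so facets can return light to the viewer over a wider range of cuts.
for gem, n in [("quartz", 1.54), ("sapphire", 1.77), ("diamond", 2.42)]:
    print(f"{gem}: critical angle ~ {critical_angle_deg(n):.1f} degrees")
```

Diamond's small critical angle (about 24 degrees) is why a well-proportioned brilliant cut returns so much light, while a quartz stone cut to the same proportions would leak light through the pavilion.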
The color of any material is due to the nature of light itself. Daylight, often called white light, is all of the colors of the spectrum combined. When light strikes a material, most of the light is absorbed while a smaller amount of a particular frequency or wavelength is reflected. The part that is reflected reaches the eye as the perceived color. A ruby appears red because it absorbs all the other colors of white light, while reflecting the red.
A material which is mostly the same can exhibit different colors. For example, ruby and sapphire have the same primary chemical composition (both are corundum) but exhibit different colors because of impurities. Even the same named gemstone can occur in many different colors: sapphires show different shades of blue and pink and "fancy sapphires" exhibit a whole range of other colors from yellow to orange-pink, the latter called "padparadscha sapphire".
This difference in color is based on the atomic structure of the stone. Although the different stones formally have the same chemical composition and structure, they are not exactly the same. Every now and then an atom is replaced by a completely different atom, sometimes as few as one in a million atoms. These so-called impurities are sufficient to absorb certain colors and leave the other colors unaffected.
For example, beryl, which is colorless in its pure mineral form, becomes emerald with chromium impurities. If manganese is added instead of chromium, beryl becomes pink morganite. With iron, it becomes aquamarine.
Some gemstone treatments make use of the fact that these impurities can be "manipulated", thus changing the color of the gem.
Gemstones are often treated to enhance the color or clarity of the stone. Depending on the type and extent of treatment, these treatments can affect the value of the stone. Some treatments are used widely because the resulting gem is stable, while others are not accepted, most commonly because the gem color is unstable and may revert to the original tone.
Heat can either improve or spoil gemstone color or clarity. The heating process has been well known to gem miners and cutters for centuries, and in many stone types heating is a common practice. Most citrine is made by heating amethyst, and partial heating with a strong gradient results in “ametrine” – a stone partly amethyst and partly citrine. Aquamarine is often heated to remove yellow tones, or to change green colors into the more desirable blue, or enhance its existing blue color to a deeper blue.
Nearly all tanzanite is heated at low temperatures to remove brown undertones and give a more desirable blue / purple color. A considerable portion of all sapphire and ruby is treated with a variety of heat treatments to improve both color and clarity.
When jewelry containing diamonds is heated (for repairs) the diamond should be protected with boric acid; otherwise the diamond (which is pure carbon) could be burned on the surface or even burned up completely. When jewelry containing sapphires or rubies is heated, those stones should not be coated with boric acid (which can etch the surface) or any other substance. They do not have to be protected from burning, like a diamond (although the stones "do" need to be protected from heat stress fracture by immersing the part of the jewelry with stones in water when metal parts are heated).
Virtually all blue topaz, both the lighter and the darker blue shades such as "London" blue, has been irradiated to change the color from white to blue. Most greened quartz (Oro Verde) is also irradiated to achieve the yellow-green color. Diamonds are irradiated to produce fancy-color diamonds (which can occur naturally, though rarely in gem quality).
Emeralds containing natural fissures are sometimes filled with wax or oil to disguise them. This wax or oil is also colored to make the emerald appear of better color as well as clarity. Turquoise is also commonly treated in a similar manner.
Fracture filling has been in use with different gemstones such as diamonds, emeralds and sapphires. In 2006 "glass filled rubies" received publicity. Rubies over 10 carats (2 g) with large fractures were filled with lead glass, thus dramatically improving the appearance (of larger rubies in particular). Such treatments are fairly easy to detect.
Synthetic gemstones are distinct from imitation or simulated gems.
Synthetic gems are physically, optically, and chemically identical to the natural stone, but are created in a laboratory. Imitation or simulated stones are chemically different from the natural stone, but may appear quite similar to it; they may be more easily manufactured synthetic gemstones of a different mineral (such as spinel), glass, plastic, resins, or other compounds.
Examples of simulated or imitation stones include cubic zirconia, composed of zirconium oxide, synthetic moissanite, and uncolored synthetic corundum or spinels, all of which are diamond simulants. The simulants imitate the look and color of the real stone but possess neither its chemical nor its physical characteristics. In general, all are less hard than diamond. Moissanite actually has a "higher" refractive index than diamond, and when presented beside an equivalently sized and cut diamond will show more "fire".
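The effect of refractive index on brilliance can be quantified with the Fresnel equation for normal incidence, R = ((n − 1)/(n + 1))²: a higher-index stone reflects more light at each surface. A sketch using commonly quoted index values (the exact figures vary with wavelength):

```python
def fresnel_reflectance(n: float) -> float:
    """Fraction of light reflected at normal incidence from air (n = 1)."""
    return ((n - 1) / (n + 1)) ** 2

# Commonly quoted refractive indices: diamond ~2.42, moissanite ~2.65.
for name, n in [("diamond", 2.42), ("moissanite", 2.65)]:
    print(f"{name}: {fresnel_reflectance(n) * 100:.1f}% reflected per surface")
```

By this measure moissanite reflects roughly 20% of incident light per surface versus about 17% for diamond, consistent with its brighter, more dispersive appearance.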
Cultured, synthetic, or "lab-created" gemstones are not imitations: the bulk mineral and trace coloring elements are the same in both. For example, diamonds, rubies, sapphires, and emeralds have been manufactured in labs with chemical and physical characteristics identical to the naturally occurring variety. Synthetic (lab-created) corundum, including ruby and sapphire, is very common and costs much less than the natural stones. Small synthetic diamonds have been manufactured in large quantities as industrial abrasives, although larger gem-quality synthetic diamonds are becoming available in multiple carats.
Whether a gemstone is a natural stone or synthetic, the chemical, physical, and optical characteristics are the same: They are composed of the same mineral and are colored by the same trace materials, have the same hardness and density and strength, and show the same color spectrum, refractive index, and birefringence (if any). Lab-created stones tend to have a more vivid color, since impurities common in natural stones are not present in a synthetic stone. Synthetics are made free of common naturally occurring impurities that reduce gem clarity or color, unless intentionally added in order to provide a more drab, natural appearance, or to deceive an assayer. On the other hand, synthetics often show flaws not seen in natural stones, such as minute particles of corroded metal from lab trays used during synthesis.
Gerard David
Gerard David (c. 1460 – 13 August 1523) was an Early Netherlandish painter and manuscript illuminator known for his brilliant use of color. Only a bare outline of his life survives, although some facts are known. He may have been the Meester gheraet van brugghe who became a master of the Antwerp guild in 1515. He was very successful in his lifetime and probably ran two workshops, in Antwerp and Bruges. Like many painters of his period, his reputation diminished in the 17th century until he was rediscovered in the 19th century.
He was born in Oudewater, now located in the province of Utrecht. His year of birth is approximated as c. 1460 on the basis that he looks to be around 50 years old in the 1509 self-portrait found in his "Virgin among the Virgins". He spent his mature career in Bruges, where he was a member of the painters' guild. Upon the death of Hans Memling in 1494, David became Bruges' leading painter. He moved to Bruges in 1483, presumably from Haarlem, where he had formed his early style under Albert van Oudewater, and joined the Guild of Saint Luke at Bruges in 1484. He became dean of the guild in 1501, and in 1496 married Cornelia Cnoop, daughter of the dean of the goldsmiths' guild. David was one of the town's leading citizens.
Ambrosius Benson served his apprenticeship with David, but they came into dispute around 1519 over a number of paintings and drawings Benson had collected from other artists. Because of a large debt owed to him by Benson, David had refused to return the material. Benson pursued the matter legally and won, leading to David serving time in prison.
He died on 13 August 1523 and was buried in the Church of Our Lady at Bruges.
David had been completely forgotten when in the early 1860s he was rescued from oblivion by William Henry James Weale, whose researches in the archives of Bruges brought to light the main facts of the painter's life and led to the reconstruction of David's artistic personality, beginning with the recognition of David's only documented work, the "Virgin Among Virgins" at Rouen.
David's surviving work mainly consists of religious scenes. They are characterised by an atmospheric, timeless, and almost dream like serenity, achieved through soft, warm and subtle colourisation, and masterful handling of light and shadow. He is innovative in his recasting of traditional themes and in his approach to landscape, which was then only an emerging genre in northern European painting. His ability with landscape can be seen in the detailed foliage of his "Triptych of the Baptism" and the forest scene in the New York "Nativity".
Many of the art historians of the early 20th century, including Erwin Panofsky and Max Jakob Friedländer, saw him as a painter who did little but distill the style of others and painted in an archaic and unimaginative style. Today, however, most view him as a master colourist, and a painter who, according to the Metropolitan Museum of Art, worked in a "progressive, even enterprising, mode, casting off his late medieval heritage and proceeding with a certain purity of vision in an age of transition."
In his early work David followed Haarlem artists such as Dirk Bouts, Albert van Oudewater and Geertgen tot Sint Jans, though he had already given evidence of superior power as a colourist. To this early period belong the "St John" of the collection in Berlin and the Salting collection's "St Jerome". In Bruges he came directly under the influence of Memling, the master whom he followed most closely. It was from him that David acquired a solemnity of treatment, greater realism in the rendering of human form, and an orderly arrangement of figures.
He visited Antwerp in 1515 and was impressed with the work of Quentin Matsys, who had introduced a greater vitality and intimacy in the conception of sacred themes.
The works for which David is best known are the altarpieces painted before his visit to Antwerp: the "Marriage of St Catherine" at the National Gallery, London; the triptych of the "Madonna Enthroned and Saints" of the Brignole-Sale collection in Genoa; the "Annunciation" of the Sigmaringen collection; and above all, the "Madonna with Angels and Saints" (usually titled "The Virgin among the Virgins"), which he donated to the Carmelite Nuns of Sion at Bruges, and which is now in the Rouen museum.
Only a few of his works have remained in Bruges: "The Judgment of Cambyses", "The Flaying of Sisamnes" and the "Baptism of Christ" in the Groeningemuseum, and the "Transfiguration" in the Church of Our Lady.
The rest were scattered around the world, and to this may be due the oblivion into which his very name had fallen; this, and the fact that some believed that, for all the beauty and soulfulness of his work, he had nothing innovative to add to the history of art.
Even in his best work, he had only given newer variations on the art of his predecessors and contemporaries. His rank among the masters was renewed, however, when a number of his paintings were assembled for the seminal 1902 exhibition of early Flemish painters at the Gruuthusemuseum in Bruges.
He also worked closely with the leading manuscript illuminators of the day, and seems to have been brought in to paint specific important miniatures himself, among them a "Virgin among the Virgins" in the Morgan Library, a "Virgin and Child on a Crescent Moon" in the "Rothschild Prayerbook", and a portrait of the Emperor Maximilian in Vienna. Several of his drawings also survive, and elements from these appear in the works of other painters and illuminators for several decades after his death.
Less known but also of high quality are the works by David found in Spanish public collections. The Prado Museum in Madrid owns a panel, "Rest on the Flight into Egypt", resembling the one in the Royal Museum of Fine Arts in Antwerp. The Prado also holds another two works by the painter, one of them only attributed. Another of the Spanish capital's museums, the Thyssen-Bornemisza, holds a "Crucifixion" from 1475.
At the time of David's death, the glory of Bruges and its painters was on the wane: Antwerp had become the leader in art as well as in political and commercial importance. Of David's pupils in Bruges, only Isenbrant, Albert Cornelis and Ambrosius Benson achieved importance. Among other Flemish painters, Joachim Patinir and Jan Mabuse were to some degree influenced by him.
GSM
The Global System for Mobile Communications (GSM) is a standard developed by the European Telecommunications Standards Institute (ETSI) to describe the protocols for second-generation (2G) digital cellular networks used by mobile devices such as mobile phones and tablets. It was first deployed in Finland in December 1991. By the mid-2010s, it became a global standard for mobile communications achieving over 90% market share, and operating in over 193 countries and territories.
2G networks developed as a replacement for first generation (1G) analog cellular networks. The GSM standard originally described a digital, circuit-switched network optimized for full duplex voice telephony. This expanded over time to include data communications, first by circuit-switched transport, then by packet data transport via General Packet Radio Service (GPRS), and Enhanced Data Rates for GSM Evolution (EDGE).
Subsequently, the 3GPP developed third-generation (3G) UMTS standards, followed by fourth-generation (4G) LTE Advanced standards, which do not form part of the ETSI GSM standard.
"GSM" is a trade mark owned by the GSM Association. It may also refer to the (initially) most common voice codec used, Full Rate.
In 1983, work began to develop a European standard for digital cellular voice telecommunications when the European Conference of Postal and Telecommunications Administrations (CEPT) set up the "Groupe Spécial Mobile" (GSM) committee and later provided a permanent technical-support group based in Paris. Five years later, in 1987, 15 representatives from 13 European countries signed a memorandum of understanding in Copenhagen to develop and deploy a common cellular telephone system across Europe, and EU rules were passed to make GSM a mandatory standard. The decision to develop a continental standard eventually resulted in a unified, open, standard-based network which was larger than that in the United States.
In February 1987 Europe produced the very first agreed GSM Technical Specification. Ministers from the four big EU countries cemented their political support for GSM with the Bonn Declaration on Global Information Networks in May and the GSM MoU was tabled for signature in September. The MoU drew in mobile operators from across Europe to pledge to invest in new GSM networks to an ambitious common date.
In this short 38-week period the whole of Europe (countries and industries) had been brought behind GSM in a rare unity and speed guided by four public officials: Armin Silberhorn (Germany), Stephen Temple (UK), Philippe Dupuis (France), and Renzo Failli (Italy). In 1989 the Groupe Spécial Mobile committee was transferred from CEPT to the European Telecommunications Standards Institute (ETSI).
In parallel France and Germany signed a joint development agreement in 1984 and were joined by Italy and the UK in 1986. In 1986, the European Commission proposed reserving the 900 MHz spectrum band for GSM. The former Finnish prime minister Harri Holkeri made the world's first GSM call on July 1, 1991, calling Kaarina Suonio (deputy mayor of the city of Tampere) using a network built by Nokia and Siemens and operated by Radiolinja. The following year saw the sending of the first short messaging service (SMS or "text message") message, and Vodafone UK and Telecom Finland signed the first international roaming agreement.
Work began in 1991 to expand the GSM standard to the 1800 MHz frequency band, and the first 1800 MHz network, called DCS 1800, became operational in the UK by 1993. Also that year, Telecom Australia became the first network operator to deploy a GSM network outside Europe and the first practical hand-held GSM mobile phone became available.
In 1995 fax, data and SMS messaging services were launched commercially, the first 1900 MHz GSM network became operational in the United States and GSM subscribers worldwide exceeded 10 million. In the same year, the GSM Association formed. Pre-paid GSM SIM cards were launched in 1996 and worldwide GSM subscribers passed 100 million in 1998.
In 2000 the first commercial GPRS services were launched and the first GPRS-compatible handsets became available for sale. In 2001, the first UMTS (W-CDMA) network was launched, a 3G technology that is not part of GSM. Worldwide GSM subscribers exceeded 500 million. In 2002, the first Multimedia Messaging Service (MMS) was introduced and the first GSM network in the 800 MHz frequency band became operational. EDGE services first became operational in a network in 2003, and the number of worldwide GSM subscribers exceeded 1 billion in 2004.
By 2005 GSM networks accounted for more than 75% of the worldwide cellular network market, serving 1.5 billion subscribers. In 2005, the first HSDPA-capable network also became operational. The first HSUPA network launched in 2007. (High-Speed Packet Access (HSPA) and its uplink and downlink versions are 3G technologies, not part of GSM.) Worldwide GSM subscribers exceeded three billion in 2008.
The GSM Association estimated in 2011 that technologies defined in the GSM standard served 80% of the mobile market, encompassing more than 5 billion people across more than 212 countries and territories, making GSM the most ubiquitous of the many standards for cellular networks.
GSM is a second-generation (2G) standard employing time-division multiple access (TDMA) spectrum sharing, issued by the European Telecommunications Standards Institute (ETSI). The GSM standard does not include the 3G Universal Mobile Telecommunications System (UMTS) code division multiple access (CDMA) technology nor the 4G LTE orthogonal frequency-division multiple access (OFDMA) technology standards issued by the 3GPP.
GSM, for the first time, set a common standard for Europe for wireless networks. It was also adopted by many countries outside Europe. This allowed subscribers to use other GSM networks that have roaming agreements with each other. The common standard reduced research and development costs, since hardware and software could be sold with only minor adaptations for the local market.
Telstra in Australia shut down its 2G GSM network on December 1, 2016, the first mobile network operator to decommission a GSM network. The second mobile provider to shut down its GSM network (on January 1, 2017) was AT&T Mobility from the United States.
Optus in Australia completed the shutdown of its 2G GSM network on August 1, 2017; the part of the Optus GSM network covering Western Australia and the Northern Territory had been shut down earlier that year, in April 2017.
Singapore shut down 2G services entirely in April 2017.
The network is structured into several discrete sections:
GSM utilizes a cellular network, meaning that cell phones connect to it by searching for cells in the immediate vicinity. There are five different cell sizes in a GSM network:
The coverage area of each cell varies according to the implementation environment. Macro cells can be regarded as cells where the base-station antenna is installed on a mast or a building above average rooftop level. Micro cells are cells whose antenna height is under average rooftop level; they are typically deployed in urban areas. Picocells are small cells whose coverage diameter is a few dozen meters; they are mainly used indoors. Femtocells are cells designed for use in residential or small-business environments and connect to a telecommunications service provider's network via a broadband-internet connection. Umbrella cells are used to cover shadowed regions of smaller cells and to fill in gaps in coverage between those cells.
Cell horizontal radius varies – depending on antenna height, antenna gain, and propagation conditions – from a couple of hundred meters to several tens of kilometers. The longest distance the GSM specification supports in practical use is 35 kilometres. There are also several implementations of the concept of an extended cell, where the cell radius could be double or even more, depending on the antenna system, the type of terrain, and the timing advance.
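The practical distance limit comes from the timing-advance mechanism: the base station instructs each handset to transmit early, in steps of one bit period, and the timing-advance field allows at most 63 steps. A sketch of the arithmetic, using the standard GSM bit period of 48/13 µs:

```python
SPEED_OF_LIGHT = 299_792_458      # metres per second
BIT_PERIOD = 48 / 13 * 1e-6       # seconds; the GSM symbol (bit) duration
MAX_TA_STEPS = 63                 # timing-advance field is 6 bits wide

# Each step compensates one bit period of *round-trip* delay, hence the /2.
metres_per_step = SPEED_OF_LIGHT * BIT_PERIOD / 2
max_range_km = MAX_TA_STEPS * metres_per_step / 1000
print(f"~{metres_per_step:.0f} m per step, maximum range ~ {max_range_km:.1f} km")
```

This reproduces the roughly 550 m granularity and the ~35 km practical cell-range limit; extended-cell implementations are one way around that limit.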
GSM supports indoor coverage – achievable by using an indoor picocell base station, or an indoor repeater with distributed indoor antennas fed through power splitters – to deliver the radio signals from an antenna outdoors to the separate indoor distributed antenna system. Picocells are typically deployed when significant call capacity is needed indoors, as in shopping centers or airports. However, this is not a prerequisite, since indoor coverage is also provided by in-building penetration of radio signals from any nearby cell.
GSM networks operate in a number of different carrier frequency ranges (separated into GSM frequency ranges for 2G and UMTS frequency bands for 3G), with most 2G GSM networks operating in the 900 MHz or 1800 MHz bands. Where these bands were already allocated, the 850 MHz and 1900 MHz bands were used instead (for example in Canada and the United States). In rare cases the 400 and 450 MHz frequency bands are assigned in some countries because they were previously used for first-generation systems.
For comparison, most 3G networks in Europe operate in the 2100 MHz frequency band. For more information on worldwide GSM frequency usage, see GSM frequency bands.
Regardless of the frequency selected by an operator, it is divided into timeslots for individual phones. This allows eight full-rate or sixteen half-rate speech channels per radio frequency. These eight radio timeslots (or burst periods) are grouped into a TDMA frame. Half-rate channels use alternate frames in the same timeslot. The channel data rate for all 8 channels is 270.833 kbit/s, and the frame duration is 4.615 ms.
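These figures are mutually consistent and easy to check: each timeslot burst occupies 156.25 bit periods, so eight timeslots at the 270.833 kbit/s channel rate give the TDMA frame duration. A quick sketch of the arithmetic:

```python
CHANNEL_RATE_BPS = 270_833        # GSM channel data rate, ~270.833 kbit/s
BITS_PER_TIMESLOT = 156.25        # one burst period, guard time included
TIMESLOTS_PER_FRAME = 8

frame_bits = BITS_PER_TIMESLOT * TIMESLOTS_PER_FRAME   # 1250 bits
frame_ms = frame_bits / CHANNEL_RATE_BPS * 1000
print(f"TDMA frame: {frame_bits:.0f} bits, {frame_ms:.3f} ms")
```

This yields the 4.615 ms frame; a full-rate channel gets one timeslot per frame, a half-rate channel one timeslot in every other frame.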
The transmission power in the handset is limited to a maximum of 2 watts in GSM 850/900 and 1 watt in GSM 1800/1900.
GSM has used a variety of voice codecs to squeeze 3.1 kHz audio into between 7 and 13 kbit/s. Originally, two codecs, named after the types of data channel they were allocated, were used, called Half Rate (6.5 kbit/s) and Full Rate (13 kbit/s). These used a system based on linear predictive coding (LPC). In addition to being efficient with bitrates, these codecs also made it easier to identify more important parts of the audio, allowing the air interface layer to prioritize and better protect these parts of the signal. GSM was further enhanced in 1997 with the enhanced full rate (EFR) codec, a 12.2 kbit/s codec that uses a full-rate channel. Finally, with the development of UMTS, EFR was refactored into a variable-rate codec called AMR-Narrowband, which is high quality and robust against interference when used on full-rate channels, or less robust but still relatively high quality when used in good radio conditions on half-rate channels.
One of the key features of GSM is the Subscriber Identity Module, commonly known as a SIM card. The SIM is a detachable smart card containing the user's subscription information and phone book. This allows the user to retain his or her information after switching handsets. Alternatively, the user can change operators while retaining the handset simply by changing the SIM.
Sometimes mobile network operators restrict handsets that they sell for exclusive use in their own network. This is called SIM locking and is implemented by a software feature of the phone. A subscriber may usually contact the provider to remove the lock for a fee, utilize private services to remove the lock, or use software and websites to unlock the handset themselves. It is possible to hack past a phone locked by a network operator.
In some countries and regions (e.g., Bangladesh, Belgium, Brazil, Canada, Chile, Germany, Hong Kong, India, Iran, Lebanon, Malaysia, Nepal, Norway, Pakistan, Poland, Singapore, South Africa, Sri Lanka, Thailand) all phones are sold unlocked due to the abundance of dual SIM handsets and operators.
GSM was intended to be a secure wireless system. It considered user authentication using a pre-shared key and challenge-response, as well as over-the-air encryption. However, GSM is vulnerable to different types of attack, each of them aimed at a different part of the network.
The development of UMTS introduced an optional Universal Subscriber Identity Module (USIM), that uses a longer authentication key to give greater security, as well as mutually authenticating the network and the user, whereas GSM only authenticates the user to the network (and not vice versa). The security model therefore offers confidentiality and authentication, but limited authorization capabilities, and no non-repudiation.
GSM uses several cryptographic algorithms for security. The A5/1, A5/2, and A5/3 stream ciphers are used for ensuring over-the-air voice privacy. A5/1 was developed first and is a stronger algorithm used within Europe and the United States; A5/2 is weaker and used in other countries. Serious weaknesses have been found in both algorithms: it is possible to break A5/2 in real-time with a ciphertext-only attack, and in January 2007, The Hacker's Choice started the A5/1 cracking project with plans to use FPGAs that allow A5/1 to be broken with a rainbow table attack. The system supports multiple algorithms so operators may replace that cipher with a stronger one.
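A5/1 and A5/2 are stream ciphers: they expand a session key into a pseudorandom keystream that is XORed with the voice bits, and the receiver applies the same XOR to decrypt. The real A5/1 combines three irregularly clocked LFSRs; the toy sketch below uses a single LFSR (with hypothetical taps and key) purely to illustrate the keystream-XOR principle, not the actual algorithm:

```python
def lfsr_keystream(state: int, taps: tuple, nbits: int, length: int) -> list:
    """Pseudorandom bits from a simple linear feedback shift register."""
    bits = []
    for _ in range(length):
        bits.append(state & 1)
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = (state >> 1) | (feedback << (nbits - 1))
    return bits

def stream_xor(data: bytes, key: int) -> bytes:
    """XOR every bit of data with the keystream; running it twice decrypts."""
    keystream = lfsr_keystream(key, taps=(0, 2, 3, 5), nbits=16,
                               length=len(data) * 8)
    out = bytearray(data)
    for i, bit in enumerate(keystream):
        out[i // 8] ^= bit << (i % 8)
    return bytes(out)

frame = b"20 ms of speech bits"
ciphertext = stream_xor(frame, key=0xACE1)
assert stream_xor(ciphertext, key=0xACE1) == frame  # XOR is its own inverse
```

A single LFSR like this is linear and trivially breakable, which is exactly why A5/1 mixes three registers with majority-vote clocking, and why even that design eventually succumbed to the attacks described in this section.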
Since 2000 different efforts have been made in order to crack the A5 encryption algorithms. Both A5/1 and A5/2 algorithms have been broken, and their cryptanalysis has been revealed in the literature. As an example, Karsten Nohl developed a number of rainbow tables (static values which reduce the time needed to carry out an attack) and found new sources for known-plaintext attacks. He said that it is possible to build "a full GSM interceptor ... from open-source components" but that they had not done so because of legal concerns. Nohl claimed that he was able to intercept voice and text conversations by impersonating another user to listen to voicemail, make calls, or send text messages using a seven-year-old Motorola cellphone and decryption software available for free online.
GSM uses General Packet Radio Service (GPRS) for data transmissions like browsing the web. The most commonly deployed GPRS ciphers were publicly broken in 2011.
The researchers revealed flaws in the commonly used GEA/1 and GEA/2 ciphers and published the open-source "gprsdecode" software for sniffing GPRS networks. They also noted that some carriers do not encrypt the data (i.e., using GEA/0) in order to detect the use of traffic or protocols they do not like (e.g., Skype), leaving customers unprotected. GEA/3 seems to remain relatively hard to break and is said to be in use on some more modern networks. If used with USIM to prevent connections to fake base stations and downgrade attacks, users will be protected in the medium term, though migration to 128-bit GEA/4 is still recommended.
The GSM systems and services are described in a set of standards governed by ETSI, where a full list is maintained.
Several open source software projects exist that provide certain GSM features:
Patents remain a problem for any open-source GSM implementation, because it is not possible for GNU or any other free software distributor to guarantee immunity from all lawsuits by the patent holders against the users. Furthermore, new features are being added to the standard all the time, which means they have patent protection for a number of years.
The original GSM implementations from 1991 may now be entirely free of patent encumbrances; however, patent freedom is not certain due to the United States' "first to invent" system that was in place until 2012. The "first to invent" system, coupled with "patent term adjustment", can extend the life of a U.S. patent far beyond 20 years from its priority date. It is unclear at this time whether OpenBTS will be able to implement features of that initial specification without limit. As patents subsequently expire, however, those features can be added into the open-source version. To date, there have been no lawsuits against users of OpenBTS over GSM use.
Garry Kasparov
Garry Kimovich Kasparov (born Garik Kimovich Weinstein; 13 April 1963) is a Russian chess grandmaster, former World Chess Champion, writer, and political activist, whom many consider to be the greatest chess player of all time. From 1986 until his retirement in 2005, Kasparov was ranked world No. 1 for 225 out of 228 months and 255 months overall for his career. His peak rating of 2851, achieved in 1999, was the highest recorded until being surpassed by Magnus Carlsen in 2013. Kasparov also holds records for consecutive professional tournament victories (15) and Chess Oscars (11).
Kasparov became the youngest ever undisputed World Chess Champion in 1985 at age 22 by defeating then-champion Anatoly Karpov. He held the official FIDE world title until 1993, when a dispute with FIDE led him to set up a rival organization, the Professional Chess Association. In 1997 he became the first world champion to lose a match to a computer under standard time controls, when he lost to the IBM supercomputer Deep Blue in a highly publicized match. He continued to hold the "Classical" World Chess Championship until his defeat by Vladimir Kramnik in 2000. In spite of losing the title, he continued winning tournaments and was the world's highest-rated player when he retired from professional chess in 2005.
After Kasparov retired, he devoted his time to politics and writing. He formed the United Civil Front movement and joined The Other Russia, a coalition opposing the administration and policies of Vladimir Putin. In 2008, he announced an intention to run in that year's Russian presidential election, but withdrew after he was unable to rent a venue large enough to assemble the number of supporters legally required to endorse such a candidacy. Kasparov blamed "official obstruction" for the lack of available space. He is widely regarded in the West as a symbol of opposition to Putin, and he was barred from the presidential ballot, as the political climate in Russia makes it difficult for opposition candidates to organize.
Kasparov is currently chairman of the Human Rights Foundation and chairs its International Council. In 2017, he founded the Renew Democracy Initiative (RDI), an American political organization promoting and defending liberal democracy in the U.S. and abroad, which he also chairs.
Kasparov was born Garik Kimovich Weinstein (Russian: Гарик Вайнштейн) in Baku, Azerbaijan SSR (now Azerbaijan), Soviet Union. His father, Kim Moiseyevich Weinstein, was Jewish, and his mother, Klara Shagenovna Kasparova, was Armenian. Kasparov has described himself as a "self-appointed Christian", although "very indifferent", and identifies as Russian.
Kasparov began the serious study of chess after he came across a chess problem set up by his parents and proposed a solution. His father died of leukemia when Garry was seven years old. At the age of twelve, at the request of his mother Klara and with the consent of the family, Garry adopted his mother's surname Kasparov in order to avoid the antisemitic tensions that were common in the USSR at the time.
From age 7, Kasparov attended the Young Pioneer Palace in Baku and, at 10, began training at Mikhail Botvinnik's chess school under noted coach Vladimir Makogonov. Makogonov helped develop Kasparov's positional skills and taught him to play the Caro-Kann Defence and the Tartakower System of the Queen's Gambit Declined. Kasparov won the Soviet Junior Championship in Tbilisi in 1976, scoring 7 of 9 points at age 13. He repeated the feat the following year, winning with a score of 8½ of 9. He was trained by Alexander Shakarov during this time.
In 1978, Kasparov participated in the Sokolsky Memorial tournament in Minsk. He had been invited as an exception but took first place and became a chess master. Kasparov has repeatedly said that this event was a turning point in his life, and that it convinced him to choose chess as his career. "I will remember the Sokolsky Memorial as long as I live", he wrote. He has also said that after the victory, he thought he had a very good shot at the World Championship.
He first qualified for the Soviet Chess Championship at age 15 in 1978, the youngest ever player at that level. He won the 64-player Swiss system tournament at Daugavpils on tiebreak over Igor V. Ivanov to capture the sole qualifying place.
Kasparov rose quickly through the FIDE world rankings. Owing to an oversight by the Russian Chess Federation, he participated in a grandmaster tournament in Banja Luka, Bosnia and Herzegovina (then part of Yugoslavia), in 1979 while still unrated (he was a replacement for the Soviet defector Viktor Korchnoi, who was originally invited but withdrew due to the threat of a boycott from the Soviets). Kasparov won this high-class tournament, emerging with a provisional rating of 2595, enough to catapult him to the top group of chess players (at the time, number 15 in the world). The next year, 1980, he won the World Junior Chess Championship in Dortmund, West Germany. Later that year, he made his debut as second reserve for the Soviet Union at the Chess Olympiad at Valletta, Malta, and became a Grandmaster.
As a teenager, Kasparov tied for first place in the USSR Chess Championship in 1981–82. His first win in a superclass-level international tournament was scored at Bugojno, Yugoslavia in 1982. He earned a place in the 1982 Moscow Interzonal tournament, which he won, to qualify for the Candidates Tournament. At age 19, he was the youngest Candidate since Bobby Fischer, who was 15 when he qualified in 1958. At this stage, he was already the No. 2-rated player in the world, trailing only World Chess Champion Anatoly Karpov on the January 1983 list.
Kasparov's first (quarter-final) Candidates match was against Alexander Beliavsky, whom he defeated 6–3 (four wins, one loss). Politics threatened Kasparov's semi-final against Viktor Korchnoi, which was scheduled to be played in Pasadena, California. Korchnoi had defected from the Soviet Union in 1976, and was at that time the strongest active non-Soviet player. Various political maneuvers prevented Kasparov from playing Korchnoi, and Kasparov forfeited the match. This was resolved by Korchnoi allowing the match to be replayed in London, along with the previously scheduled match between Vasily Smyslov and Zoltán Ribli. The Kasparov-Korchnoi match was put together on short notice by Raymond Keene. Kasparov lost the first game but won the match 7–4 (four wins, one loss).
In January 1984, Kasparov became the No. 1 ranked player in the world, with a FIDE rating of 2710. He became the youngest ever world No. 1, a record that lasted 12 years until being broken by Vladimir Kramnik in January 1996; the record is currently held by Magnus Carlsen.
Later in 1984, he won the Candidates' final 8½–4½ (four wins, no losses) against the resurgent former world champion Vasily Smyslov, at Vilnius, thus qualifying to play Anatoly Karpov for the World Championship. That year he joined the Communist Party of the Soviet Union (CPSU), as a member of which he was elected to the Central Committee of Komsomol in 1987.
The World Chess Championship 1984 match between Anatoly Karpov and Garry Kasparov had many ups and downs, and a very controversial finish. Karpov started in very good form, and after nine games Kasparov was down 4–0 in a "first to six wins" match. Fellow players predicted he would be whitewashed 6–0 within 18 games.
In an unexpected turn of events, there followed a series of 17 successive draws, some relatively short, and others drawn in unsettled positions. Kasparov lost game 27 (5–0), then fought back with another series of draws until game 32 (5–1), earning his first-ever win against the World Champion. Another 14 successive draws followed, through game 46; the previous record length for a world title match had been 34 games, the match of José Raúl Capablanca vs. Alexander Alekhine in 1927.
Kasparov won games 47 and 48 to bring the scores to 5–3 in Karpov's favour. Then the match was ended without result by Florencio Campomanes, the President of the Fédération Internationale des Échecs (FIDE), and a new match was announced to start a few months later. The termination was controversial, as both players stated that they preferred the match to continue. Announcing his decision at a press conference, Campomanes cited the health of the players, which had been strained by the length of the match.
The match became the first, and so far only, world championship match to be abandoned without result. Kasparov's relations with Campomanes and FIDE were greatly strained, and the feud between them finally came to a head in 1993 with Kasparov's complete break-away from FIDE.
The second Karpov-Kasparov match in 1985 was organized in Moscow as the best of 24 games, with the first player to score 12½ points claiming the World Champion title. The scores from the terminated match would not carry over; however, in the event of a 12–12 draw, the title would remain with Karpov. On 9 November 1985, Kasparov secured the title by a score of 13–11, winning the 24th game with Black, using a Sicilian Defence. He was 22 years old at the time, making him the youngest ever World Champion, and breaking the record held by Mikhail Tal for over 20 years. Kasparov's win as Black in the 16th game has been recognized as one of the all-time masterpieces in chess history.
As part of the arrangements following the aborted 1984 match, Karpov had been granted (in the event of his defeat) a right to rematch. Another match took place in 1986, hosted jointly in London and Leningrad, with each city hosting 12 games. At one point in the match, Kasparov opened a three-point lead and looked well on his way to a decisive match victory. But Karpov fought back by winning three consecutive games to level the score late in the match. At this point, Kasparov dismissed one of his seconds, grandmaster Evgeny Vladimirov, accusing him of selling his opening preparation to the Karpov team (as described in Kasparov's autobiography "Unlimited Challenge", chapter Stab in the Back). Kasparov scored one more win and kept his title by a final score of 12½–11½.
A fourth match for the world title took place in 1987 in Seville, as Karpov had qualified through the Candidates' Matches to again become the official challenger. This match was very close, with neither player holding more than a one-point lead at any time during the contest. Kasparov was down one full point at the time of the final game, and needed a win to draw the match and retain his title. A long tense game ensued in which Karpov blundered away a pawn just before the first time control, and Kasparov eventually won a long ending. Kasparov retained his title as the match was drawn by a score of 12–12. (All this meant that Kasparov had played Karpov four times in the period 1984–87, a statistic unprecedented in chess. Matches organized by FIDE had taken place every three years since 1948, and only Botvinnik had a right to a rematch before Karpov.)
A fifth match between Kasparov and Karpov was held in New York and Lyon in 1990, with each city hosting 12 games. Again, the result was a close one with Kasparov winning by a margin of 12½–11½. In their five world championship matches, Kasparov had 21 wins, 19 losses, and 104 draws in 144 games.
With the World Champion title in hand, Kasparov began opposing FIDE. In November 1986, he created the Grandmasters Association (GMA), an organization to represent professional chess players and give them more say in FIDE's activities. Kasparov assumed a leadership role. GMA's major achievement was in organizing a series of six World Cup tournaments for the world's top players. A somewhat uneasy relationship developed with FIDE, and a sort of truce was brokered by Bessel Kok, a Dutch businessman.
This stand-off lasted until 1993, by which time a new challenger had qualified through the Candidates cycle for Kasparov's next World Championship defense: Nigel Short, a British grandmaster who had defeated Anatoly Karpov in a qualifying match, and then Jan Timman in the finals held in early 1993. After a confusing and compressed bidding process produced lower financial estimates than expected, the world champion and his challenger decided to play outside FIDE's jurisdiction, under another organization created by Kasparov called the Professional Chess Association (PCA). At this point, a great fracture occurred in the lineage of the FIDE World Championship. In an interview in 2007, Kasparov called the break with FIDE the worst mistake of his career, as it hurt the game in the long run.
Kasparov and Short were ejected from FIDE, and played their well-sponsored match in London in 1993. Kasparov won convincingly by a score of 12½–7½. The match considerably raised the profile of chess in the UK, with an unprecedented level of coverage on Channel 4. Meanwhile, FIDE organized a World Championship match between Jan Timman (the defeated Candidates finalist) and former World Champion Karpov (a defeated Candidates semi-finalist), which Karpov won.
FIDE removed Kasparov and Short from the FIDE rating lists. Until this happened, there was a parallel rating list presented by PCA which featured all world top players, regardless of their relation to FIDE. There were now two World Champions: PCA champion Kasparov, and FIDE champion Karpov. The title remained split for 13 years.
Kasparov defended his title in a 1995 match against Viswanathan Anand at the World Trade Center in New York City. Kasparov won the match by four wins to one, with thirteen draws. It was the last World Championship to be held under the auspices of the PCA, which collapsed when Intel, one of its major backers, withdrew its sponsorship.
Kasparov tried to organize another World Championship match, under another organization, the World Chess Association (WCA) with Linares organizer Luis Rentero. Alexei Shirov and Vladimir Kramnik played a candidates match to decide the challenger, which Shirov won in a surprising upset. But when Rentero admitted that the funds required and promised had never materialized, the WCA collapsed. This left Kasparov stranded, and yet another organization stepped in: BrainGames.com, headed by Raymond Keene. No match against Shirov was arranged, and talks with Anand collapsed, so a match was instead arranged against Kramnik.
During this period, Kasparov was approached by Oakham School in the United Kingdom, at the time the only school in the country with a full-time chess coach, and developed an interest in the use of chess in education. In 1997, Kasparov supported a scholarship programme at the school. Kasparov also won the Marca Leyenda trophy that year.
The Kasparov-Kramnik match took place in London during the latter half of 2000. Kramnik had been a student of Kasparov's at the famous Botvinnik/Kasparov chess school in Russia, and had served on Kasparov's team for the 1995 match against Viswanathan Anand.
The better-prepared Kramnik won game 2 against Kasparov's Grünfeld Defence and achieved winning positions in Games 4 and 6, although Kasparov held the draw in both games. Kasparov made a critical error in Game 10 with the Nimzo-Indian Defence, which Kramnik exploited to win in 25 moves. As White, Kasparov could not crack the passive but solid Berlin Defence in the Ruy Lopez, and Kramnik successfully drew all his games as Black. Kramnik won the match 8½–6½. Kasparov became the first player to lose a world championship match without winning a game since Emanuel Lasker's loss to José Raúl Capablanca in 1921.
After losing the title, Kasparov won a series of major tournaments, and remained the top rated player in the world, ahead of both Kramnik and the FIDE World Champions. In 2001 he refused an invitation to the 2002 Dortmund Candidates Tournament for the Classical title, claiming his results had earned him a rematch with Kramnik.
Kasparov and Karpov played a four-game match with rapid time controls over two days in December 2002 in New York City. Karpov surprised the experts and emerged victorious, winning two games and drawing one.
Due to Kasparov's continuing strong results, and status as world No. 1 in much of the public eye, he was included in the so-called "Prague Agreement", masterminded by Yasser Seirawan and intended to reunite the two World Championships. Kasparov was to play a match against the FIDE World Champion Ruslan Ponomariov in September 2003. But this match was called off after Ponomariov refused to sign his contract for it without reservation. In its place, there were plans for a match against Rustam Kasimdzhanov, winner of the FIDE World Chess Championship 2004, to be held in January 2005 in the United Arab Emirates. These also fell through due to lack of funding. Plans to hold the match in Turkey instead came too late. Kasparov announced in January 2005 that he was tired of waiting for FIDE to organize a match and so had decided to stop all efforts to regain the World Championship title.
After winning the prestigious Linares tournament for the ninth time, Kasparov announced on 10 March 2005 that he would retire from serious competitive chess. He cited as the reason a lack of personal goals in the chess world (he commented when winning the Russian championship in 2004 that it had been the last major title he had never won outright) and expressed frustration at the failure to reunify the world championship.
Kasparov said he may play in some rapid chess events for fun, but intends to spend more time on his books, including both the "My Great Predecessors" series (see below) and a work on the links between decision-making in chess and in other areas of life, and will continue to involve himself in Russian politics, which he views as "headed down the wrong path".
Kasparov has been married three times: to Masha, with whom he had a daughter before divorcing; to Yulia, with whom he had a son before their 2005 divorce; and to Daria (Dasha), with whom he has two children, a daughter born in 2006 and a son born in 2015. They live in New York City. Kasparov's wife manages his business activities worldwide as the founder of Kasparov International Management Inc.
On 22 August 2006, in his first public chess games since his retirement, Kasparov played in the Lichthof Chess Champions Tournament, a blitz event played at the time control of 5 minutes per side and 3-second increments per move. Kasparov tied for first with Anatoly Karpov, scoring 4½/6.
Kasparov and Anatoly Karpov played a 12-game match from 21–24 September 2009, in Valencia, Spain. It consisted of four rapid (or semi-rapid) games, which Kasparov won 3–1, and eight blitz games, which Kasparov won 6–2, giving him the match by a total score of 9–3. The event took place exactly 25 years after the two players' legendary encounter at the World Chess Championship 1984.
Kasparov actively coached Magnus Carlsen for approximately one year beginning in February 2009. The collaboration remained secret until September 2009. Under Kasparov's tutelage, Carlsen in October 2009 became the youngest ever to achieve a FIDE rating higher than 2800, and rose from world number four to world number one. While the pair initially planned to work together throughout 2010, in March of that year it was announced that Carlsen had split from Kasparov and would no longer be using him as a trainer. According to an interview with the German magazine "Der Spiegel", Carlsen indicated that he would remain in contact and that he would continue to attend training sessions with Kasparov, but in fact no further training sessions were held and the cooperation gradually fizzled out over the course of the spring.
In May 2010 he played 30 games simultaneously, winning each one, against players at Tel Aviv University in Israel. In the same month it was revealed that Kasparov had aided Viswanathan Anand in preparation for the World Chess Championship 2010 against challenger Veselin Topalov. Anand won the match 6½–5½ to retain the title.
In January 2011, Kasparov began training the U.S. grandmaster Hikaru Nakamura. The first of several training sessions was held in New York just prior to Nakamura's participation in the Tata Steel Chess tournament in Wijk aan Zee, the Netherlands. In December 2011, it was announced that the cooperation had come to an end.
Kasparov played two blitz exhibition matches in the autumn of 2011. The first was in September against French grandmaster Maxime Vachier-Lagrave, in Clichy (France), which Kasparov won 1½–½. The second was a longer match consisting of eight blitz games played on 9 October, against English grandmaster Nigel Short. Kasparov won again by a score of 4½–3½.
A little after that, in October 2011, Kasparov played and defeated fourteen opponents in a simultaneous exhibition that took place in Bratislava.
On 25 and 26 April 2015, Kasparov played a mini-match against Nigel Short. The match consisted of two rapid games and eight blitz games. Kasparov won the match decisively with a score of 8½–1½, winning all five games on the second day.
On Wednesday 19 August 2015, he played and won all 19 games of a simultaneous exhibition in Pula, Croatia.
On Thursday 28 April and Friday 29 April 2016 at the Chess Club and Scholastic Center of Saint Louis, Kasparov played a 6-round exhibition blitz round-robin tournament with Fabiano Caruana, Wesley So, and Hikaru Nakamura in an event called the Ultimate Blitz Challenge. He finished the tournament third with 9½/18, behind Hikaru Nakamura (11/18) and Wesley So (10/18). At the post-tournament interview, he considered the possibility of playing future top-level blitz exhibition matches.
On 2 June 2016, Kasparov played against fifteen chess players in a simultaneous exhibition in Mönchengladbach, Germany. He won all games.
On 7 October 2013, Kasparov announced his candidacy for World Chess Federation president during a reception in Tallinn, Estonia, where the 84th FIDE Congress took place. Kasparov's candidacy was supported by his former student, reigning World Chess Champion and then top-ranked FIDE player Magnus Carlsen. At the FIDE General Assembly in August 2014, Kasparov lost the presidential election to incumbent FIDE president Kirsan Ilyumzhinov, by a vote of 110–61.
A few days before the election took place, the "New York Times Magazine" had published a lengthy report on the viciously fought campaign. Included was information about a leaked contract between Kasparov and former FIDE Secretary General Ignatius Leong from Singapore, in which the Kasparov campaign reportedly "offered to pay Leong $500,000 and to pay $250,000 a year for four years to the Asean Chess Academy, an organization Leong helped create to teach the game, specifying that Leong would be responsible for delivering 11 votes from his region [...]". In September 2015, the FIDE Ethics Commission found Kasparov and Leong guilty of violating its Code of Ethics and later suspended them for two years from all FIDE functions and meetings.
In 2017, Kasparov came out of retirement to participate in the inaugural St. Louis Rapid and Blitz tournament from 14–19 August, scoring 3.5/9 in the rapid and 9/18 in the blitz, finishing 8th out of 10 participants, which included Nakamura, Caruana, former world champion Anand, and the eventual winner, Aronian. Any tournament money that he earned would go towards charities to promote chess in Africa.
Kasparov's grandfather was a staunch communist but Kasparov gradually began to have doubts about the Soviet Union's political system at age 13 when he traveled abroad for the first time to Paris for a chess tournament. In 1981, at age 18 he read Solzhenitsyn’s "The Gulag Archipelago", a copy of which he bought while abroad.
Kasparov joined the Communist Party of the Soviet Union (CPSU) in 1984, and in 1987 was elected to the Central Committee of Komsomol. However, in 1990, he left the party.
After the onset of the pogroms against Armenians in Baku in January 1990, which left hundreds dead or injured and caused thousands of ethnic Armenians to flee Azerbaijan, Kasparov and his family fled from Baku to Moscow on a chartered plane. Kasparov later sold the world champion's gold crown he had won to support Armenian refugees from Baku.
In May 1990, Kasparov took part in the creation of the Democratic Party of Russia, which at first was a liberal anti-communist party, later shifting to centrism. Kasparov left the party on 28 April 1991, after its conference.
In 1991, Kasparov received the Keeper of the Flame award from the Center for Security Policy for "propagation of democracy and the respect for individual rights throughout the world". In his acceptance speech Kasparov lauded the defeat of communism while also urging the United States to give no financial assistance to central Soviet leaders.
In June 1993, Kasparov was involved with the creation of the "Choice of Russia" bloc of parties and in 1996 took part in the election campaign of Boris Yeltsin. In 2001 he voiced his support for the Russian television channel NTV.
After his retirement from chess in 2005, Kasparov turned to politics and created the United Civil Front, a social movement whose main goal is to "work to preserve electoral democracy in Russia". He has vowed to "restore democracy" to Russia by restoring the rule of law.
Kasparov was instrumental in setting up The Other Russia, a coalition which opposes Putin's government. The Other Russia has been boycotted by the leaders of Russia's mainstream opposition parties, Yabloko and Union of Right Forces due to its inclusion of nationalist and radical groups. Kasparov has criticized these groups as being secretly under the auspices of the Kremlin.
On 10 April 2005, Kasparov was in Moscow at a promotional event when he was struck over the head with a chessboard he had just signed. The assailant was reported to have said "I admired you as a chess player, but you gave that up for politics" immediately before the attack. Kasparov has been the subject of a number of other episodes since, including police brutality and alleged harassment from the Russian secret service.
Kasparov helped organize the Saint Petersburg Dissenters' March on 3 March 2007 and The March of the Dissenters on 24 March 2007, both involving several thousand people rallying against Putin and Saint Petersburg Governor Valentina Matviyenko's policies.
On 14 April 2007, Kasparov led a pro-democracy demonstration in Moscow. Soon after the demonstration's start, however, over 9,000 police descended on the group and seized almost everyone. Kasparov, who was briefly arrested by the Moscow police, was warned by the prosecution office on the eve of the march that anyone participating risked being detained. He was held for some 10 hours and then fined and released. He was later summoned by the FSB for violations of Russian anti-extremism laws.
Speaking about Kasparov, former KGB defector Oleg Kalugin in 2007 remarked, "I do not talk in details – people who knew them are all dead now because they were vocal, they were open. I am quiet. There is only one man who is vocal and he may be in trouble: [former] world chess champion [Garry] Kasparov. He has been very outspoken in his attacks on Putin and I believe that he is probably next on the list."
In April 2007, it was asserted that Kasparov was a board member of the National Security Advisory Council of Center for Security Policy, a "non-profit, non-partisan national security [think tank in Washington, DC], which specializes in identifying policies, actions, and resource needs that are vital to American security". Kasparov confirmed this and added that he had his name removed shortly after he became aware of it. He noted that he did not know about his "membership", suggesting that he was included on the board by accident, due to having received the 1991 Keeper of the Flame award from this organization. However, Kasparov maintained his association with the leadership by giving speeches at think tanks such as the Hoover Institution.
On 30 September 2007, Kasparov entered the Russian Presidential race, receiving 379 of 498 votes at a congress held in Moscow by The Other Russia. In October 2007, Kasparov announced his intention of standing for the Russian presidency as the candidate of the "Other Russia" coalition and vowed to fight for a "democratic and just Russia". Later that month he traveled to the United States, where he appeared on several popular television programs, which were hosted by Stephen Colbert, Wolf Blitzer, Bill Maher, and Chris Matthews.
On 24 November 2007, Kasparov and other protesters were detained by police at an Other Russia rally in Moscow, where 3,000 demonstrators had gathered to allege the rigging of upcoming elections. Arrests were made after about 100 protesters attempted to march through police lines to the electoral commission, which had barred Other Russia candidates from the parliamentary elections. The Russian authorities stated that a rally had been approved but not a march, and several demonstrators were detained. Kasparov was charged with resisting arrest and organizing an unauthorized protest and given a jail sentence of five days. He appealed, arguing that he had been following orders given by the police, but the appeal was denied. He was released from jail on 29 November. Putin criticized Kasparov for speaking English rather than Russian at the rally.
On 12 December 2007, Kasparov announced that he had to withdraw his presidential candidacy due to inability to rent a meeting hall where at least 500 of his supporters could assemble. With the deadline expiring on that date, he explained it was impossible for him to run. Russian election laws required sufficient meeting hall space for assembling supporters. Kasparov's spokeswoman accused the government of using pressure to deter anyone from renting a hall for the gathering and said that the electoral commission had rejected a proposal that would have allowed for smaller gathering sizes rather than one large gathering at a meeting hall.
Kasparov was among the first 34 signatories and a key organizer of the online anti-Putin campaign "Putin must go", started on 10 March 2010. The campaign was begun by a coalition of Putin's opponents who regard his rule as lacking any rule of law. The text includes a call for Russian law enforcement to ignore Putin's orders. By June 2011, the petition had gathered 90,000 signatures. While the petition's author remained anonymous, there was wide speculation that it was Kasparov.
On 17 August 2012, Kasparov was arrested and beaten outside of the Moscow court while attending the verdict reading in the case involving the all-female punk band Pussy Riot. On 24 August, he was cleared of charges that he took part in an unauthorized protest against the conviction of three members of Pussy Riot. Judge Yekaterina Veklich said there were "no grounds to believe the testimony of the police". He could still face criminal charges over a police officer's claims that the opposition leader bit his finger while he was being detained. He later thanked all the bloggers and reporters who provided video evidence that contradicted the testimony of the police.
Kasparov wrote in February 2013 that "fascism has come to Russia. ... Project Putin, just like the old Project Hitler, is but the fruit of a conspiracy by the ruling elite. Fascist rule was never the result of the free will of the people. It was always the fruit of a conspiracy by the ruling elites!"
In April 2013, Kasparov joined in an HRF condemnation of Kanye West for having performed for the leader of Kazakhstan in exchange for a $3 million paycheck, saying that West "has entertained a brutal killer and his entourage" and that his fee "came from the loot stolen from the Kazakhstan treasury".
Kasparov denied rumors in April 2013 that he planned to leave Russia for good. "I found these rumors to be deeply saddening and, moreover, surprising," he wrote. "I was unable to respond immediately because I was in such a state of shock that such an incredibly inaccurate statement, the likes of which is constantly distributed by the Kremlin's propagandists, came this time from Ilya Yashin, a fellow member of the Opposition Coordination Council (KSO) and my former colleague from the Solidarity movement."
In an April 2013 op-ed piece, Kasparov accused prominent Russian journalist Vladimir Posner of failing to stand up to Putin and to earlier Russian and Soviet leaders.
Kasparov was presented with the Morris B. Abram Human Rights Award, UN Watch's annual human-rights prize, in 2013. The organization, a lobby group with strong ties to Israel, praised him as "not only one of the world's smartest men" but "also among its bravest".
At the 2013 Women in the World conference, Kasparov told "The Daily Beast"s Michael Moynihan that democracy no longer existed in what he called Russia's "dictatorship".
Kasparov said at a press conference in June 2013 that if he returned to Russia he doubted he would be allowed to leave again, given Putin's ongoing crackdown against dissenters. "So for the time being," he said, "I refrain from returning to Russia." He explained shortly thereafter in an article for "The Daily Beast" that this had not been intended as "a declaration of leaving my home country, permanently or otherwise", but merely an expression of "the dark reality of the situation in Russia today, where nearly half the members of the opposition's Coordinating Council are under criminal investigation on concocted charges". He noted that the Moscow prosecutor's office was "opening an investigation that would limit my ability to travel", making it impossible for him to fulfill "professional speaking engagements" and hindering his "work for the nonprofit Kasparov Chess Foundation, which has centers in New York City, Brussels, and Johannesburg to promote chess in education".
Kasparov further wrote in his June 2013 "Daily Beast" article that the mass protests in Moscow 18 months earlier against fraudulent Russian elections had been "a proud moment for me". He recalled that after joining the opposition movement in March 2005, he had been criticized for seeking to unite "every anti-Putin element in the country to march together regardless of ideology". Therefore, the sight of "hundreds of flags representing every group from liberals to nationalists all marching together for 'Russia Without Putin' was the fulfillment of a dream." Yet most Russians, he lamented, had continued to "slumber" even as Putin had "taken off the flimsy mask of democracy to reveal himself in full as the would-be KGB dictator he has always been".
Kasparov responded with several sardonic Twitter postings to a September 2013 "The New York Times" op-ed by Putin. "I hope Putin has taken adequate protections," he tweeted. "Now that he is a Russian journalist his life may be in grave danger!" Also: "Now we can expect NY Times op-eds by Mugabe on fair elections, Castro on free speech, & Kim Jong-un on prison reform. The Axis of Hypocrisy."
In a 12 May 2013 op-ed for "The Wall Street Journal", Kasparov questioned reports that the Russian security agency, the FSB, had fully cooperated with the FBI in the matter of the Boston bombers. He noted that the elder bomber, Tamerlan Tsarnaev, had reportedly met in Russia with two known jihadists who "were killed in Dagestan by the Russian military just days before Tamerlan left Russia for the U.S." Kasparov argued, "If no intelligence was sent from Moscow to Washington" about this meeting, "all this talk of FSB cooperation cannot be taken seriously." He further observed, "This would not be the first time Russian security forces seemed strangely impotent in the face of an impending terror attack," pointing out that in both the 2002 Moscow theater siege and the 2004 Beslan school attack, "there were FSB informants in both terror groups – yet the attacks went ahead unimpeded." Given this history, he wrote, "it is impossible to overlook that the Boston bombing took place just days after the U.S. Magnitsky List was published, creating the first serious external threat to the Putin power structure by penalizing Russian officials complicit in human-rights crimes." In sum, Putin's "dubious record on counterterrorism and its continued support of terror sponsors Iran and Syria mean only one thing: common ground zero".
Kasparov wrote in July 2013 about the trial in Kirov of fellow opposition leader Alexei Navalny, who had been convicted "on concocted embezzlement charges", only to see the prosecutor, surprisingly, ask for his release the next day pending appeal. "The judicial process and the democratic process in Russia," wrote Kasparov, "are both elaborate mockeries created to distract the citizenry at home and to help Western leaders avoid confronting the awkward fact that Russia has returned to a police state". Still, Kasparov felt that whatever had caused the Kirov prosecutor's about-face, "my optimism tells me it was a positive sign. After more than 13 years of predictable repression under Putin, anything different is good."
Kasparov maintains a summer home in the Croatian city of Makarska. In early February 2014, Kasparov applied for Croatian citizenship by naturalisation, saying that he was finding it increasingly difficult to live in Russia. According to an article in "The Guardian", Kasparov is "widely perceived" as having been a vocal supporter of Croatian independence during the early 1990s. On 28 February 2014, his application for naturalisation was approved, and he is now a Croatian passport holder.
Kasparov wrote in "Time" on 18 September 2013 that he considered the "chess metaphors thrown around during the world's response to the civil war in Syria" to be "trite" and rejected what he called "all the nonsense about 'Putin is playing chess and Obama is playing checkers,' or tic-tac-toe or whatever." Putin, argued Kasparov, "did not have to outplay or outthink anyone. He and Bashar Assad won by forfeit when President Obama, Prime Minister Cameron and the rest of the so-called leaders of the free world walked away from the table." There is, he lamented, "a new game at the negotiating table where Putin and Assad set the rules and will run the show under the protection of the U.N." Kasparov said in September 2013 that Russia was now a dictatorship. In the same month he told an interviewer that "Obama going to Russia now is dead wrong, morally and politically," because Putin's regime "is behind Assad".
Kasparov spoke out several times about Putin's antigay laws and the proposed Sochi Olympics boycott. He explained in August 2013 that he had opposed Russia's bid from the outset, since hosting the Olympics would "allow Vladimir Putin's cronies to embezzle hundreds of millions of dollars" and "lend prestige to Putin's authoritarian regime". Kasparov added that Putin's anti-gay law was "only the most recent encroachment on the freedom of speech and association of Russia's citizens", which the international community had largely ignored. Instead of supporting a games boycott, which would "unfairly punish athletes", Kasparov called for athletes and others to "transform Putin's self-congratulatory pet project into a spotlight that exposes his authoritarian rule for the entire world to see". In September, Kasparov expanded on his remarks, saying that "forcing athletes to play a political role against their will is not fair" and that politicians should not "hide behind athletes". Instead of boycotting Sochi, he suggested, politicians should refuse to attend the games and the public should "put pressure on the sponsors and the media". Coca-Cola, for example, could put "a rainbow flag on each Coca-Cola can" and NBC could "do interviews with Russian gay activists or with Russian political activists". Kasparov also emphasized that although he was "still a Russian citizen", he had "good reason to be concerned about my ability to leave Russia if I returned to Moscow".
Kasparov has spoken out against the 2014 Russian annexation of Crimea and has stated that control of Crimea should be returned to Ukraine after the overthrow of Vladimir Putin without additional conditions.
Kasparov's website was blocked by the Russian federal media regulator, Roskomnadzor, at the behest of the public prosecutor, allegedly because of Kasparov's opinions on the Crimean crisis. The block came alongside those of several other notable Russian sites accused of inciting public unrest. Reportedly, several of the blocked sites received an affidavit detailing their violations; Kasparov, however, stated that his site had received no such notice after it was blocked. In 2015 the entire entry on Kasparov was removed from a Russian-language encyclopedia of the greatest Soviet players after an intervention from "senior leadership".
In October 2015, Kasparov published a book titled "Winter Is Coming: Why Vladimir Putin and the Enemies of the Free World Must Be Stopped". In the book, Kasparov likens Putin to Hitler, and explains the need for the west to oppose Putin sooner, rather than appeasing him and postponing the eventual confrontation. According to his publisher, "Kasparov wants this book out fast, in a way that has potential to influence the discussion during the primary season."
In the 2016 United States presidential election, Kasparov described Republican front-runner Donald Trump as "a celebrity showman with racist leanings and authoritarian tendencies" and criticised him for calling for closer ties with Vladimir Putin; when Trump's running mate, Mike Pence, called Putin a strong leader, Kasparov responded that Putin is a strong leader "in the same way arsenic is a strong drink". He also criticised the economic policies of Democratic primary candidate Bernie Sanders, but showed respect for Sanders as "a charismatic speaker and a passionate believer in his cause".
In 2017, he condemned the violence unleashed by the Spanish police against the independence referendum in Catalonia on 1 October. He criticized the Spanish PM Mariano Rajoy and accused him of "betraying" the European promise of peace. Also, after the Catalan regional election held the same year on 21 December, he called on the European Union to intervene in the conflict to find a negotiated solution. He wrote on Twitter: "Despite unprecedented pressure from Madrid, Catalonian separatists won a majority. Europe must speak and help find a peaceful path toward resolution and avoid more violence".
Kasparov has commemorated the victims of the Armenian Genocide and has called on Turkey to recognize it and to investigate it fully.
He welcomed the 2018 Velvet Revolution in Armenia just a few days after it took place.
Kasparov was named Chairman of the Human Rights Foundation in 2011, succeeding the recently deceased author, activist, and former Czech president Václav Havel. On 31 January 2012, Kasparov hosted a meeting of opposition leaders planning a mass march on 4 February 2012, the third major opposition rally held since the disputed State Duma elections of December 2011. Among other opposition leaders attending were Alexey Navalny and Yevgenia Chirikova.
Kasparov's attacking style of play has been compared by many to Alekhine's. Kasparov has described his style as being influenced chiefly by Alekhine, Tal and Fischer. Kramnik has opined that "[Kasparov's] capacity for study is second to none", and said "There is nothing in chess he has been unable to deal with." Magnus Carlsen, whom Kasparov coached from 2009 to 2010, said of Kasparov, "I've never seen someone with such a feel for dynamics in complex positions." Kasparov was known for his extensive opening preparation and aggressive play in the opening.
Kasparov played in a total of eight Chess Olympiads. He represented the Soviet Union four times and Russia four times, following the breakup of the Soviet Union in 1991. In his 1980 Olympiad debut, he became, at age 17, the youngest player to represent the Soviet Union or Russia at that level, a record which was broken by Vladimir Kramnik in 1992. In 82 games, he has scored (+50−3=29), for 78.7% and won a total of 19 medals, including team gold medals all eight times he competed. For the 1994 Moscow Olympiad, he had a significant organizational role, in helping to put together the event on short notice, after Thessaloniki canceled its offer to host, a few weeks before the scheduled dates. Kasparov's detailed Olympiad record follows:
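The 78.7% figure quoted above follows from standard chess scoring, with one point per win and half a point per draw. A quick arithmetic check (my own illustration, not from the source):

```python
# Olympiad score quoted above: 50 wins, 3 losses, 29 draws in 82 games.
wins, losses, draws = 50, 3, 29

games = wins + losses + draws          # total games played
points = wins + 0.5 * draws            # 1 point per win, 0.5 per draw
percentage = 100 * points / games      # scoring percentage

assert games == 82
assert round(percentage, 1) == 78.7    # matches the figure in the text
```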
Kasparov made his international teams debut for the USSR at age 16 in the 1980 European Team Championship and played for Russia in the 1992 edition of that championship. He won a total of five medals. His detailed Euroteams record follows.
Kasparov also represented the USSR once in Youth Olympiad competition, but the detailed data at Olimpbase is incomplete; the Chessmetrics Garry Kasparov player file has his individual score from that event.
Kasparov holds the record for the longest time as the No. 1 rated player in the world, from 1986 to 2005 (Vladimir Kramnik shared the No. 1 ranking with him once, in the January 1996 FIDE rating list). He was also briefly ejected from the list following his split from FIDE in 1993, but during that time he headed the rating list of the rival PCA. At the time of his retirement, he was still ranked No. 1 in the world, with a rating of 2812. His rating has been listed as inactive since the January 2006 rating list.
In January 1990, Kasparov achieved the (then) highest FIDE rating ever, passing 2800 and breaking Bobby Fischer's old record of 2785. By the July 1999 and January 2000 FIDE rating lists, Kasparov had reached a 2851 Elo rating, at that time the highest rating ever achieved. He held that record for the highest rating ever achieved until Magnus Carlsen attained a new record high rating of 2861 in January 2013.
Kasparov holds the record for most consecutive professional tournament victories, placing first or equal first in 15 individual tournaments from 1981 to 1990. The streak was broken by Vasily Ivanchuk at Linares 1991, where Kasparov placed 2nd, half a point behind him after losing their individual game. The details of this record winning streak follow:
Kasparov won the Chess Oscar a record eleven times.
In 1983, Acorn Computers acted as one of the sponsors for Kasparov's Candidates semi-final match against Viktor Korchnoi. Kasparov was awarded a BBC Micro, which he took back with him to Baku, making it perhaps the first western-made microcomputer to reach that city. In 1985, computer chess magazine editor Frederic Friedel invited Kasparov to his house, and the two discussed how a chess database program would be useful for preparation. Two years later, Friedel founded Chessbase and gave Kasparov a copy of the program, which he started using in his preparation.
In 1985, Kasparov played against thirty-two different chess computers in Hamburg, winning all games, but with some difficulty.
On 22 October 1989, Kasparov defeated the chess computer Deep Thought in both games of a two-game match.
In December 1992, Kasparov visited Frederic Friedel in his hotel room in Cologne, and played 37 blitz games against Fritz 2 winning 24, drawing 4 and losing 9.
Kasparov cooperated in producing video material for the computer game Kasparov's Gambit, released by Electronic Arts in November 1993. In April 1994, Intel acted as a sponsor for the first Professional Chess Association Grand Prix event in Moscow, played at a time control of 25 minutes per game. In May, Chessbase's Fritz 3, running on an Intel Pentium PC, defeated Kasparov in their first game at the Intel Express blitz tournament in Munich, but Kasparov managed to tie for first place and then won the playoff with 3 wins and 2 draws. The next day, Kasparov lost to Fritz 3 again in a game on ZDF TV. In August, Kasparov was knocked out of the London Intel Grand Prix by Richard Lang's ChessGenius 2 program in the first round.
In 1995, during Kasparov's world title match with Viswanathan Anand, he unveiled an opening novelty that had been checked with a chess engine, an approach that would become increasingly common in subsequent years.
Kasparov played in a pair of six-game chess matches with an IBM supercomputer called Deep Blue. The first match was played in Philadelphia in 1996 and won by Kasparov. The second was played in New York City in 1997 and won by Deep Blue. The 1997 match was the first defeat of a reigning world chess champion by a computer under tournament conditions.
In May 1997, an updated version of Deep Blue defeated Kasparov 3½–2½ in a highly publicized six-game match. The match was even after five games but Kasparov lost quickly in Game 6. This was the first time a computer had ever defeated a world champion in a match. A documentary film was later made about this famous match.
Kasparov said that he was "not well prepared" to face Deep Blue in 1997. He said that based on his "objective strengths" his play was stronger than that of Deep Blue. Kasparov claimed that several factors weighed against him in this match. In particular, he was denied access to Deep Blue's recent games, in contrast to the computer's team, which could study hundreds of Kasparov's games.
After the loss, Kasparov said that he sometimes saw deep intelligence and creativity in the machine's moves, suggesting that during the second game, human chess players, in contravention of the rules, intervened. IBM denied that it cheated, saying the only human intervention occurred between games. The rules provided for the developers to modify the program between games, an opportunity they said they used to shore up weaknesses in the computer's play revealed during the course of the match. Kasparov requested printouts of the machine's log files but IBM refused, although the company later published the logs on the Internet. Much later, it was suggested that the behavior Kasparov noted may have resulted from a glitch in the computer program. Although Kasparov wanted another rematch, IBM declined and ended their Deep Blue program.
Kasparov's loss to Deep Blue inspired the creation of the game Arimaa.
In January 2003, he engaged in a six-game classical time control match with a $1 million prize fund which was billed as the FIDE "Man vs. Machine" World Championship, against Deep Junior. The engine evaluated three million positions per second. After one win each and three draws, it was all up to the final game. After reaching a decent position Kasparov offered a draw, which was soon accepted by the Deep Junior team. Asked why he offered the draw, Kasparov said he feared making a blunder. Originally planned as an annual event, the match was not repeated.
Deep Junior was the first machine to beat Kasparov with black and at a standard time control.
In June 2003, Mindscape released the computer game Kasparov Chessmate with Kasparov himself listed as a co-designer.
In November 2003, he engaged in a four-game match against the computer program X3D Fritz, using a virtual board, 3D glasses and a speech recognition system. After two draws and one win apiece, the X3D Man–Machine match ended in a draw. Kasparov received $175,000 for the result and took home the golden trophy. Kasparov continued to criticize the blunder in the second game that cost him a crucial point. He felt that he had outplayed the machine overall and played well. "I only made one mistake but unfortunately that one mistake lost the game."
Kasparov has written books on chess. He published a controversial autobiography when still in his early 20s, originally titled "Child of Change", later retitled "Unlimited Challenge". This book was subsequently updated several times after he became World Champion. Its content is mainly literary, with a small chess component of key unannotated games. He published an annotated games collection in 1985: "Fighting Chess: My Games and Career" and this book has also been updated several times in further editions. He also wrote a book annotating the games from his World Chess Championship 1985 victory, "World Chess Championship Match: Moscow, 1985".
He has annotated his own games extensively for the Yugoslav "Chess Informant" series and for other chess publications. In 1982, he co-authored "Batsford Chess Openings" with British grandmaster Raymond Keene, and this book was an enormous seller. It was updated into a second edition in 1989. He also co-authored two opening books with his trainer Alexander Nikitin in the 1980s for the British publisher Batsford, on the Classical Variation of the Caro-Kann Defence and on the Scheveningen Variation of the Sicilian Defence. Kasparov has also contributed extensively to the five-volume openings series "Encyclopedia of Chess Openings".
In 2000, Kasparov co-authored "Kasparov Against the World: The Story of the Greatest Online Challenge" with grandmaster Daniel King. The 202-page book analyzes the 1999 Kasparov versus the World game, and holds the record for the longest analysis devoted to a single chess game.
In 2003, the first volume of his five-volume work "Garry Kasparov on My Great Predecessors" was published. This volume, which deals with the world chess champions Wilhelm Steinitz, Emanuel Lasker, José Raúl Capablanca, Alexander Alekhine, and some of their strong contemporaries, has received lavish praise from some reviewers (including Nigel Short), while attracting criticism from others for historical inaccuracies and analysis of games directly copied from unattributed sources. Through suggestions on the book's website, most of these shortcomings were corrected in following editions and translations. Despite this, the first volume won the British Chess Federation's Book of the Year award in 2003. Volume two, covering Max Euwe, Mikhail Botvinnik, Vasily Smyslov and Mikhail Tal appeared later in 2003. Volume three, covering Tigran Petrosian and Boris Spassky appeared in early 2004. In December 2004, Kasparov released volume four, which covers Samuel Reshevsky, Miguel Najdorf, and Bent Larsen (none of these three were World Champions), but focuses primarily on Bobby Fischer. The fifth volume, devoted to the chess careers of World Champion Anatoly Karpov and challenger Viktor Korchnoi, was published in March 2006.
His book "Revolution in the 70s" (published in March 2007) covers "the openings revolution of the 1970s–1980s" and is the first book in a new series called "Modern Chess Series", which intends to cover his matches with Karpov and selected games. The book "Revolution in the 70s" concerns the revolution in opening theory that was witnessed in that decade. Such systems as the controversial (at the time) "Hedgehog" opening plan of passively developing the pieces no further than the first three ranks are examined in great detail. Kasparov also analyzes some of the most notable games played in that period. In a section at the end of the book, top opening theoreticians provide their own "take" on the progress made in opening theory in the 1980s.
Kasparov is also publishing a three-volume series of his own games.
In October 2015, Kasparov published a book titled "Winter Is Coming: Why Vladimir Putin and the Enemies of the Free World Must Be Stopped". The title is a reference to the HBO television series "Game of Thrones". In the book, Kasparov writes about the need for an organization solely composed of democratic countries to replace the United Nations. In an interview, he called the United Nations a "catwalk for dictators".
Kasparov believes that the conventional history of civilization is radically incorrect. Specifically, he believes that the history of ancient civilizations is based on misdatings of events and achievements that actually occurred in the medieval period. He has cited several aspects of ancient history that he says are likely to be anachronisms.
Kasparov has written in support of New Chronology (Fomenko), although with some reservations. In 2001, Kasparov expressed a desire to devote his time to promoting the New Chronology after his chess career. "New Chronology is a great area for investing my intellect ... My analytical abilities are well placed to figure out what was right and what was wrong." "When I stop playing chess, it may well be that I concentrate on promoting these ideas... I believe they can improve our lives."
Later, Kasparov renounced his support of Fomenko's theories but reaffirmed his belief that mainstream historical knowledge is highly inconsistent.
In 2007, he wrote "How Life Imitates Chess", an examination of the parallels between decision-making in chess and in the business world.
In 2008, Kasparov published a sympathetic obituary for Bobby Fischer, writing: "I am often asked if I ever met or played Bobby Fischer. The answer is no, I never had that opportunity. But even though he saw me as a member of the evil chess establishment that he felt had robbed and cheated him, I am sorry I never had a chance to thank him personally for what he did for our sport."
He is the chief advisor for the book publisher Everyman Chess.
Kasparov works closely with Mig Greengard and his comments can often be found on Greengard's blog (apparently no longer active).
Kasparov collaborated with Max Levchin and Peter Thiel on "The Blueprint", a book calling for a revival of world innovation, planned to release in March 2013 from W. W. Norton & Company. The book was never released, as the authors disagreed on its contents.
In an editorial comment on Google's AlphaZero chess-playing system, Kasparov argued that chess has become "the Drosophila of reasoning". "I was pleased to see that AlphaZero had a dynamic, open style like my own," he wrote in late 2018. | https://en.wikipedia.org/wiki?curid=12810
Flag of Greenland
The flag of Greenland was designed by Greenland native Thue Christiansen. It features two equal horizontal bands of white (top) and red with a large disk slightly to the hoist side of centre. The top half of the disk is red, the bottom half is white. The top half of the flag bears a slight resemblance to the Flag of Japan as a result. The entire flag measures 18 by 12 parts; each stripe measures 6 parts; the disk is 8 parts in diameter, horizontally offset by 7 parts from the hoist to the centre of the circle, and vertically centered.
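The stated proportions can be captured in a small sketch of the flag's geometry, expressed in its own 18 × 12 "parts" coordinate system (an illustrative model; the function and constant names are my own, not an official construction sheet):

```python
# Flag of Greenland geometry, in "parts" (x measured from the hoist, y downward).
WIDTH, HEIGHT = 18, 12
STRIPE = 6                 # each horizontal band is 6 parts tall
DISK_D = 8                 # disk diameter in parts
DISK_CX, DISK_CY = 7, 6    # disk centre: 7 parts from the hoist, vertically centred

def colour_at(x, y):
    """Return 'red' or 'white' for a point (x, y) on the flag."""
    inside_disk = (x - DISK_CX) ** 2 + (y - DISK_CY) ** 2 <= (DISK_D / 2) ** 2
    top_half = y < STRIPE
    if inside_disk:
        # Disk colours invert the stripes: red above the midline, white below.
        return "red" if top_half else "white"
    return "white" if top_half else "red"

# The disk straddles the stripe boundary with inverted colours:
assert colour_at(7, 3) == "red"     # upper half of the disk
assert colour_at(7, 9) == "white"   # lower half of the disk
assert colour_at(15, 3) == "white"  # plain top (white) stripe
assert colour_at(15, 9) == "red"    # plain bottom (red) stripe
```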
Its local name in the Greenlandic language is "Erfalasorput", which means "our flag". The term "Aappalaartoq" (meaning "the red") is also used for both the Greenlandic flag and the flag of Denmark ("Dannebrog"). Today, Greenlanders display both the "Erfalasorput" and the "Dannebrog"—often side-by-side. The flag of Greenland is the only national flag of a Nordic country or territory without a Nordic Cross.
Greenland first entertained the idea of a flag of its own in 1973, when five Greenlanders proposed a green, white and blue flag. The following year, a newspaper solicited eleven design proposals (all but one of which were Nordic cross designs) and polled the people to determine the most popular. The flag of Denmark was preferred to the others. Little came of this effort.
In 1978, Denmark granted home rule to Greenland, making it an equal member of the "Danish Realm". The home rule government held an official call for flag proposals, receiving 555 (of which 293 were submitted by Greenlanders).
The deciding committee came to no consensus, so more proposals were solicited. Finally the present red-and-white design by Christiansen narrowly won over a green-and-white Nordic cross by a vote of fourteen to eleven. Christiansen's red-and-white flag was officially adopted on 1 May 1989.
To honour the tenth anniversary of the "Erfalasorput", the Greenland Post Office issued commemorative postage stamps and a leaflet by the flag's creator. He described the white stripe as representing the glaciers and ice cap, which cover more than 80% of the island; the red stripe, the ocean; the red semicircle, the sun, with its bottom part sunk in the ocean; and the white semicircle, the icebergs and pack ice. The design is also reminiscent of the setting sun half-submerged below the horizon and reflected on the sea. In 1985 it was reported that Greenland's flag had exactly the same motif as the flag of the Danish rowing club HEI Rosport, which was founded before Greenland's flag was chosen. It is not clear whether this is a case of plagiarism or just a coincidence, but the rowing club gave Greenland permission to use their flag.
The colours of the "Erfalasorput" are the same as those of the "Dannebrog", symbolizing Greenland's place in the Danish realm. | https://en.wikipedia.org/wiki?curid=12815 |
Gustav Radbruch
Gustav Radbruch (21 November 1878 – 23 November 1949) was a German legal scholar and politician. He served as Minister of Justice of Germany during the early Weimar period. Radbruch is also regarded as one of the most influential legal philosophers of the 20th century.
Born in Lübeck, Radbruch studied law in Munich, Leipzig and Berlin. He passed his first bar exam ("Staatsexamen") in Berlin in 1901, and the following year he received his doctorate with a dissertation on "The Theory of Adequate Causation". This was followed in 1903 by his qualification to teach criminal law in Heidelberg. In 1904, he was appointed Professor of criminal and trial law and legal philosophy in Heidelberg. In 1914 he accepted a call to a professorship in Königsberg, and later that year assumed a professorship at Kiel.
Radbruch was a member of the Social Democratic Party of Germany (SPD), and held a seat in the Reichstag from 1920 to 1924. In 1921–22 and throughout 1923, he was minister of justice in the cabinets of Joseph Wirth and Gustav Stresemann. During his time in office, a number of important laws were implemented, such as those giving women access to the justice system, and, after the assassination of Walter Rathenau, the law for the protection of the republic.
In 1926, Radbruch accepted a renewed call to lecture at Heidelberg. After the Nazi seizure of power in January 1933, Radbruch, as a former Social Democratic politician, was dismissed from his university post under the terms of the so-called "Law for the Restoration of the Professional Civil Service" (""Gesetz zur Wiederherstellung des Berufsbeamtentums""). (The universities, as public bodies, were subject to civil service laws and regulations.) Despite the employment ban in Nazi Germany, during 1935/36 he was able to spend a year in England, at University College, Oxford. An important practical outcome of this was his book, "Der Geist des englischen Rechts" (""The Spirit of English Law""), although this could be published only in 1945. During the Nazi period, he devoted himself primarily to cultural-historical work.
Immediately after the end of the Second World War in 1945, he resumed his teaching activities, but died at Heidelberg in 1949 without being able to complete his planned updated edition of his textbook on legal philosophy.
In September 1945, Radbruch published a short paper "Fünf Minuten Rechtsphilosophie" (Five Minutes of Legal Philosophy), that was influential in shaping the jurisprudence of values ("Wertungsjurisprudenz"), prevalent in the aftermath of World War II as a reaction against legal positivism.
Radbruch's legal philosophy derived from neo-Kantianism, which assumes that a categorical cleavage exists between "is" ("sein") and "ought" ("sollen"). According to this view, "ought" can never be derived from "is". Characteristic of the Heidelberg school of neo-Kantianism to which Radbruch subscribed was that it interpolated value-related cultural studies between the explanatory sciences ("is") and the philosophical teaching of values ("ought").
In relation to the law, this triadism shows itself in the subfields of legal sociology, legal philosophy and legal dogmatics, with legal dogmatics occupying the place in between. It sets itself against positive law as the latter appears in social reality, and proceeds methodologically from the objective "ought" sense of the law, which reveals itself through value-related interpretation.
The core of Radbruch's legal philosophy consists of his tenets of the concept of law and the idea of law. The idea of law is defined through a triad of justice, utility and certainty. Radbruch thereby had the idea of utility or usefulness spring forth from an analysis of the idea of justice. Upon this notion was based the Radbruch formula, which is still vigorously debated today. The concept of law, for Radbruch, is "nothing other than the given fact, which has the sense to serve the idea of law."
Hotly disputed is the question whether Radbruch was a legal positivist before 1933 and executed an about-face in his thinking due to the advent of Nazism, or whether he continued to develop, under the impression of Nazi crimes, the relativistic values-teaching he had already been advocating before 1933.
The controversy between the spirit and the letter of the law was brought back to public attention in Germany by the trials of former East German soldiers who guarded the Berlin Wall, which turned on the defence of following orders. Radbruch's theories are posited against the positivist "pure theory of law" represented by Hans Kelsen and, to some extent, also by Georg Jellinek.
In sum, Radbruch's formula argues that where statutory law is incompatible with the requirements of justice "to an intolerable degree", or where statutory law was obviously designed in a way that deliberately negates "the equality that is the core of all justice", statutory law must be disregarded by a judge in favour of the justice principle. Since its first publication in 1946 the principle has been accepted by Germany's Federal Constitutional Court in a variety of cases. Many people partially blame the older German legal tradition of legal positivism for the ease with which Hitler obtained power in an outwardly "legal" manner, rather than by means of a coup. Arguably, the shift to a concept of natural law ought to act as a safeguard against dictatorship, an untrammeled State power and the abrogation of civil rights. | https://en.wikipedia.org/wiki?curid=12816 |
Greek fire
Greek fire was an incendiary weapon used by the Eastern Roman (Byzantine) Empire beginning . Used to set light to enemy ships, it consisted of a combustible compound emitted by a flame-throwing weapon. Greek fire was first used by the Romans besieged in Constantinople (673–78). Some historians believe it could be ignited on contact with water, and was probably based on naphtha and quicklime. The Byzantines typically used it in naval battles to great effect, as it could continue burning while floating on water. The technological advantage it provided was responsible for many key Byzantine military victories, most notably the salvation of Constantinople from two Arab sieges, thus securing the Empire's survival.
The impression made by Greek fire on the western European Crusaders was such that the name was applied to any sort of incendiary weapon, including those used by Arabs, the Chinese, and the Mongols. However, these mixtures used different formulas than the Byzantine Greek fire, which was a closely guarded state secret. Byzantines also used pressurized nozzles ("siphōn"s) to project the liquid onto the enemy, in a manner resembling a modern flamethrower.
Although usage of the term "Greek fire" has been general in English and most other languages since the Crusades, original Byzantine sources called the substance a variety of names, such as "sea fire" (Medieval Greek: ), "Roman fire" ( ), "war fire" ( ), "liquid fire" ( ), "sticky fire" ( ), or "manufactured fire" ( ).
The composition of Greek fire remains a matter of speculation and debate, with various proposals including combinations of pine resin, naphtha, quicklime, calcium phosphide, sulfur, or niter. In his history of Rome, Titus Livy describes priestesses of Bacchus dipping fire into the water, which did not extinguish, "for it was sulphur mixed with lime."
Incendiary and flaming weapons were used in warfare for centuries before Greek fire was invented. They included a number of sulfur-, petroleum-, and bitumen-based mixtures. Incendiary arrows and pots containing combustible substances surrounded by caltrops or spikes, or launched by catapults, were used as early as the 9th century BC by the Assyrians and were extensively used in the Greco-Roman world as well. Furthermore, Thucydides mentions that in the siege of Delium in 424 BC a long tube on wheels was used which blew flames forward using a large bellows. The Roman author Julius Africanus, writing in the 3rd century AD, records a mixture that ignited from adequate heat and intense sunlight, used in grenades or night attacks: "Automatic fire also by the following formula. This is the recipe: take equal amounts of sulphur, rock salt, ashes, thunder stone, and pyrite and pound fine in a black mortar at midday sun. Also mix in equal amounts of each ingredient black mulberry resin and Zakynthian asphalt, the latter in a liquid form and free-flowing, resulting in a product that is sooty colored. Then add to the asphalt the tiniest amount of quicklime. But because the sun is at its zenith, one must pound it carefully and protect the face, for it will ignite suddenly. When it catches fire, one should seal it in some sort of copper receptacle; in this way you will have it available in a box, without exposing it to the sun. If you should wish to ignite enemy armaments, you will smear it on in the evening, either on the armaments or some other object, but in secret; when the sun comes up, everything will be burnt up." In naval warfare, the Eastern Roman Emperor Anastasius I (r. 491–518) is recorded by the chronicler John Malalas to have been advised by a philosopher from Athens called Proclus to use sulfur to burn the ships of Vitalianus.
Greek fire proper, however, was developed in and is ascribed by the chronicler Theophanes to Kallinikos (Latinized Callinicus), an architect from Heliopolis in the former province of Phoenice, by then overrun by the Muslim conquests:
The accuracy and exact chronology of this account is open to question: Theophanes reports the use of fire-carrying and "siphōn"-equipped ships by the Byzantines a couple of years before the supposed arrival of Kallinikos at Constantinople. If this is not due to chronological confusion of the events of the siege, it may suggest that Kallinikos merely introduced an improved version of an established weapon. The historian James Partington further thinks it likely that Greek fire was not in fact the creation of any single person but "invented by chemists in Constantinople who had inherited the discoveries of the Alexandrian chemical school." Indeed, the 11th-century chronicler George Kedrenos records that Kallinikos came from Heliopolis in Egypt, but most scholars reject this as an error. Kedrenos also records the story, considered rather implausible by modern scholars, that Kallinikos' descendants, a family called "Lampros", "brilliant," kept the secret of the fire's manufacture and continued doing so to Kedrenos' time.
Kallinikos' development of Greek fire came at a critical moment in the Byzantine Empire's history: weakened by its long wars with Sassanid Persia, the Byzantines had been unable to effectively resist the onslaught of the Muslim conquests. Within a generation, Syria, Palestine, and Egypt had fallen to the Arabs, who in set out to conquer the imperial capital of Constantinople. Greek fire was used to great effect against the Muslim fleets, helping to repel the Muslims at the first and second Arab sieges of the city. Records of its use in later naval battles against the Saracens are more sporadic, but it did secure a number of victories, especially in the phase of Byzantine expansion in the late 9th and early 10th centuries. Utilisation of the substance was prominent in Byzantine civil wars, chiefly the revolt of the thematic fleets in 727 and the large-scale rebellion led by Thomas the Slav in 821–823. In both cases, the rebel fleets were defeated by the Constantinopolitan Imperial Fleet through the use of Greek fire. The Byzantines also used the weapon to devastating effect against the various Rus' raids on the Bosporus, especially those of 941 and 1043, as well as during the Bulgarian war of 970–971, when the fire-carrying Byzantine ships blockaded the Danube.
The importance placed on Greek fire during the Empire's struggle against the Arabs would lead to its discovery being ascribed to divine intervention. The Emperor Constantine Porphyrogennetos (r. 945–959), in his book "De Administrando Imperio", admonishes his son and heir, Romanos II (r. 959–963), to never reveal the secrets of its composition, as it was "shown and revealed by an angel to the great and holy first Christian emperor Constantine" and that the angel bound him "not to prepare this fire but for Christians, and only in the imperial city." As a warning, he adds that one official, who was bribed into handing some of it over to the Empire's enemies, was struck down by a "flame from heaven" as he was about to enter a church. As the latter incident demonstrates, the Byzantines could not avoid capture of their precious secret weapon: the Arabs captured at least one fireship intact in 827, and the Bulgars captured several "siphōns" and much of the substance itself in 812/814. This, however, was apparently not enough to allow their enemies to copy it (see below). The Arabs, for instance, employed a variety of incendiary substances similar to the Byzantine weapon, but they were never able to copy the Byzantine method of deployment by "siphōn", and used catapults and grenades instead.
Greek fire continued to be mentioned during the 12th century, and Anna Komnene gives a vivid description of its use in a naval battle against the Pisans in 1099. However, although the use of hastily improvised fireships is mentioned during the 1203 siege of Constantinople by the Fourth Crusade, no report confirms the use of the actual Greek fire. This might be because of the general disarmament of the Empire in the 20 years leading up to the sacking, or because the Byzantines had lost access to the areas where the primary ingredients were to be found, or even perhaps because the secret had been lost over time.
Records of a 13th-century event in which "Greek fire" was used by the Saracens against the Crusaders can be read through the Memoirs of the Lord of Joinville during the Seventh Crusade. One description of the memoir says "the tail of fire that trailed behind it was as big as a great spear; and it made such a noise as it came, that it sounded like the thunder of heaven. It looked like a dragon flying through the air. Such a bright light did it cast, that one could see all over the camp as though it were day, by reason of the great mass of fire, and the brilliance of the light that it shed."
In the 19th century, it is reported that an Armenian by the name of Kavafian approached the government of the Ottoman Empire with a new type of Greek fire he claimed to have developed. Kavafian refused to reveal its composition when asked by the government, insisting that he be placed in command of its use during naval engagements. Not long after this, he was poisoned by imperial authorities, without their ever having found out his secret.
As Constantine Porphyrogennetos' warnings show, the ingredients and the processes of manufacture and deployment of Greek fire were carefully guarded military secrets. So strict was the secrecy that the composition of Greek fire was lost forever and remains a source of speculation. Consequently, the "mystery" of the formula has long dominated the research into Greek fire. Despite this almost exclusive focus, however, Greek fire is best understood as a complete weapon system of many components, all of which were needed to operate together to render it effective. This comprised not only the formula of its composition, but also the specialized dromon ships that carried it into battle, the device used to prepare the substance by heating and pressurizing it, the "siphōn" projecting it, and the special training of the "siphōnarioi" who used it. Knowledge of the whole system was highly compartmentalised, with operators and technicians aware of the secrets of only one component, ensuring that no enemy could gain knowledge of it in its entirety. This accounts for the fact that when the Bulgarians took Mesembria and Debeltos in 814, they captured 36 "siphōns" and even quantities of the substance itself, but were unable to make any use of them.
The information available on Greek fire is exclusively indirect, based on references in the Byzantine military manuals and a number of secondary historical sources such as Anna Komnene and Western European chroniclers, which are often inaccurate. In her "Alexiad", Anna Komnene provides a description of an incendiary weapon, which was used by the Byzantine garrison of Dyrrhachium in 1108 against the Normans. It is often regarded as an at least partial "recipe" for Greek fire: "This fire is made by the following arts: From the pine and certain such evergreen trees, inflammable resin is collected. This is rubbed with sulfur and put into tubes of reed, and is blown by men using it with violent and continuous breath. Then in this manner it meets the fire on the tip and catches light and falls like a fiery whirlwind on the faces of the enemies." At the same time, the reports by Western chroniclers of the famed "ignis graecus" are largely unreliable, since they apply the name to any and all sorts of incendiary substances.
In attempting to reconstruct the Greek fire system, the concrete evidence, as it emerges from the contemporary literary references, provides the following characteristics:
The first and, for a long time, most popular theory regarding the composition of Greek fire held that its chief ingredient was saltpeter, making it an early form of gunpowder. This argument was based on the "thunder and smoke" description, as well as on the distance the flame could be projected from the "siphōn", which suggested an explosive discharge. From the times of Isaac Vossius, several scholars adhered to this position, most notably the so-called "French school" during the 19th century, which included chemist Marcellin Berthelot.
This view has been rejected since, as saltpeter does not appear to have been used in warfare in Europe or the Middle East before the 13th century, and is absent from the accounts of the Muslim writers – the foremost chemists of the early medieval world – before the same period. In addition, the behavior of the proposed mixture would have been radically different from the "siphōn"-projected substance described by Byzantine sources.
A second view, based on the fact that Greek fire was inextinguishable by water (some sources suggest that water intensified the flames) suggested that its destructive power was the result of the explosive reaction between water and quicklime. Although quicklime was certainly known and used by the Byzantines and the Arabs in warfare, the theory is refuted by literary and empirical evidence. A quicklime-based substance would have to come in contact with water to ignite, while Emperor Leo's "Tactica" indicate that Greek fire was often poured directly on the decks of enemy ships, although admittedly, decks were kept wet due to lack of sealants. Likewise, Leo describes the use of grenades, which further reinforces the view that contact with water was not necessary for the substance's ignition. Furthermore, Zenghelis (1932) pointed out that, based on experiments, the actual result of the water–quicklime reaction would be negligible in the open sea.
Another similar proposition suggested that Kallinikos had in fact discovered calcium phosphide, which can be made by boiling bones in urine within a sealed vessel. On contact with water it releases phosphine, which ignites spontaneously. However, extensive experiments with calcium phosphide also failed to reproduce the described intensity of Greek fire.
Consequently, although the presence of either quicklime or saltpeter in the mixture cannot be entirely excluded, they were not the primary ingredient. Most modern scholars agree that Greek fire was based on either crude or refined petroleum, comparable to modern napalm. The Byzantines had easy access to crude oil from the naturally occurring wells around the Black Sea (e.g., the wells around Tmutorakan noted by Constantine Porphyrogennetos) or in various locations throughout the Middle East. An alternate name for Greek fire was "Median fire" (), and the 6th-century historian Procopius records that crude oil, called "naphtha" (in Greek: "naphtha", from Old Persian "naft") by the Persians, was known to the Greeks as "Median oil" (). This seems to corroborate the availability of naphtha as a basic ingredient of Greek fire.
Naphtha was also used by the Abbasids in the 9th century, with special troops, the "naffāṭūn", who wore thick protective suits and used small copper vessels containing burning oil, which they threw onto the enemy troops. There is also a surviving 9th century Latin text, preserved at Wolfenbüttel in Germany, which mentions the ingredients of what appears to be Greek fire and the operation of the "siphōn"s used to project it. Although the text contains some inaccuracies, it clearly identifies the main component as naphtha. Resins were probably added as a thickener (the "Praecepta Militaria" refer to the substance as , "sticky fire"), and to increase the duration and intensity of the flame. A modern theoretical concoction included the use of pine tar and animal fat along with other ingredients.
A 12th century treatise prepared by Mardi bin Ali al-Tarsusi for Saladin records an Arab version of Greek fire, called "naft", which also had a petroleum base, with sulfur and various resins added. Any direct relation with the Byzantine formula is unlikely. An Italian recipe from the 16th century has been recorded for recreational use; it includes coal from a willow tree, alcohol, incense, sulfur, wool and camphor as well as two undetermined components (burning salt and "pegola"); the concoction was guaranteed to "burn under water" and to be "beautiful."
The chief method of deployment of Greek fire, which sets it apart from similar substances, was its projection through a tube ("siphōn"), for use aboard ships or in sieges. Portable projectors ("cheirosiphōnes", χειροσίφωνες) were also invented, reputedly by Emperor Leo VI. The Byzantine military manuals also mention that jars ("chytrai" or "tzykalia") filled with Greek fire and caltrops wrapped with tow and soaked in the substance were thrown by catapults, while pivoting cranes ("gerania") were employed to pour it upon enemy ships. The "cheirosiphōnes" especially were prescribed for use at land and in sieges, both against siege machines and against defenders on the walls, by several 10th-century military authors, and their use is depicted in the "Poliorcetica" of Hero of Byzantium. The Byzantine dromons usually had a "siphōn" installed on their prow under the forecastle, but additional devices could also on occasion be placed elsewhere on the ship. Thus in 941, when the Byzantines were facing the vastly more numerous Rus' fleet, "siphōn"s were placed also amidships and even astern.
The use of tubular projectors (σίφων, "siphōn") is amply attested in the contemporary sources. Anna Komnene gives this account of beast-shaped Greek fire projectors being mounted to the bow of warships:
As he [the Emperor Alexios I] knew that the Pisans were skilled in sea warfare and dreaded a battle with them, on the prow of each ship he had a head fixed of a lion or other land-animal, made in brass or iron with the mouth open and then gilded over, so that their mere aspect was terrifying. And the fire which was to be directed against the enemy through tubes he made to pass through the mouths of the beasts, so that it seemed as if the lions and the other similar monsters were vomiting the fire.
Some sources provide more information on the composition and function of the whole mechanism. The Wolfenbüttel manuscript in particular provides the following description:
...having built a furnace right at the front of the ship, they set on it a copper vessel full of these things, having put fire underneath. And one of them, having made a bronze tube similar to that which the rustics call a "squitiatoria", "squirt," with which boys play, they spray [it] at the enemy.
Another, possibly first-hand, account of the use of Greek fire comes from the 11th-century "Yngvars saga víðförla", in which the Viking Ingvar the Far-Travelled faces ships equipped with Greek fire weapons:
[They] began blowing with smiths’ bellows at a furnace in which there was fire and there came from it a great din. There stood there also a brass [or bronze] tube and from it flew much fire against one ship, and it burned up in a short time so that all of it became white ashes...
The account, albeit embellished, corresponds with many of the characteristics of Greek fire known from other sources, such as a loud roar that accompanied its discharge. These two texts are also the only two sources that explicitly mention that the substance was heated over a furnace before being discharged; although the validity of this information is open to question, modern reconstructions have relied upon them.
Based on these descriptions and the Byzantine sources, John Haldon and Maurice Byrne designed a hypothetical apparatus as consisting of three main components: a bronze pump, which was used to pressurize the oil; a brazier, used to heat the oil (πρόπυρον, "propyron", "pre-heater"); and the nozzle, which was covered in bronze and mounted on a swivel (στρεπτόν, "strepton"). The brazier, burning a match of linen or flax that produced intense heat and the characteristic thick smoke, was used to heat oil and the other ingredients in an airtight tank above it, a process that also helped to dissolve the resins into a fluid mixture. The substance was pressurized by the heat and the usage of a force pump. After it had reached the proper pressure, a valve connecting the tank with the swivel was opened and the mixture was discharged from its end, being ignited at its mouth by some source of flame. The intense heat of the flame made necessary the presence of heat shields made of iron (βουκόλια, "boukolia"), which are attested in the fleet inventories.
The process of operating Haldon and Byrne's design was fraught with danger, as the mounting pressure could easily make the heated oil tank explode, a flaw which was not recorded as a problem with the historical fire weapon. In the experiments conducted by Haldon in 2002 for the episode "Fireship" of the television series "Machines Time Forgot", even modern welding techniques failed to secure adequate insulation of the bronze tank under pressure. This led to the relocation of the pressure pump between the tank and the nozzle. The full-scale device built on this basis established the effectiveness of the mechanism's design, even with the simple materials and techniques available to the Byzantines. The experiment used crude oil mixed with wood resins, and achieved a flame temperature of over and an effective range of up to .
The portable "cheirosiphōn" ("hand-"siphōn""), the earliest analogue to a modern flamethrower, is extensively attested in the military documents of the 10th century, and recommended for use in both sea and land. They first appear in the "Tactica" of emperor Leo VI the Wise, who claims to have invented them. Subsequent authors continued to refer to the "cheirosiphōnes", especially for use against siege towers, although Nikephoros II Phokas also advises their use in field armies, with the aim of disrupting the enemy formation. Although both Leo VI and Nikephoros Phokas claim that the substance used in the "cheirosiphōnes" was the same as in the static devices used on ships, Haldon and Byrne consider that the former were manifestly different from their larger cousins, and theorize that the device was fundamentally different, "a simple syringe [that] squirted both liquid fire (presumably unignited) and noxious juices to repel enemy troops." The illustrations of Hero's "Poliorcetica" show the "cheirosiphōn" also throwing the ignited substance.
In its earliest form, Greek fire was hurled onto enemy forces by firing a burning cloth-wrapped ball, perhaps containing a flask, using a form of light catapult, most probably a seaborne variant of the Roman light catapult or onager. These were capable of hurling light loads, around , a distance of .
Although the destructiveness of Greek fire is indisputable, it did not make the Byzantine navy invincible. It was not, in the words of naval historian John Pryor, a "ship-killer" comparable to the naval ram, which, by then, had fallen out of use. While Greek fire remained a potent weapon, its limitations were significant when compared to more traditional forms of artillery: in its "siphōn"-deployed version, it had a limited range, and it could be used safely only in a calm sea and with favourable wind conditions.
The Muslim navies eventually adapted themselves to it by staying out of its effective range and devising methods of protection such as felt or hides soaked in vinegar.
In Steve Berry's 2007 novel "The Venetian Betrayal" Greek Fire is described and used as a weapon.
In William Golding's 1958 play "The Brass Butterfly", derived from his novella "Envoy Extraordinary", the Greek inventor Phanocles demonstrates explosives to the emperor Mamillius. The emperor decides that his empire is not ready for this or for Phanocles's other inventions and sends him on "a slow boat to China".
In Victor Canning's stage play "Honour Bright" (1960), the crusader Godfrey of Ware returns with a casket of Greek Fire given to him by an old man in Athens.
In Rick Riordan's Greek-mythology novels, Greek fire is described as a very soft green solid that, when it explodes, spreads over an area and burns continuously. It is portrayed as very powerful and dangerous.
In C. J. Sansom's historical mystery novel "Dark Fire", Thomas Cromwell sends the lawyer Matthew Shardlake to recover the secret of Greek fire, following its discovery in the library of a dissolved London monastery.
In Michael Crichton's sci-fi novel "Timeline", Professor Edward Johnston is stuck in the past in 14th century Europe, and claims to have knowledge of Greek fire.
In Mika Waltari's novel "The Dark Angel", some old men who are the last ones who know the secret of Greek fire are mentioned as present in the last Christian services held in Hagia Sophia before the Fall of Constantinople. The narrator is told that in the event of the city's fall, they will be killed so as to keep the secret from the Turks.
In George R. R. Martin's fantasy series of novels "A Song of Ice and Fire", and its television adaptation "Game of Thrones", wildfire is similar to Greek fire. It was used in naval battles as it could remain lit on water, and its recipe was jealously guarded.
In Leland Purvis's graphic novel "Vox: Collected Works, 1999–2003", there is a passage detailing Callinicus and Greek Fire.
Garbage in, garbage out
In computer science, garbage in, garbage out (GIGO) is the concept that flawed or nonsensical input data produces nonsense output, or "garbage". In the UK the term sometimes used is rubbish in, rubbish out (RIRO).
The specific phrase is attributed by FOLDOC to the late Wilf Hey, who is also credited by FOLDOC with work in developing RPG while working at IBM in 1965.
The principle also applies more generally to all analysis and logic, in that arguments are unsound if their premises are flawed.
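The principle can be illustrated with a toy sketch (the function and figures below are hypothetical, not taken from any source): the arithmetic is perfectly correct, yet a single mistyped input silently turns a sensible answer into garbage.

```python
def average_speed_kmh(distances_km, hours):
    """Correct arithmetic: total distance divided by total time."""
    return sum(distances_km) / sum(hours)

# Good data in, good answer out:
clean = average_speed_kmh([60, 90], [1.0, 1.5])    # 60.0 km/h

# One mistyped duration (0.15 h instead of 1.5 h) and the same,
# perfectly correct code reports an implausible ~130 km/h:
garbage = average_speed_kmh([60, 90], [1.0, 0.15])
```

The program never fails or warns; only validation of the inputs themselves (for instance, rejecting implausible durations) could have caught the error.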
It was popular in the early days of computing, but applies even more today, when powerful computers can produce large amounts of erroneous data or information in a short time. The first use of the phrase has been dated to a November 10, 1957, syndicated newspaper article about US Army mathematicians and their work with early computers, in which an Army Specialist named William D. Mellin explained that computers cannot think for themselves, and that "sloppily programmed" inputs inevitably lead to incorrect outputs. The underlying principle was noted by the inventor of the first programmable computing device design:
More recently, the Marine Accident Investigation Branch comes to a similar conclusion:
The term may have been derived from last-in, first-out (LIFO) or first-in, first-out (FIFO).
The term can also be used as an explanation for the poor quality of a digitized audio or video file. Although digitizing can be the first step in cleaning up a signal, it does not, by itself, improve the quality. Defects in the original analog signal will be faithfully recorded, but might be identified and removed by a subsequent step by digital signal processing.
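That last point can be sketched with a minimal example (a hypothetical signal and a generic filter, not any specific product's method): digitization records a click-like defect faithfully, and only a subsequent processing step, here a simple median filter, removes it.

```python
def median_filter(samples, window=3):
    """Replace each sample with the median of its neighbourhood,
    a classic way to suppress short spike/click defects."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        chunk = sorted(samples[max(0, i - half):i + half + 1])
        out.append(chunk[len(chunk) // 2])
    return out

digitized = [0.0, 0.1, 0.0, 9.5, 0.1, 0.0]  # the analog click is captured as-is
cleaned = median_filter(digitized)          # spike suppressed, signal otherwise kept
```

Digitizing alone left the 9.5 spike intact; the digital processing step afterwards is what actually cleans the signal.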
GIGO is commonly used to describe failures in human decision-making due to faulty, incomplete, or imprecise data. This sort of issue predates the computer age, but the term can still be applied.
GIGO was the name of a Usenet gateway program to FidoNet, MAUSnet, and other networks.
Incorrect data does not always preclude statistical analysis. Although incorrect or inaccurate data can hamper proper analysis, it can sometimes still be handled. Even the classic claim that "a broken clock is right twice a day" is only approximately true, since some stopped-clock settings are correct just once per day, or only once per year. By contrast, data that relays a count can be perfectly precise even when the count itself is inaccurate.
General Agreement on Tariffs and Trade
The General Agreement on Tariffs and Trade (GATT) is a legal agreement between many countries, whose overall purpose was to promote international trade by reducing or eliminating trade barriers such as tariffs or quotas. According to its preamble, its purpose was the "substantial reduction of tariffs and other trade barriers and the elimination of preferences, on a reciprocal and mutually advantageous basis."
The GATT was first discussed during the United Nations Conference on Trade and Employment and was the outcome of the failure of negotiating governments to create the International Trade Organization (ITO). It was signed by 23 nations in Geneva on 30 October 1947, and took effect on 1 January 1948. It remained in effect until the signature of the Uruguay Round Agreements by 123 nations in Marrakesh on 14 April 1994, which established the World Trade Organization (WTO) on 1 January 1995. The WTO is the successor to the GATT, and the original GATT text (GATT 1947) is still in effect under the WTO framework, subject to the modifications of GATT 1994. Nations that were not party to the GATT in 1995 need to meet the minimum conditions spelled out in specific documents before they can accede; in September 2019, the list contained 36 nations.
The GATT, and its successor the WTO, have successfully reduced tariffs. The average tariff levels for the major GATT participants were about 22% in 1947, but were 5% after the Uruguay Round in 1999. Experts attribute part of these tariff changes to GATT and the WTO.
The General Agreement on Tariffs and Trade is the umbrella name for a series of global trade negotiations held in a total of nine "rounds" between 1947 and 1995. The GATT was first conceived in the aftermath of the Allied victory in the Second World War at the 1947 United Nations Conference on Trade and Employment (UNCTE), at which the International Trade Organization (ITO) was one of the ideas proposed. It was hoped that the ITO would be run alongside the World Bank and the International Monetary Fund (IMF). More than 50 nations negotiated the ITO and the organization of its founding charter, but after the withdrawal of the United States these negotiations collapsed.
Preparatory sessions were held simultaneously at the UNCTE regarding the GATT. After several of these sessions, 23 nations signed the GATT on 30 October 1947 in Geneva, Switzerland. It came into force on 1 January 1948.
The second round took place in 1949 in Annecy, France. 13 countries took part in the round. The main focus of the talks was more tariff reductions, around 5,000 in total.
The third round occurred in Torquay, England in 1951. Thirty-eight countries took part in the round. 8,700 tariff concessions were made, cutting tariff levels to three-quarters of those in effect in 1948. The contemporaneous rejection by the U.S. of the Havana Charter signified the establishment of the GATT as a governing world body.
The fourth round returned to Geneva in 1955 and lasted until May 1956. Twenty-six countries took part in the round. $2.5 billion in tariffs were eliminated or reduced.
The fifth round occurred once more in Geneva and lasted from 1960–1962. The talks were named after U.S. Treasury Secretary and former Under Secretary of State, Douglas Dillon, who first proposed the talks. Twenty-six countries took part in the round. Along with reducing over $4.9 billion in tariffs, it also yielded discussion relating to the creation of the European Economic Community (EEC).
The sixth round of GATT multilateral trade negotiations was held from 1964 to 1967. It was named after U.S. President John F. Kennedy in recognition of his support for the reformulation of the United States trade agenda, which resulted in the Trade Expansion Act of 1962. This Act gave the President the widest-ever negotiating authority.
As the Dillon Round went through the laborious process of item-by-item tariff negotiations, it became clear, long before the Round ended, that a more comprehensive approach was needed to deal with the emerging challenges resulting from the formation of the European Economic Community (EEC) and EFTA, as well as Europe's re-emergence as a significant international trader more generally.
Japan's high economic growth rate portended the major role it would play later as an exporter, but the focal point of the Kennedy Round always was the United States-EEC relationship. Indeed, there was an influential American view that saw what became the Kennedy Round as the start of a transatlantic partnership that might ultimately lead to a transatlantic economic community.
To an extent, this view was shared in Europe, but the process of European unification created its own stresses under which the Kennedy Round at times became a secondary focus for the EEC. An example of this was the French veto in January 1963, before the round had even started, on membership by the United Kingdom.
Another was the internal crisis of 1965, which ended in the Luxembourg Compromise. Preparations for the new round were immediately overshadowed by the Chicken War, an early sign of the impact variable levies under the Common Agricultural Policy would eventually have. Some participants in the Round had been concerned that the convening of UNCTAD, scheduled for 1964, would result in further complications, but its impact on the actual negotiations was minimal.
In May 1963 Ministers reached agreement on three negotiating objectives for the round:
The working hypothesis for the tariff negotiations was a linear tariff cut of 50% with the smallest number of exceptions. A drawn-out argument developed about the trade effects a uniform linear cut would have on the dispersed rates (low and high tariffs quite far apart) of the United States as compared to the much more concentrated rates of the EEC, which also tended to fall in the lower half of United States tariff rates.
The EEC accordingly argued for an evening-out or harmonization of peaks and troughs through its écrêtement, double écart and thirty:ten proposals. Once negotiations had been joined, the lofty working hypothesis was soon undermined. The special-structure countries (Australia, Canada, New Zealand and South Africa), so called because their exports were dominated by raw materials and other primary commodities, negotiated their tariff reductions entirely through the item-by-item method.
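The arithmetic behind the dispute can be illustrated with a toy calculation. The sketch below uses invented tariff schedules (not actual US or EEC figures) to show why a uniform 50% linear cut leaves a dispersed schedule with much higher residual peaks than a concentrated one, which is what motivated the EEC's harmonization proposals.

```python
# Illustrative only: hypothetical tariff schedules, in percent.
# "dispersed" mimics a wide-spread profile like the US one described
# in the text; "concentrated" mimics a narrow-spread profile like the EEC's.

dispersed = [2, 5, 10, 25, 40]
concentrated = [8, 10, 12, 14, 16]

def linear_cut(rates, fraction=0.5):
    """Apply the same proportional cut to every rate (the 'working hypothesis')."""
    return [r * (1 - fraction) for r in rates]

def spread(rates):
    """Gap between the highest and lowest rate in a schedule."""
    return max(rates) - min(rates)

# After a 50% linear cut, the dispersed schedule still has a tariff peak
# of 20% and a spread of 19 points, while the concentrated schedule's
# spread is only 4 points. A linear cut preserves relative peaks, which
# is why harmonizing peaks and troughs was proposed instead.
print(spread(linear_cut(dispersed)))     # 19.0
print(spread(linear_cut(concentrated)))  # 4.0
```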
In the end, the result was an average 35% reduction in tariffs, except for textiles, chemicals, steel and other sensitive products; plus a 15% to 18% reduction in tariffs for agricultural and food products. In addition, the negotiations on chemicals led to a provisional agreement on the abolition of the American Selling Price (ASP). This was a method of valuing some chemicals used by the United States for the imposition of import duties which gave domestic manufacturers a much higher level of protection than the tariff schedule indicated.
However, this part of the outcome was disallowed by Congress, and the American Selling Price was not abolished until Congress adopted the results of the Tokyo Round. The results on agriculture overall were poor. The most notable achievement was agreement on a Memorandum of Agreement on Basic Elements for the Negotiation of a World Grains Arrangement, which eventually was rolled into a new International Grains Arrangement.
The EEC claimed that for it the main result of the negotiations on agriculture was that they "greatly helped to define its own common policy". The developing countries, who played a minor role throughout the negotiations in this round, benefited nonetheless from substantial tariff cuts particularly in non-agricultural items of interest to them.
Their main achievement at the time, however, was seen to be the adoption of Part IV of the GATT, which absolved them from according reciprocity to developed countries in trade negotiations. In the view of many developing countries, this was a direct result of the call at UNCTAD I for a better trade deal for them.
There has been argument ever since whether this symbolic gesture was a victory for them, or whether it ensured their exclusion in the future from meaningful participation in the multilateral trading system. On the other hand, there was no doubt that the extension of the Long-Term Arrangement Regarding International Trade in Cotton Textiles, which later became the Multi-Fiber Arrangement, for three years until 1970 led to the longer-term impairment of export opportunities for developing countries.
Another outcome of the Kennedy Round was the adoption of an Anti-dumping Code, which gave more precise guidance on the implementation of Article VI of the GATT. In particular, it sought to ensure speedy and fair investigations, and it imposed limits on the retrospective application of anti-dumping measures.
The Kennedy Round took place from 1964 to 1967. $40 billion in tariffs were eliminated or reduced.
The Tokyo Round (1973–1979) reduced tariffs and established new regulations aimed at controlling the proliferation of non-tariff barriers and voluntary export restrictions. 102 countries took part in the round. Concessions were made on $19 billion worth of trade.
The Quadrilateral Group was formed in 1982 by the European Union, the United States, Japan and Canada, in order to influence the GATT.
The Uruguay Round began in 1986. It was the most ambitious round to date, seeking to expand the competence of the GATT to important new areas such as services, capital, intellectual property, textiles, and agriculture. 123 countries took part in the round. The Uruguay Round was also the first set of multilateral trade negotiations in which developing countries played an active role.
Agriculture was essentially exempted from previous agreements, as it was given special status in the areas of import quotas and export subsidies, with only mild caveats. However, by the time of the Uruguay Round, many countries considered the exemption of agriculture to be sufficiently glaring that they refused to sign a new deal without some movement on agricultural products. These fourteen countries came to be known as the "Cairns Group", and included mostly small and medium-sized agricultural exporters such as Australia, Brazil, Canada, Indonesia, and New Zealand.
The Agreement on Agriculture of the Uruguay Round continues to be the most substantial trade liberalization agreement in agricultural products in the history of trade negotiations. The goals of the agreement were to improve market access for agricultural products, reduce domestic support of agriculture in the form of price-distorting subsidies and quotas, eliminate over time export subsidies on agricultural products and to harmonize to the extent possible sanitary and phytosanitary measures between member countries.
The Doha Development Round began with a ministerial-level meeting in Doha, Qatar, in 2001. The aim was to focus on the needs of developing countries. The major factors discussed include trade facilitation, services, rules of origin and dispute settlement. Special and differential treatment for developing countries was also discussed as a major concern. Subsequent ministerial meetings took place in Cancún, Mexico (2003), and Hong Kong (2005). Related negotiations took place in Paris, France (2005), Potsdam, Germany (2007), and Geneva, Switzerland (2004, 2006, 2008). Progress in negotiations stalled after the breakdown of the July 2008 negotiations.
In 1993, the GATT was updated ('GATT 1994') to include new obligations upon its signatories. One of the most significant changes was the creation of the World Trade Organization (WTO). The 76 existing GATT members and the European Communities became the founding members of the WTO on 1 January 1995. The other 51 GATT members rejoined the WTO in the following two years (the last being Congo in 1997). Since the founding of the WTO, 33 new non-GATT members have joined and 22 are currently negotiating membership. There are a total of 164 member countries in the WTO, with Liberia and Afghanistan being the newest members as of 2018.
Of the original GATT members, Syria, Lebanon and the SFR Yugoslavia have not rejoined the WTO. FR Yugoslavia (renamed Serbia and Montenegro, with membership negotiations later split in two) is not recognised as a direct SFRY successor state; its application is therefore considered a new (non-GATT) one. The General Council of the WTO agreed on 4 May 2010 to establish a working party to examine Syria's request for WTO membership. The contracting parties who founded the WTO ended official agreement of the "GATT 1947" terms on 31 December 1995. Montenegro became a member in 2012, while Serbia is in the decision stage of the negotiations and is expected to become a member of the WTO in the future.
Whilst GATT was a set of rules agreed upon by nations, the WTO is an intergovernmental organization with its own headquarters and staff, and its scope includes both traded goods and trade within the service sector and intellectual property rights. Although it was designed to serve multilateral agreements, during several rounds of GATT negotiations (particularly the Tokyo Round) plurilateral agreements created selective trading and caused fragmentation among members. By contrast, WTO arrangements are generally multilateral, serving as the settlement mechanism for the GATT agreements.
The average tariff levels for the major GATT participants were about 22 percent in 1947. As a result of the first negotiating rounds, tariffs were reduced in the GATT core of the United States, United Kingdom, Canada, and Australia, relative to other contracting parties and non-GATT participants. By the Kennedy Round (1964–67), the average tariff levels of GATT participants were about 15%. After the Uruguay Round, tariffs were under 5%.
In addition to facilitating applied tariff reductions, the early GATT's contribution to trade liberalization "include binding the negotiated tariff reductions for an extended period (made more permanent in 1955), establishing the generality of nondiscrimination through most-favored nation (MFN) treatment and national treatment status, ensuring increased transparency of trade policy measures, and providing a forum for future negotiations and for the peaceful resolution of bilateral disputes. All of these elements contributed to the rationalization of trade policy and the reduction of trade barriers and policy uncertainty."
According to Dartmouth economic historian Douglas Irwin,
The prosperity of the world economy over the past half century owes a great deal to the growth of world trade which, in turn, is partly the result of farsighted officials who created the GATT. They established a set of procedures giving stability to the trade-policy environment and thereby facilitating the rapid growth of world trade. With the long run in view, the original GATT conferees helped put the world economy on a sound foundation and thereby improved the livelihood of hundreds of millions of people around the world.
Following the United Kingdom's vote to withdraw from the European Union, supporters of leaving the EU suggested that Article 24, paragraph 5B of the treaty could be used to maintain a "standstill" in trading conditions between the UK and the EU in the event of the UK leaving the EU without a trade deal, hence preventing the introduction of tariffs. According to proponents of this approach, it could be used to implement an interim agreement pending negotiation of a final agreement lasting up to ten years.
This claim formed the basis of the so-called "Malthouse compromise" between Conservative party factions as to how to replace the withdrawal agreement. However, this plan was rejected by parliament. The claim that Article 24 might be used was also adopted by Boris Johnson during his 2019 campaign to lead the Conservative Party.
The claim that Article 24 might be used in this way has been criticised by Mark Carney, Liam Fox and others as being unrealistic given the requirement in paragraph 5c of the treaty that there be an agreement between the parties in order for paragraph 5b to be of use as, in the event of a "no-deal" scenario, there would be no agreement. Moreover, critics of the GATT 24 approach point out that services would not be covered by such an arrangement. | https://en.wikipedia.org/wiki?curid=12831 |
G protein-coupled receptor
G protein-coupled receptors (GPCRs), also known as seven-(pass)-transmembrane domain receptors, 7TM receptors, heptahelical receptors, serpentine receptors, and G protein-linked receptors (GPLR), constitute a large protein family of receptors that detect molecules outside the cell and activate internal signal transduction pathways and, ultimately, cellular responses. They couple with G proteins and are called seven-transmembrane receptors because they pass through the cell membrane seven times.
G protein-coupled receptors are found only in eukaryotes, including yeast, choanoflagellates, and animals. The ligands that bind and activate these receptors include light-sensitive compounds, odors, pheromones, hormones, and neurotransmitters, and vary in size from small molecules to peptides to large proteins. G protein-coupled receptors are involved in many diseases.
There are two principal signal transduction pathways involving the G protein-coupled receptors: the cAMP signal pathway and the phosphatidylinositol signal pathway.
When a ligand binds to the GPCR it causes a conformational change in the GPCR, which allows it to act as a guanine nucleotide exchange factor (GEF). The GPCR can then activate an associated G protein by exchanging the GDP bound to the G protein for a GTP. The G protein's α subunit, together with the bound GTP, can then dissociate from the β and γ subunits to further affect intracellular signaling proteins or target functional proteins directly depending on the α subunit type (Gαs, Gαi/o, Gαq/11, Gα12/13).
GPCRs are an important drug target and approximately 34% of all Food and Drug Administration (FDA) approved drugs target 108 members of this family. The global sales volume for these drugs is estimated to be 180 billion US dollars.
The 2012 Nobel Prize in Chemistry was awarded to Brian Kobilka and Robert Lefkowitz for their work that was "crucial for understanding how G protein-coupled receptors function". There have been at least seven other Nobel Prizes awarded for some aspect of G protein–mediated signaling. As of 2012, two of the top ten global best-selling drugs (Advair Diskus and Abilify) act by targeting G protein-coupled receptors.
The exact size of the GPCR superfamily is unknown, but at least 831 different human genes (or ~ 4% of the entire protein-coding genome) have been predicted to code for them from genome sequence analysis. Although numerous classification schemes have been proposed, the superfamily was classically divided into three main classes (A, B, and C) with no detectable shared sequence homology between classes.
The largest class by far is class A, which accounts for nearly 85% of the GPCR genes. Of class A GPCRs, over half of these are predicted to encode olfactory receptors, while the remaining receptors are liganded by known endogenous compounds or are classified as orphan receptors. Despite the lack of sequence homology between classes, all GPCRs have a common structure and mechanism of signal transduction. The very large rhodopsin A group has been further subdivided into 19 subgroups (A1-A19).
According to the classical A-F system, GPCRs can be grouped into 6 classes based on sequence homology and functional similarity:
More recently, an alternative classification system called GRAFS (Glutamate, Rhodopsin, "Adhesion", Frizzled/Taste2, Secretin) has been proposed for vertebrate GPCRs. They correspond to classical classes C, A, B2, F, and B.
An early study based on available DNA sequence suggested that the human genome encodes roughly 750 G protein-coupled receptors, about 350 of which detect hormones, growth factors, and other endogenous ligands. Approximately 150 of the GPCRs found in the human genome have unknown functions.
Some web-servers and bioinformatics prediction methods have been used for predicting the classification of GPCRs according to their amino acid sequence alone, by means of the pseudo amino acid composition approach.
GPCRs are involved in a wide variety of physiological processes. Some examples of their physiological roles include:
GPCRs are integral membrane proteins that possess seven membrane-spanning domains or transmembrane helices. The extracellular parts of the receptor can be glycosylated. These extracellular loops also contain two highly conserved cysteine residues that form disulfide bonds to stabilize the receptor structure. Some seven-transmembrane helix proteins (channelrhodopsin) that resemble GPCRs may contain ion channels, within their protein.
In 2000, the first crystal structure of a mammalian GPCR, that of bovine rhodopsin, was solved. In 2007, the first structure of a human GPCR was solved. This human β2-adrenergic receptor GPCR structure proved highly similar to that of bovine rhodopsin. The structures of activated or agonist-bound GPCRs have also been determined. These structures indicate how ligand binding at the extracellular side of a receptor leads to conformational changes in the cytoplasmic side of the receptor. The biggest change is an outward movement of the cytoplasmic part of the 5th and 6th transmembrane helices (TM5 and TM6). The structure of the activated beta-2 adrenergic receptor in complex with Gs confirmed that Gα binds to a cavity created by this movement.
GPCRs exhibit a similar structure to some other proteins with seven transmembrane domains, such as microbial rhodopsins and adiponectin receptors 1 and 2 (ADIPOR1 and ADIPOR2). However, these 7TMH (7-transmembrane helices) receptors and channels do not associate with G proteins. In addition, ADIPOR1 and ADIPOR2 are oriented oppositely to GPCRs in the membrane (i.e. GPCRs usually have an extracellular N-terminus, cytoplasmic C-terminus, whereas ADIPORs are inverted).
In terms of structure, GPCRs are characterized by an extracellular N-terminus, followed by seven transmembrane (7-TM) α-helices (TM-1 to TM-7) connected by three intracellular (IL-1 to IL-3) and three extracellular loops (EL-1 to EL-3), and finally an intracellular C-terminus. The GPCR arranges itself into a tertiary structure resembling a barrel, with the seven transmembrane helices forming a cavity within the plasma membrane that serves as a ligand-binding domain that is often covered by EL-2. Ligands may also bind elsewhere, however, as is the case for bulkier ligands (e.g., proteins or large peptides), which instead interact with the extracellular loops, or, as illustrated by the class C metabotropic glutamate receptors (mGluRs), the N-terminal tail. The class C GPCRs are distinguished by their large N-terminal tail, which also contains a ligand-binding domain. Upon glutamate-binding to an mGluR, the N-terminal tail undergoes a conformational change that leads to its interaction with the residues of the extracellular loops and TM domains. The eventual effect of all three types of agonist-induced activation is a change in the relative orientations of the TM helices (likened to a twisting motion) leading to a wider intracellular surface and "revelation" of residues of the intracellular helices and TM domains crucial to signal transduction function (i.e., G-protein coupling). Inverse agonists and antagonists may also bind to a number of different sites, but the eventual effect must be prevention of this TM helix reorientation.
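The alternating membrane topology described above can be laid out mechanically. The sketch below is pure bookkeeping, not structural data: it generates the segment order from N- to C-terminus using the article's TM-n / IL-n / EL-n labels, relying only on the fact that the chain is intracellular after odd-numbered helices and extracellular after even-numbered ones.

```python
# Generate the canonical GPCR segment order: extracellular N-terminus,
# seven TM helices alternating with intracellular (IL) and extracellular
# (EL) loops, and an intracellular C-terminus.

segments = ["N-terminus (extracellular)"]
il = el = 0
for tm in range(1, 8):
    segments.append(f"TM-{tm}")
    if tm == 7:
        break  # TM-7 runs straight into the C-terminal tail
    if tm % 2 == 1:
        il += 1
        segments.append(f"IL-{il}")  # chain is intracellular after odd TMs
    else:
        el += 1
        segments.append(f"EL-{el}")  # and extracellular after even TMs
segments.append("C-terminus (intracellular)")

# Three intracellular and three extracellular loops, as described.
assert il == 3 and el == 3
assert len(segments) == 15  # N-term + 7 TM helices + 6 loops + C-term
```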
The structure of the N- and C-terminal tails of GPCRs may also serve important functions beyond ligand-binding. For example, the C-terminus of M3 muscarinic receptors is sufficient, and the six-amino-acid polybasic (KKKRRK) domain in the C-terminus is necessary, for its preassembly with Gq proteins. In particular, the C-terminus often contains serine (Ser) or threonine (Thr) residues that, when phosphorylated, increase the affinity of the intracellular surface for the binding of scaffolding proteins called β-arrestins (β-arr). Once bound, β-arrestins both sterically prevent G-protein coupling and may recruit other proteins, leading to the creation of signaling complexes involved in extracellular-signal regulated kinase (ERK) pathway activation or receptor endocytosis (internalization). As the phosphorylation of these Ser and Thr residues often occurs as a result of GPCR activation, the β-arr-mediated G-protein-decoupling and internalization of GPCRs are important mechanisms of desensitization. In addition, internalized "mega-complexes" consisting of a single GPCR, β-arr (in the tail conformation), and heterotrimeric G protein exist and may account for protein signaling from endosomes.
A final common structural theme among GPCRs is palmitoylation of one or more sites of the C-terminal tail or the intracellular loops. Palmitoylation is the covalent modification of cysteine (Cys) residues via addition of hydrophobic acyl groups, and has the effect of targeting the receptor to cholesterol- and sphingolipid-rich microdomains of the plasma membrane called lipid rafts. As many of the downstream transducer and effector molecules of GPCRs (including those involved in negative feedback pathways) are also targeted to lipid rafts, this has the effect of facilitating rapid receptor signaling.
GPCRs respond to extracellular signals mediated by a huge diversity of agonists, ranging from proteins to biogenic amines to protons, but all transduce this signal via a mechanism of G-protein coupling. This is made possible by a guanine-nucleotide exchange factor (GEF) domain primarily formed by a combination of IL-2 and IL-3 along with adjacent residues of the associated TM helices.
The G protein-coupled receptor is activated by an external signal in the form of a ligand or other signal mediator. This creates a conformational change in the receptor, causing activation of a G protein. Further effect depends on the type of G protein. G proteins are subsequently inactivated by GTPase activating proteins, known as RGS proteins.
GPCRs include one or more receptors for the following ligands:
sensory signal mediators (e.g., light and olfactory stimulatory molecules);
adenosine, bombesin, bradykinin, endothelin, γ-aminobutyric acid (GABA), hepatocyte growth factor (HGF), melanocortins, neuropeptide Y, opioid peptides, opsins, somatostatin, GH, tachykinins, members of the vasoactive intestinal peptide family, and vasopressin;
biogenic amines (e.g., dopamine, epinephrine, norepinephrine, histamine, serotonin, and melatonin);
glutamate (metabotropic effect);
glucagon;
acetylcholine (muscarinic effect);
chemokines;
lipid mediators of inflammation (e.g., prostaglandins, prostanoids, platelet-activating factor, and leukotrienes);
peptide hormones (e.g., calcitonin, C5a anaphylatoxin, follicle-stimulating hormone [FSH], gonadotropin-releasing hormone [GnRH], neurokinin, thyrotropin-releasing hormone [TRH], and oxytocin);
and endocannabinoids.
GPCRs that act as receptors for stimuli that have not yet been identified are known as orphan receptors.
Unlike other types of receptors that have been studied, in which ligands bind externally to the membrane, the ligands of GPCRs typically bind within the transmembrane domain. However, protease-activated receptors are activated by cleavage of part of their extracellular domain.
The transduction of the signal through the membrane by the receptor is not completely understood. It is known that in the inactive state, the GPCR is bound to a heterotrimeric G protein complex. Binding of an agonist to the GPCR results in a conformational change in the receptor that is transmitted to the bound Gα subunit of the heterotrimeric G protein. The activated Gα subunit exchanges GTP in place of GDP, which in turn triggers the dissociation of the Gα subunit from the Gβγ dimer and from the receptor. The dissociated Gα and Gβγ subunits interact with other intracellular proteins to continue the signal transduction cascade while the freed GPCR is able to rebind to another heterotrimeric G protein to form a new complex that is ready to initiate another round of signal transduction.
It is believed that a receptor molecule exists in a conformational equilibrium between active and inactive biophysical states. The binding of ligands to the receptor may shift the equilibrium toward the active receptor states. Three types of ligands exist: Agonists are ligands that shift the equilibrium in favour of active states; inverse agonists are ligands that shift the equilibrium in favour of inactive states; and neutral antagonists are ligands that do not affect the equilibrium. It is not yet known how exactly the active and inactive states differ from each other.
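The two-state picture above can be made concrete with a minimal equilibrium sketch. This is a deliberately simplified model, not quantitative pharmacology: the receptor population is split between active and inactive conformations, and each ligand class is reduced to a single factor that scales the equilibrium constant.

```python
# Toy two-state receptor model: fraction of receptors in the active
# state given a basal equilibrium constant K_eq = [active]/[inactive]
# and a ligand modelled as a multiplicative shift of that equilibrium.

def fraction_active(K_eq, ligand_factor=1.0):
    """ligand_factor > 1: agonist (favours active states);
    ligand_factor < 1: inverse agonist (favours inactive states);
    ligand_factor == 1: neutral antagonist (no shift)."""
    K = K_eq * ligand_factor
    return K / (1 + K)

basal   = fraction_active(0.1)                      # constitutive activity
agonist = fraction_active(0.1, ligand_factor=50.0)  # shifts toward active
inverse = fraction_active(0.1, ligand_factor=0.1)   # shifts toward inactive
neutral = fraction_active(0.1, ligand_factor=1.0)   # leaves equilibrium alone

assert agonist > basal > inverse
assert neutral == basal
```

Note that even the unliganded receptor has a small active fraction here, which is consistent with the idea that inverse agonists have something to suppress.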
When the receptor is inactive, the GEF domain may be bound to an also inactive α-subunit of a heterotrimeric G-protein. These "G-proteins" are a trimer of α, β, and γ subunits (known as Gα, Gβ, and Gγ, respectively) that is rendered inactive when reversibly bound to guanosine diphosphate (GDP) (or, alternatively, no guanine nucleotide) but active when bound to guanosine triphosphate (GTP). Upon receptor activation, the GEF domain, in turn, allosterically activates the G-protein by facilitating the exchange of a molecule of GDP for GTP at the G-protein's α-subunit. The cell maintains a 10:1 ratio of cytosolic GTP:GDP so exchange for GTP is ensured. At this point, the subunits of the G-protein dissociate from the receptor, as well as each other, to yield a Gα-GTP monomer and a tightly interacting Gβγ dimer, which are now free to modulate the activity of other intracellular proteins. The extent to which they may diffuse, however, is limited due to the palmitoylation of Gα and the presence of an isoprenoid moiety that has been covalently added to the C-termini of Gγ.
Because Gα also has slow GTP→GDP hydrolysis capability, the inactive form of the α-subunit (Gα-GDP) is eventually regenerated, thus allowing reassociation with a Gβγ dimer to form the "resting" G-protein, which can again bind to a GPCR and await activation. The rate of GTP hydrolysis is often accelerated due to the actions of another family of allosteric modulating proteins called Regulators of G-protein Signaling, or RGS proteins, which are a type of GTPase-Activating Protein, or GAP. In fact, many of the primary effector proteins (e.g., adenylate cyclases) that become activated/inactivated upon interaction with Gα-GTP also have GAP activity. Thus, even at this early stage in the process, GPCR-initiated signaling has the capacity for self-termination.
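The activation/termination cycle described in the last two paragraphs amounts to a small state machine: GEF-catalysed GDP-for-GTP exchange dissociates the trimer, and GTP hydrolysis (accelerated by RGS/GAP activity) regenerates the resting heterotrimer. The sketch below tracks only the states of that cycle, not kinetics or concentrations; the class and method names are invented for illustration.

```python
# Schematic state machine for the G-protein cycle.

class GProtein:
    def __init__(self):
        self.nucleotide = "GDP"   # resting heterotrimer binds GDP
        self.dissociated = False  # Gα and Gβγ are together at rest

    def receptor_gef_exchange(self):
        """Activated GPCR's GEF domain swaps GDP for GTP on Gα."""
        if self.nucleotide == "GDP":
            self.nucleotide = "GTP"
            self.dissociated = True  # Gα-GTP separates from the Gβγ dimer

    def hydrolyze_gtp(self):
        """Intrinsic GTPase activity (sped up by RGS proteins / GAPs)."""
        if self.nucleotide == "GTP":
            self.nucleotide = "GDP"
            self.dissociated = False  # Gα-GDP reassociates with Gβγ

g = GProtein()
g.receptor_gef_exchange()
assert g.nucleotide == "GTP" and g.dissociated      # signaling-competent
g.hydrolyze_gtp()
assert g.nucleotide == "GDP" and not g.dissociated  # "resting" trimer restored
```

The self-terminating character of the cycle falls out naturally: hydrolysis always returns the object to the state from which a GPCR can activate it again.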
GPCRs downstream signals have been shown to possibly interact with integrin signals, such as FAK. Integrin signaling will phosphorylate FAK, which can then decrease GPCR Gαs activity.
If a receptor in an active state encounters a G protein, it may activate it. Some evidence suggests that receptors and G proteins are actually pre-coupled. For example, binding of G proteins to receptors affects the receptor's affinity for ligands. Activated G proteins are bound to GTP.
Further signal transduction depends on the type of G protein. The enzyme adenylate cyclase is an example of a cellular protein that can be regulated by a G protein, in this case the G protein Gs. Adenylate cyclase activity is activated when it binds to a subunit of the activated G protein. Activation of adenylate cyclase ends when the G protein returns to the GDP-bound state.
Adenylate cyclases (of which 9 membrane-bound and one cytosolic forms are known in humans) may also be activated or inhibited in other ways (e.g., Ca2+/Calmodulin binding), which can modify the activity of these enzymes in an additive or synergistic fashion along with the G proteins.
The signaling pathways activated through a GPCR are limited by the primary sequence and tertiary structure of the GPCR itself but ultimately determined by the particular conformation stabilized by a particular ligand, as well as the availability of transducer molecules. Currently, GPCRs are considered to utilize two primary types of transducers: G-proteins and β-arrestins. Because β-arrs have high affinity only for the phosphorylated form of most GPCRs, the majority of signaling is ultimately dependent upon G-protein activation. However, the possibility for interaction does allow for G-protein-independent signaling to occur.
There are three main G-protein-mediated signaling pathways, mediated by four sub-classes of G-proteins distinguished from each other by sequence homology (Gαs, Gαi/o, Gαq/11, and Gα12/13). Each sub-class of G-protein consists of multiple proteins, each the product of multiple genes or splice variations that may imbue them with differences ranging from subtle to distinct with regard to signaling properties, but in general they appear reasonably grouped into four classes. Because the signal transducing properties of the various possible βγ combinations do not appear to radically differ from one another, these classes are defined according to the isoform of their α-subunit.
While most GPCRs are capable of activating more than one Gα-subtype, they also show a preference for one subtype over another. When the subtype activated depends on the ligand that is bound to the GPCR, this is called functional selectivity (also known as agonist-directed trafficking, or conformation-specific agonism). However, the binding of any single particular agonist may also initiate activation of multiple different G-proteins, as it may be capable of stabilizing more than one conformation of the GPCR's GEF domain, even over the course of a single interaction. In addition, a conformation that preferably activates one isoform of Gα may activate another if the preferred is less available. Furthermore, feedback pathways may result in receptor modifications (e.g., phosphorylation) that alter the G-protein preference. Regardless of these various nuances, the GPCR's preferred coupling partner is usually defined according to the G-protein most obviously activated by the endogenous ligand under most physiological or experimental conditions.
The above descriptions ignore the effects of Gβγ–signalling, which can also be important, in particular in the case of activated Gαi/o-coupled GPCRs. The primary effectors of Gβγ are various ion channels, such as G-protein-regulated inwardly rectifying K+ channels (GIRKs), P/Q- and N-type voltage-gated Ca2+ channels, as well as some isoforms of AC and PLC, along with some phosphoinositide-3-kinase (PI3K) isoforms.
Although they are classically thought of as working only together, GPCRs may signal through G-protein-independent mechanisms, and heterotrimeric G-proteins may play functional roles independent of GPCRs. GPCRs may signal independently through many proteins already mentioned for their roles in G-protein-dependent signaling such as β-arrs, GRKs, and Srcs. Such signaling has been shown to be physiologically relevant; for example, β-arrestin signaling mediated by the chemokine receptor CXCR3 was necessary for full-efficacy chemotaxis of activated T cells. In addition, further scaffolding proteins involved in subcellular localization of GPCRs (e.g., PDZ-domain-containing proteins) may also act as signal transducers. Most often the effector is a member of the MAPK family.
In the late 1990s, evidence began accumulating to suggest that some GPCRs are able to signal without G proteins. The ERK2 mitogen-activated protein kinase, a key signal transduction mediator downstream of receptor activation in many pathways, has been shown to be activated in response to cAMP-mediated receptor activation in the slime mold "D. discoideum" despite the absence of the associated G protein α- and β-subunits.
In mammalian cells, the much-studied β2-adrenoceptor has been demonstrated to activate the ERK2 pathway after arrestin-mediated uncoupling of G-protein-mediated signaling. Therefore, it seems likely that some mechanisms previously believed related purely to receptor desensitisation are actually examples of receptors switching their signaling pathway, rather than simply being switched off.
In kidney cells, the bradykinin receptor B2 has been shown to interact directly with a protein tyrosine phosphatase. The presence of a tyrosine-phosphorylated ITIM (immunoreceptor tyrosine-based inhibitory motif) sequence in the B2 receptor is necessary to mediate this interaction and subsequently the antiproliferative effect of bradykinin.
Although it is a relatively immature area of research, it appears that heterotrimeric G-proteins may also take part in non-GPCR signaling. There is evidence for roles as signal transducers in nearly all other types of receptor-mediated signaling, including integrins, receptor tyrosine kinases (RTKs), cytokine receptors (JAK/STATs), as well as modulation of various other "accessory" proteins such as GEFs, guanine-nucleotide dissociation inhibitors (GDIs) and protein phosphatases. There may even be specific proteins of these classes whose primary function is as part of GPCR-independent pathways, termed activators of G-protein signalling (AGS). Both the ubiquity of these interactions and the importance of Gα vs. Gβγ subunits to these processes are still unclear.
There are two principal signal transduction pathways involving the G protein-linked receptors: the cAMP signal pathway and the phosphatidylinositol signal pathway.
The cAMP signal transduction pathway contains five main components: stimulative hormone receptor (Rs) or inhibitory hormone receptor (Ri); stimulative regulative G-protein (Gs) or inhibitory regulative G-protein (Gi); adenylyl cyclase; protein kinase A (PKA); and cAMP phosphodiesterase.
Stimulative hormone receptor (Rs) is a receptor that can bind with stimulative signal molecules, while inhibitory hormone receptor (Ri) is a receptor that can bind with inhibitory signal molecules.
Stimulative regulative G-protein (Gs) is a G-protein linked to the stimulative hormone receptor (Rs); upon activation, its α subunit can stimulate the activity of an enzyme or other intracellular metabolism. Conversely, inhibitory regulative G-protein (Gi) is linked to an inhibitory hormone receptor, and upon activation its α subunit can inhibit the activity of an enzyme or other intracellular metabolism.
Adenylyl cyclase is a 12-transmembrane glycoprotein that catalyzes ATP to form cAMP with the help of cofactor Mg2+ or Mn2+. The cAMP produced is a second messenger in cellular metabolism and is an allosteric activator of protein kinase A.
Protein kinase A is an important enzyme in cell metabolism due to its ability to regulate cell metabolism by phosphorylating specific committed enzymes in the metabolic pathway. It can also regulate specific gene expression, cellular secretion, and membrane permeability. The enzyme contains two catalytic subunits and two regulatory subunits. When there is no cAMP, the complex is inactive. When cAMP binds to the regulatory subunits, their conformation is altered, causing the dissociation of the regulatory subunits, which activates protein kinase A and allows further biological effects.
These signals then can be terminated by cAMP phosphodiesterase, which is an enzyme that degrades cAMP to 5'-AMP and inactivates protein kinase A.
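The balance between adenylyl cyclase production and phosphodiesterase degradation described above can be reduced to a single rate equation. The following sketch illustrates this with a simple Euler integration; the rate constants `k_synth` and `k_deg` are hypothetical round numbers chosen for illustration, not measured values.

```python
# Minimal kinetic sketch of the cAMP branch described above.
# All rate constants are hypothetical illustrative values.

def simulate_camp(k_synth, k_deg, dt=0.001, t_end=50.0):
    """Euler integration of d[cAMP]/dt = k_synth - k_deg * [cAMP].

    k_synth : cAMP production by active adenylyl cyclase (uM/s)
    k_deg   : first-order degradation by phosphodiesterase (1/s)
    """
    camp = 0.0
    for _ in range(int(t_end / dt)):
        camp += (k_synth - k_deg * camp) * dt
    return camp

# With synthesis at 1.0 uM/s and degradation at 0.5 /s, the level
# settles near the analytic steady state k_synth / k_deg = 2.0 uM.
print(round(simulate_camp(k_synth=1.0, k_deg=0.5), 2))  # prints 2.0
```

The steady state emerges from the tug-of-war between cyclase and phosphodiesterase: when phosphodiesterase activity (`k_deg`) rises, the cAMP level, and hence PKA activity, falls.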
In the phosphatidylinositol signal pathway, the extracellular signal molecule binds a cell-surface receptor coupled to the Gq protein and activates phospholipase C, which is located on the plasma membrane. The lipase hydrolyzes phosphatidylinositol 4,5-bisphosphate (PIP2) into two second messengers: inositol 1,4,5-trisphosphate (IP3) and diacylglycerol (DAG). IP3 binds with the IP3 receptor in the membrane of the smooth endoplasmic reticulum and mitochondria to open Ca2+ channels. DAG helps activate protein kinase C (PKC), which phosphorylates many other proteins, changing their catalytic activities and leading to cellular responses.
The effects of Ca2+ are also remarkable: it cooperates with DAG in activating PKC and can activate the CaM kinase pathway, in which the calcium-modulated protein calmodulin (CaM) binds Ca2+, undergoes a conformational change, and activates CaM kinase II. CaM kinase II has the unique ability to increase its binding affinity for CaM by autophosphorylation, making CaM unavailable for the activation of other enzymes. The kinase then phosphorylates target enzymes, regulating their activities. The two signal pathways are connected by Ca2+-CaM, which is also a regulatory subunit of adenylyl cyclase and phosphodiesterase in the cAMP signal pathway.
GPCRs become desensitized when exposed to their ligand for a long period of time. There are two recognized forms of desensitization: 1) homologous desensitization, in which the activated GPCR is downregulated; and 2) heterologous desensitization, wherein the activated GPCR causes downregulation of a different GPCR. The key reaction of this downregulation is the phosphorylation of the intracellular (or cytoplasmic) receptor domain by protein kinases.
Cyclic AMP-dependent protein kinases (protein kinase A) are activated by the signal chain coming from the G protein (that was activated by the receptor) via adenylate cyclase and cyclic AMP (cAMP). In a feedback mechanism, these activated kinases phosphorylate the receptor. The longer the receptor remains active, the more kinases are activated and the more receptors are phosphorylated. In β2-adrenoceptors, this phosphorylation results in the switching of the coupling from the Gs class of G-protein to the Gi class. cAMP-dependent, PKA-mediated phosphorylation can cause heterologous desensitisation in receptors other than those activated.
The G protein-coupled receptor kinases (GRKs) are protein kinases that phosphorylate only active GPCRs, and they are key modulators of GPCR signaling. They constitute a family of seven mammalian serine-threonine protein kinases that phosphorylate agonist-bound receptors. GRK-mediated receptor phosphorylation rapidly initiates profound impairment of receptor signaling and desensitization. The activity and subcellular targeting of GRKs are tightly regulated by interactions with receptor domains, G protein subunits, lipids, anchoring proteins and calcium-sensitive proteins.
Phosphorylation of the receptor can have two consequences:
As mentioned above, G-proteins may terminate their own activation through their intrinsic GTP→GDP hydrolysis capability. However, this reaction proceeds at a slow rate (≈0.02 times/sec) and, thus, it would take around 50 seconds for any single G-protein to deactivate if other factors did not come into play. Indeed, there are around 30 isoforms of RGS proteins that, when bound to Gα through their GAP domain, accelerate the hydrolysis rate to ≈30 times/sec. This 1500-fold increase in rate allows the cell to respond to external signals with high speed, as well as spatial resolution due to the limited amount of second messenger that can be generated and the limited distance a G-protein can diffuse in 0.03 seconds. For the most part, the RGS proteins are promiscuous in their ability to deactivate G-proteins, and which RGS is involved in a given signaling pathway seems to be determined more by the tissue and GPCR involved than anything else. RGS proteins also have the additional function of increasing the rate of GTP-GDP exchange at GPCRs (i.e., acting as a sort of co-GEF), further contributing to the time resolution of GPCR signaling.
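The timescales above follow directly from the quoted rate constants, since the mean lifetime of the active state is the reciprocal of the hydrolysis rate. A quick arithmetic check (values taken directly from the text):

```python
# Back-of-the-envelope check of the deactivation rates quoted above.
intrinsic_rate = 0.02   # intrinsic GTP hydrolysis, events per second
rgs_rate = 30.0         # hydrolysis with an RGS protein bound, per second

# Mean lifetime of the active state = 1 / rate.
print(round(1 / intrinsic_rate))         # 50   -- seconds without RGS
print(round(1 / rgs_rate, 3))            # 0.033 -- seconds with RGS
print(round(rgs_rate / intrinsic_rate))  # 1500 -- fold acceleration
```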
In addition, the GPCR may be desensitized itself. This can occur as:
Once β-arrestin is bound to a GPCR, it undergoes a conformational change allowing it to serve as a scaffolding protein for an adaptor complex termed AP-2, which in turn recruits another protein called clathrin. If enough receptors in the local area recruit clathrin in this manner, they aggregate and the membrane buds inward as a result of interactions between the clathrin molecules, forming a clathrin-coated pit. Once the pit has been pinched off the plasma membrane by the actions of two other proteins, amphiphysin and dynamin, it becomes an endocytic vesicle. At this point, the adapter molecules and clathrin have dissociated, and the receptor is either trafficked back to the plasma membrane or targeted to lysosomes for degradation.
At any point in this process, the β-arrestins may also recruit other proteins, such as the non-receptor tyrosine kinase (nRTK) c-SRC, which may activate ERK1/2 or other mitogen-activated protein kinase (MAPK) signaling through, for example, phosphorylation of the small GTPase Ras, or may recruit the proteins of the ERK cascade directly (i.e., Raf-1, MEK, ERK-1/2), at which point signaling is initiated due to their close proximity to one another. Other targets of c-SRC include the dynamin molecules involved in endocytosis. Dynamins polymerize around the neck of an incoming vesicle, and their phosphorylation by c-SRC provides the energy necessary for the conformational change allowing the final "pinching off" from the membrane.
Receptor desensitization is mediated through a combination of phosphorylation, β-arr binding, and endocytosis, as described above. Downregulation occurs when an endocytosed receptor is embedded in an endosome that is trafficked to merge with an organelle called a lysosome. Because lysosomal membranes are rich in proton pumps, their interiors have a low pH (≈4.8 vs. the ≈7.2 of the cytosol), which acts to denature the GPCRs. In addition, lysosomes contain many degradative enzymes, including proteases, which can function only at such low pH, so the peptide bonds joining the residues of the GPCR may be cleaved. Whether a given receptor is trafficked to a lysosome, detained in endosomes, or trafficked back to the plasma membrane depends on a variety of factors, including receptor type and the magnitude of the signal.
GPCR regulation is additionally mediated by gene transcription factors. These factors can increase or decrease gene transcription and thus increase or decrease the generation of new receptors (up- or down-regulation) that travel to the cell membrane.
G-protein-coupled receptor oligomerisation is a widespread phenomenon. One of the best-studied examples is the metabotropic GABAB receptor. This so-called constitutive receptor is formed by heterodimerization of GABABR1 and GABABR2 subunits. Expression of the GABABR1 without the GABABR2 in heterologous systems leads to retention of the subunit in the endoplasmic reticulum. Expression of the GABABR2 subunit alone, meanwhile, leads to surface expression of the subunit, although with no functional activity (i.e., the receptor does not bind agonist and cannot initiate a response following exposure to agonist). Expression of the two subunits together leads to plasma membrane expression of functional receptor. It has been shown that GABABR2 binding to GABABR1 causes masking of a retention signal of functional receptors.
Signal transduction mediated by the superfamily of GPCRs dates back to the origin of multicellularity. Mammalian-like GPCRs are found in fungi, and have been classified according to the GRAFS classification system based on GPCR fingerprints. Identification of the superfamily members across the eukaryotic domain, and comparison of the family-specific motifs, have shown that the superfamily of GPCRs has a common origin. Characteristic motifs indicate that three of the five GRAFS families, "Rhodopsin", "Adhesion", and "Frizzled", evolved from the "Dictyostelium discoideum" cAMP receptors before the split of Opisthokonts. Later, the "Secretin" family evolved from the "Adhesion" GPCR receptor family before the split of nematodes. Insect GPCRs appear to be in their own group, and Taste2 is identified as descending from "Rhodopsin". Note that the "Secretin"/"Adhesion" split is based on presumed function rather than signature, as the classical Class B (7tm_2) is used to identify both in the studies.
GTPase
GTPases are a large family of hydrolase enzymes that bind to the nucleotide guanosine triphosphate (GTP) and hydrolyze it to guanosine diphosphate (GDP). The GTP binding and hydrolysis takes place in the highly conserved G domain common to many GTPases.
GTPases function as molecular switches or timers in many fundamental cellular processes.
Examples of these roles include:
GTPases are active when bound to GTP and inactive when bound to GDP. In the generalized receptor-transducer-effector signaling model of Martin Rodbell, signaling GTPases act as transducers to regulate the activity of effector proteins. This inactive-active switch is due to conformational changes distinguishing the two forms, particularly of the "switch" regions, which in the active state are able to make protein-protein contacts with partner proteins, altering the function of these effectors.
Hydrolysis of GTP bound to an (active) GTPase leads to deactivation of the signaling/timer function of the enzyme. The hydrolysis of the third (γ) phosphate of GTP to create guanosine diphosphate (GDP) and Pi, inorganic phosphate, occurs by the SN2 mechanism (see nucleophilic substitution) via a pentavalent transition state and is dependent on the presence of a magnesium ion Mg2+.
GTPase activity serves as the shutoff mechanism for the signaling roles of GTPases by returning the active, GTP-bound protein to the inactive, GDP-bound state. Most “GTPases” have functional GTPase activity, allowing them to remain active (that is, bound to GTP) only for a short time before deactivating themselves by converting bound GTP to bound GDP. However, many GTPases also use accessory proteins named GTPase-activating proteins or GAPs to accelerate their GTPase activity. This further limits the active lifetime of signaling GTPases. Some GTPases have little to no intrinsic GTPase activity, and are entirely dependent on GAP proteins for deactivation (such as the ADP-ribosylation factor or ARF family of small GTP-binding proteins that are involved in vesicle-mediated transport within cells).
To become activated, GTPases must bind to GTP. Since mechanisms to convert bound GDP directly into GTP are unknown, the inactive GTPases are induced to release bound GDP by the action of distinct regulatory proteins called guanine nucleotide exchange factors or GEFs. The nucleotide-free GTPase protein quickly rebinds GTP, which is in far excess in healthy cells over GDP, allowing the GTPase to enter the active conformation state and promote its effects on the cell. For many GTPases, activation of GEFs is the primary control mechanism in the stimulation of the GTPase signaling functions, although GAPs also play an important role. For heterotrimeric G proteins and many small GTP-binding proteins, GEF activity is stimulated by cell surface receptors in response to signals outside the cell (for heterotrimeric G proteins, the G protein-coupled receptors are themselves GEFs, while for receptor-activated small GTPases their GEFs are distinct from cell surface receptors).
Some GTPases also bind to accessory proteins called guanine nucleotide dissociation inhibitors or GDIs that stabilize the inactive, GDP-bound state.
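The regulatory logic described in the last few paragraphs — GEFs switching the protein on by promoting GDP release and GTP rebinding, intrinsic or GAP-accelerated hydrolysis switching it off, and GDIs locking the off state — can be sketched as a toy state machine. The class and method names below are illustrative, not a model of any specific GTPase.

```python
# Toy state machine for the GTPase switch cycle described above.
class GTPase:
    def __init__(self):
        self.nucleotide = "GDP"   # inactive by default
        self.gdi_bound = False

    @property
    def active(self):
        return self.nucleotide == "GTP"

    def gef_exchange(self):
        """GEF releases GDP; abundant cellular GTP rebinds at once.
        A bound GDI blocks nucleotide exchange."""
        if not self.gdi_bound:
            self.nucleotide = "GTP"

    def hydrolyze(self):
        """Intrinsic or GAP-accelerated GTP -> GDP + Pi."""
        if self.active:
            self.nucleotide = "GDP"

    def bind_gdi(self):
        """GDI stabilizes the inactive, GDP-bound state."""
        if not self.active:
            self.gdi_bound = True

g = GTPase()
g.gef_exchange(); print(g.active)   # True: switched on by a GEF
g.hydrolyze();    print(g.active)   # False: switched off by hydrolysis
g.bind_gdi(); g.gef_exchange()
print(g.active)                     # False: GDI blocks reactivation
```

The sketch makes the "switch" metaphor concrete: the nucleotide state is the switch position, and GEFs, GAPs and GDIs are the three hands that can move or lock it.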
The amount of active GTPase can be changed in several ways:
In most GTPases, the specificity for the base guanine versus other nucleotides is imparted by the base-recognition motif, which has the consensus sequence [N/T]KXD.
Note that while tubulin and related structural proteins also bind and hydrolyze GTP as part of their function to form intracellular tubules, these proteins utilize a distinct tubulin domain that is unrelated to the GTPase domain used by signaling GTPases.
Heterotrimeric G protein complexes are composed of three distinct protein subunits named "alpha" (α), "beta" (β) and "gamma" (γ) subunits. The alpha subunits contain the GTP binding/GTPase domain flanked by long regulatory regions, while the beta and gamma subunits form a stable dimeric complex referred to as the beta-gamma complex. When activated, a heterotrimeric G protein dissociates into an activated, GTP-bound alpha subunit and a separate beta-gamma complex, each of which can perform distinct signaling roles. The α and γ subunits are modified by lipid anchors to increase their association with the inner leaflet of the plasma membrane.
Heterotrimeric G proteins act as the transducers of G protein-coupled receptors, coupling receptor activation to downstream signaling effectors and second messengers. In unstimulated cells, heterotrimeric G proteins are assembled as the GDP bound, inactive trimer (Gα-GDP-Gβγ complex). Upon receptor activation, the activated receptor intracellular domain acts as GEF to release GDP from the G protein complex and to promote binding of GTP in its place. The GTP-bound complex undergoes an activating conformation shift that dissociates it from the receptor and also breaks the complex into its component G protein alpha and beta-gamma subunit components. While these activated G protein subunits are now free to activate their effectors, the active receptor is likewise free to activate additional G proteins – this allows catalytic activation and amplification where one receptor may activate many G proteins.
G protein signaling is terminated by hydrolysis of bound GTP to bound GDP. This can occur through the intrinsic GTPase activity of the α subunit, or be accelerated by separate regulatory proteins that act as GTPase-activating proteins (GAPs), such as members of the Regulator of G protein signaling (RGS) family. The speed of the hydrolysis reaction works as an internal clock limiting the length of the signal. Once Gα is returned to being GDP bound, the two parts of the heterotrimer re-associate to the original, inactive state.
The heterotrimeric G proteins can be classified by sequence homology of the α unit and by their functional targets into four families: Gs family, Gi family, Gq family and G12 family. Each of these Gα protein families contains multiple members, such that the mammals have 16 distinct α-subunit genes. The Gβ and Gγ are likewise composed of many members, increasing heterotrimer structural and functional diversity. Among the target molecules of the specific G proteins are the second messenger-generating enzymes adenylyl cyclase and phospholipase C, as well as various ion channels.
Small GTPases function as monomers, have a molecular weight of about 21 kilodaltons, and consist primarily of the GTPase domain. They are also called small or monomeric guanine nucleotide-binding regulatory proteins, small or monomeric GTP-binding proteins, or small or monomeric G-proteins, and because they have significant homology with the first-identified such protein, named Ras, they are also referred to as Ras superfamily GTPases. Small GTPases generally serve as molecular switches and signal transducers for a wide variety of cellular signaling events, often involving membranes, vesicles or the cytoskeleton. According to their primary amino acid sequences and biochemical properties, the many Ras superfamily small GTPases are divided into five subfamilies with distinct functions: Ras, Rho ("Ras-homology"), Rab, Arf and Ran. While many small GTPases are activated by their GEFs in response to intracellular signals emanating from cell surface receptors (particularly growth factor receptors), regulatory GEFs for many other small GTPases are activated in response to intrinsic cell signals, not cell surface (external) signals.
Multiple translation factor family GTPases play important roles in initiation, elongation and termination of protein biosynthesis.
For a discussion of Translocation factors and the role of GTP, see signal recognition particle (SRP).
See dynamin as a prototype for large monomeric GTPases.
Galla Placidia
Galla Placidia (388-89 / 392-93 – 27 November 450), daughter of the Roman emperor Theodosius I, was regent to Valentinian III from 423 until his majority in 437, and a major force in Roman politics for most of her life. She was queen consort to Ataulf, king of the Visigoths from 414 until his death in 415, and briefly empress consort to Constantius III in 421.
Placidia was the daughter of Theodosius I and his second wife, Galla, who was herself daughter of Valentinian I and his second wife, Justina. Galla Placidia's date of birth is not recorded, but she must have been born either in the period 388-89 or 392-93. Between these dates, her father was in Italy following his campaign against the usurper Magnus Maximus, while her mother remained in Constantinople. A surviving letter from Bishop Ambrose of Milan, dated 390, refers to a younger son of Theodosius named Gratianus, who died in infancy; as Gratian must have been born in the period 388-89, it is most probable that Galla Placidia was born during the second period, 392-93. Placidia's mother Galla died some time in 394, perhaps giving birth to a stillborn son. Placidia was a younger, paternal half-sister of emperors Arcadius and Honorius. Her older half-sister Pulcheria predeceased her parents according to Gregory of Nyssa, placing the death of Pulcheria prior to the death of Aelia Flaccilla, the first wife of Theodosius I, in 385. Coins issued in Placidia's honour in Constantinople after 425 give her name as AELIA PLACIDIA; this may have been intended to integrate Placidia with the eastern dynasty of Theodosius II. There is no evidence that the name Aelia was ever used in the west, or that it formed part of Placidia's official nomenclature.
Placidia was granted her own household by her father in the early 390s and was thus financially independent while underage. She was summoned to the court of her father in Mediolanum (Milan) during 394, and was present at Theodosius' death on January 17, 395. She was granted the title of "nobilissima puella" ("most noble girl") during her childhood.
Placidia spent most of her early years in the household of Stilicho and his wife, Serena. She is presumed to have learned weaving and embroidery. She might have also been given a classical education. Serena was a first cousin of Arcadius, Honorius and Placidia. The poem "In Praise of Serena" by Claudian and the "Historia Nova" by Zosimus clarify that Serena's father was an elder Honorius, a brother to Theodosius I. According to ""De Consulatu Stilichonis"" by Claudian, Placidia was betrothed to Eucherius, only known son of Stilicho and Serena. Her scheduled marriage is mentioned in the text as the third union between Stilicho's family and the Theodosian dynasty, following those of Stilicho to Serena and Maria, their daughter, to Honorius.
Stilicho was the magister militum of the Western Roman Empire. He was the only known person to hold the rank of "magister militum in praesenti" from 394 to 408 in both the Western and the Eastern Roman Empire. He was also titled "magister equitum et peditum" ("Master of the Horse and of Foot"), placing him in charge of both the cavalry and infantry forces of the Western Roman Empire. In 408, Arcadius died and was succeeded by his son Theodosius II, only seven years old. Stilicho planned to proceed to Constantinople and "undertake the management of the affairs of Theodosius", convincing Honorius not to travel to the East himself. Shortly after, Olympius, "magister scrinii", attempted to convince Honorius that Stilicho was in fact conspiring to depose Theodosius II, to replace him with Eucherius. Olympius proceeded to lead a military coup d'état which left him in control of Honorius and his court. Stilicho was arrested and executed on August 22, 408. Eucherius sought refuge in Rome but was arrested there and executed by the eunuchs Arsacius and Tarentius, on imperial orders. Honorius appointed Tarentius imperial chamberlain, and gave the next post under him to Arsacius.
In the disturbances that followed the fall of Stilicho, the wives and children of the foederati living in the cities of Italy were slain. Most of the foederati, who were considered loyalists of Stilicho, joined the forces of Alaric I, King of the Visigoths. Alaric led them to Rome and began a blockade of the city, which was under siege, with minor interruptions, from autumn 408 to August 24, 410. Zosimus records that Placidia was within the city during the siege. When Serena was accused of conspiring with Alaric, "the whole senate therefore, with Placidia, uterine sister to the emperor, thought it proper that she should suffer death".
Prior to the fall of Rome, Placidia was captured by Alaric. She followed the Visigoths in their move from the Italian Peninsula to Gaul in 412. Their ruler Ataulf, having succeeded Alaric, entered an alliance with Honorius against Jovinus and Sebastianus, rival Western Roman emperors located in Gaul. He managed to defeat and execute both Gallo-Roman emperors in 413.
After the heads of Sebastianus and Jovinus arrived at Honorius' court in Ravenna in late August, to be forwarded for display among other usurpers on the walls of Carthage, relations between Ataulf and Honorius improved sufficiently for Ataulf to cement them by marrying Galla Placidia at Narbonne on January 1, 414. The nuptials were celebrated with high Roman festivities and magnificent gifts. Priscus Attalus gave the wedding speech, a classical epithalamium. The marriage was recorded by Hydatius. The historian Jordanes states that they married earlier, in 411 at Forum Livii (Forlì). Jordanes's date may actually be when she and the Gothic king first became more than captor and captive.
Placidia and Ataulf had a single known son, Theodosius. He was born in Barcelona by the end of 414. Theodosius died early in the following year, thus eliminating an opportunity for a Romano-Visigothic line. Years later the corpse was exhumed and reburied in the imperial mausoleum in Old St. Peter's Basilica, Rome. In Hispania, Ataulf imprudently accepted into his service a man identified as "Dubius" or "Eberwolf", a former follower of Sarus. Sarus was a Germanic chieftain who was killed while fighting under Jovinus and Sebastianus. His follower harbored a secret desire to avenge the death of his patron. And so, in the palace at Barcelona, the man brought Ataulf's reign to a sudden end by killing him while he bathed in August/September, 415.
The Amali faction proceeded to proclaim Sigeric, a brother of Sarus, as the next king of the Visigoths. According to "The History of the Decline and Fall of the Roman Empire" by Edward Gibbon, the first act of Sigeric's reign "was the inhuman murder" of Ataulf's six children from a former marriage "whom he tore, without pity, from the feeble arms of a venerable bishop" (Sigesar, bishop of the Goths). As for Galla Placidia, as Ataulf's widow, she was "treated with cruel and wanton insult" by being forced to walk more than twelve miles on foot among the crowd of captives driven ahead of the mounted Sigeric. Seeing the noble widow's sufferings, however, became one of the factors that roused indignant opponents of the usurper, who quickly assassinated Sigeric and replaced him with Wallia, Ataulf's relative.
According to the "Chronicon Albeldense", included in the "Códice de Roda", Wallia was desperate for food supplies. He surrendered to Constantius III, at the time magister militum of Honorius, negotiating terms giving foederati status for the Visigoths. Placidia was returned to Honorius as part of the peace treaty. Her brother Honorius forced her into marriage to Constantius III on January 1, 417. Their daughter Justa Grata Honoria was probably born in 417 or 418. The history of Paul the Deacon mentions her first among the children of the marriage, suggesting that she was the eldest. Their son Valentinian III was born July 2, 419.
Placidia intervened in the succession crisis following the death of Pope Zosimus on December 26, 418. Two factions of the Roman clergy had proceeded to elect their own popes, the first electing Eulalius (December 27) and the other electing Boniface I (December 28). They acted as rival popes, both in Rome, and their factions plunged the city into tumult. Symmachus, Prefect of Rome, sent his report to the imperial court at Ravenna, requesting an imperial decision on the matter. Placidia and, presumably, Constantius petitioned the emperor in favor of Eulalius. This was arguably the first intervention by an Emperor in the Papal election.
Honorius initially confirmed Eulalius as the legitimate pope. As this failed to put an end to the controversy, Honorius called a synod of Italian bishops at Ravenna to decide the matter. The synod met from February to March 419 but failed to reach a conclusion. Honorius called a second synod in May, this time including Gaulish and African bishops. In the meantime, the two rival popes were ordered to leave Rome. As Easter approached, however, Eulalius returned to the city and attempted to seize the Basilica of St. John Lateran in order to "preside at the paschal ceremonies". Imperial troops managed to repel him, and on Easter (March 30, 419) the ceremonies were led by Achilleus, Bishop of Spoleto. The conflict cost Eulalius the imperial favor, and Boniface was proclaimed the legitimate pope as of April 3, 419, returning to Rome a week later. Placidia had personally written to the African bishops, summoning them to the second synod. Three of her letters are known to have survived.
On February 8, 421, Constantius was proclaimed an Augustus, becoming co-ruler with the childless Honorius. Placidia was proclaimed an Augusta. She was the only Empress in the West, since Honorius had divorced his second wife Thermantia in 408 and had never remarried. Neither title was recognised by Theodosius II, the Eastern Roman Emperor. Constantius reportedly complained about the loss of personal freedom and privacy that came with the imperial office. He died of an illness on September 2, 421.
Galla Placidia herself was now forced from the Western Empire. Though the motivation for this remains unclear, the public issue was the increasingly scandalous public caresses she received from her own brother Honorius—this at least was the interpretation of Olympiodorus of Thebes, a historian used as a source by Zosimus, Sozomen and probably Philostorgius, as J.F. Matthews has postulated. Gibbon had a different opinion: "The power of Placidia; and the indecent familiarity of her brother, which might be no more than the symptoms of a childish affection, were universally attributed to incestuous love."
According to Gibbon, "On a sudden, by some base intrigues of a steward and a nurse, this excessive fondness was converted into an irreconcilable quarrel: the debates of the emperor and his sister were not long confined within the walls of the palace; and as the Gothic soldiers adhered to their queen, the city of Ravenna was agitated with bloody and dangerous tumults, which could only be appeased by the forced or voluntary retreat of Placidia and her children. The royal exiles landed at Constantinople, soon after the marriage of Theodosius, during the festival of the Persian victories. They were treated with kindness and magnificence; but as the statues of the emperor Constantius had been rejected by the Eastern court, the title of Augusta could not decently be allowed to his widow." The passage places the arrival of Placidia and her children after the marriage of Theodosius II to Aelia Eudocia, known to have occurred on June 7, 421. The "Persian victories" mentioned were probably victory celebrations over the brief Roman–Sasanian War (421–422) under the respective leadership of Theodosius II and Bahram V of the Sasanian Empire. The "Saracens of Hira" were the Lakhmids of al-Hirah, a pre-Islamic Arab state that was located in what is now Iraq.
On August 15, 423, Honorius died of edema, perhaps pulmonary edema. With no member of the Theodosian dynasty present at Ravenna to claim the throne, Theodosius II was expected to nominate a Western co-emperor. However, Theodosius hesitated and the decision was delayed. Taking advantage of the power vacuum, Castinus the Patrician proceeded to become a kingmaker. He declared Joannes, the "primicerius notariorum" "chief notary" (the head of the civil service), to be the new Western Roman Emperor. Among their supporters was Flavius Aetius. Aetius was a son of Flavius Gaudentius, magister militum, and Aurelia. Joannes' rule was accepted in the provinces of Italia, Gaul and Hispania, but not in the province of Africa.
Theodosius II reacted by preparing Valentinian III for eventual promotion to the imperial office. In 423/424, Valentinian was named "nobilissimus". In 424, Valentinian was betrothed to Licinia Eudoxia, his first cousin once removed. She was a daughter of Theodosius II and Aelia Eudocia. The year of their betrothal was recorded by Marcellinus Comes. At the time of their betrothal, Valentinian was approximately four years old, Licinia only two. Gibbon attributes the betrothal to "the agreement of the three females who governed the Roman world", meaning Placidia and her nieces Eudocia and Pulcheria. In the same year, Valentinian was proclaimed a Caesar in the Eastern court.
The campaign against Joannes also started in the same year. Forces of the Eastern Roman army gathered at Thessaloniki, and were placed under the general command of Ardaburius, the victorious general of the Roman-Persian War. The invasion force was to cross the Adriatic Sea by two routes. Aspar, son of Ardaburius, led the cavalry by land, following the coast of the Adriatic from the Western Balkans to Northern Italy. Placidia and Valentinian joined this force. Ardaburius and the infantry boarded ships of the Eastern Roman navy in an attempt to reach Ravenna by sea. Aspar marched his forces to Aquileia, taking the city by surprise and with virtually no resistance. The fleet, on the other hand, was dispersed by a storm. Ardaburius and two of his galleys were captured by forces loyal to Joannes and were held prisoners in Ravenna.
Ardaburius was treated well by Joannes, who probably intended to negotiate with Theodosius for an end to the hostilities. The prisoner was allowed the "courteous freedom" of walking the court and streets of Ravenna during his captivity. He took advantage of this privilege to come into contact with the forces of Joannes and convinced some of them to defect to Theodosius' side. The conspirators contacted Aspar and beckoned him to Ravenna. A shepherd led Aspar's cavalry force through the marshes of the Po to the gates of Ravenna; with the besiegers outside the walls and the defectors within, the city was quickly captured. Joannes was taken and his right hand cut off; he was then mounted on a donkey and paraded through the streets, and finally beheaded in the hippodrome of Aquileia.
With Joannes dead, Valentinian was officially proclaimed the new Augustus of the Western Roman Empire on October 23, 425, in the presence of the Roman Senate. Three days after Joannes' death, Aetius brought reinforcements for his army, a reported sixty thousand Huns from across the Danube. After some skirmishing, Placidia and Aetius came to an agreement that established the political landscape of the Western Roman Empire for the next thirty years. The Huns were paid off and sent home, while Aetius received the position of "magister militum per Gallias" (commander-in-chief of the Roman army in Gaul).
Galla Placidia was regent of the Western Roman Empire from 425 to 437, her regency ending when Valentinian reached his eighteenth birthday on July 2, 437. Among her early supporters was Bonifacius, governor of the Diocese of Africa. Aetius, his rival for influence, managed to secure Arles against Theodoric I of the Visigoths. The Visigoths concluded a treaty and were given Gallic noblemen as hostages. The later Emperor Avitus visited Theodoric, lived at his court and taught his sons.
Conflict between Placidia and Bonifacius started in 429, when Placidia appointed Bonifacius general of Libya. Procopius records that Aetius played the two against each other: he warned Placidia against Bonifacius and advised her to recall him to Rome, while simultaneously writing to Bonifacius to warn him that Placidia was about to summon him for no good reason, in order to do away with him.
Bonifacius, trusting the warning from Aetius, refused the summons; and, thinking his position untenable, sought an alliance with the Vandals in Spain. The Vandals subsequently crossed from Spain into Libya to join him. To friends of Bonifacius in Rome, this apparent act of hostility toward the Empire seemed entirely out of character for Bonifacius. They traveled to Carthage at Placidia's behest to intercede with him, and he showed them the letter from Aetius. The plot now revealed, his friends returned to Rome to apprise Placidia of the true situation. She did not move against Aetius, as he wielded great influence, and as the Empire was already in danger; but she urged Bonifacius to return to Rome "and not to permit the empire of the Romans to lie under the hand of barbarians."
Bonifacius now regretted his alliance with the Vandals and tried to persuade them to return to Spain. Gaiseric offered battle instead, and Bonifacius was besieged at Hippo Regius in Numidia by the sea. (Augustine of Hippo was its bishop and died in this siege.) Unable to take the city, the Vandals eventually raised the siege. The Romans, with reinforcements under Aspar, renewed the struggle but were routed and lost Africa to the Vandals.
Bonifacius had meanwhile returned to Rome, where Placidia raised him to the rank of patrician and made him "master-general of the Roman armies". Aetius returned from Gaul with an army of "barbarians", and was met by Bonifacius in the bloody Battle of Ravenna (432). Bonifacius won the battle, but was mortally wounded and died a few days later. Aetius was compelled to retire to Pannonia.
With the generals loyal to her having either died or defected to Aetius, Placidia acknowledged the inevitable: Aetius was recalled from exile in 433 and given the titles "magister militum" and "Patrician". The appointments effectively left Aetius in control of the entire Western Roman Army and gave him considerable influence over imperial policy. Placidia continued to act as regent until 437, though her direct influence over decisions was diminished. She would continue to exercise political influence until her death in 450—no longer, however, the only power at court.
Aetius later played a pivotal role in the defense of the Western Empire against Attila. Attila was diverted from Constantinople towards Italy by a letter from Placidia's own daughter Justa Grata Honoria in the spring of 450, asking him to rescue her from an unwanted marriage to a Roman senator that the Imperial family, including Placidia, was trying to force upon her. Honoria included her engagement ring with the letter. Though Honoria may not have intended a proposal of marriage, Attila chose to interpret her message as such. He accepted, asking for half of the western Empire as dowry. When Valentinian discovered the plan, only the influence of Placidia persuaded him not to kill Honoria. Valentinian wrote to Attila denying the legitimacy of the supposed marriage proposal. Attila, unconvinced, sent an emissary to Ravenna to proclaim that Honoria was innocent, that the proposal had been legitimate, and that he would come to claim what was rightfully his. Honoria was quickly married to Flavius Bassus Herculanus, though this did not prevent Attila from pressing his claim.
Placidia died shortly afterwards at Rome, in November 450, and was buried in the Theodosian family mausoleum adjacent to Old St. Peter's Basilica, later the chapel of Saint Petronilla. She did not live to see Attila ravage Italy in 451–453, using Justa's letter as his "legitimate" excuse.
Placidia was a devout Christian. She was involved in the building and restoration of various churches throughout her period of influence. She restored and expanded the Basilica of Saint Paul Outside the Walls in Rome and the Church of the Holy Sepulchre in Jerusalem. She built San Giovanni Evangelista, Ravenna in thanks for the sparing of her life and those of her children in a storm while crossing the Adriatic Sea. The dedicatory inscription reads "Galla Placidia, along with her son Placidus Valentinian Augustus and her daughter Justa Grata Honoria Augusta, paid off their vow for their liberation from the danger of the sea."
Her Mausoleum in Ravenna was one of the UNESCO World Heritage Sites inscribed in 1996. However, the building never served as her tomb, but was initially erected as a chapel dedicated to Lawrence of Rome. It is unknown whether the sarcophagi therein contained the bodies of other members of the Theodosian dynasty, or when they were placed in the building.
Galicia (Spain)
Galicia is an autonomous community of Spain and a historic nationality under Spanish law. Located in the northwest of the Iberian Peninsula, it includes the provinces of A Coruña, Lugo, Ourense and Pontevedra.
Galicia is bordered by Portugal to the south, the Spanish autonomous communities of Castile and León and Asturias to the east, the Atlantic Ocean to the west, and the Cantabrian Sea to the north. It had a population of 2,701,743 in 2018. Galicia has an extensive coastline, including its offshore islands and islets, among them the Cíes Islands, Ons, Sálvora, Cortegada, and the largest and most populated, A Illa de Arousa.
The area now called Galicia was first inhabited by humans during the Middle Paleolithic period, and takes its name from the Gallaeci, the Celtic people living north of the Douro River during the last millennium BC. Galicia was incorporated into the Roman Empire at the end of the Cantabrian Wars in 19 BC, and was made a Roman province in the 3rd century AD. In 410, the Germanic Suebi established a kingdom with its capital in Braga (Portugal); this kingdom was incorporated into that of the Visigoths in 585. In 711, the Islamic Umayyad Caliphate invaded the Iberian Peninsula, conquering the Visigothic kingdom of Hispania by 718; Galicia was incorporated into the Christian kingdom of Asturias by 740. During the Middle Ages, the kingdom of Galicia was occasionally ruled by its own kings, but most of the time it was united with the kingdom of León and later with that of Castile, while maintaining its own legal and customary practices and culture. From the 13th century on, the kings of Castile, as kings of Galicia, appointed an "Adiantado-mór", whose attributions passed to the "Governor and Captain General of the Kingdom of Galiza" in the last years of the 15th century. The Governor also presided over the "Real Audiencia do Reino de Galicia", a royal tribunal and government body. From the 16th century, the representation and voice of the kingdom were held by an assembly of deputies and representatives of the cities of the kingdom, the "Cortes" or "Junta of the Kingdom of Galicia". This institution was forcibly discontinued in 1833, when the kingdom was divided into four administrative provinces with no legal mutual links. During the 19th and 20th centuries, demand grew for self-government and for the recognition of the culture of Galicia. This resulted in the Statute of Autonomy of 1936, soon frustrated by Franco's "coup d'état" and subsequent long dictatorship.
After democracy was restored, the legislature passed the Statute of Autonomy of 1981, approved in a referendum and currently in force, which provides Galicia with self-government.
The interior of Galicia is characterized by a hilly landscape; mountain ranges rise in the east and south. The coastal areas are mostly an alternating series of "rías" and beaches. The climate of Galicia is usually temperate and rainy, with markedly drier summers; it is usually classified as oceanic. Its topographic and climatic conditions have made animal husbandry and farming the primary source of Galicia's wealth for most of its history, allowing for a relatively high density of population. With the exception of shipbuilding and food processing, Galicia's economy was based on farming and fishing until after the mid-20th century, when it began to industrialize. In 2018, the nominal gross domestic product was €62,900 million, with a nominal GDP per capita of €23,300. Unlike other Spanish regions, Galicia is characterised by the absence of a single metropolis dominating the territory. Instead, the urban network is made up of seven main cities (the four provincial capitals A Coruña, Pontevedra, Ourense and Lugo, the political capital Santiago de Compostela, and the industrial cities Vigo and Ferrol) and other small towns. The population is largely concentrated in two main areas: from Ferrol to A Coruña on the northern coast, and in the Rías Baixas region in the southwest, including the cities of Vigo, Pontevedra, and the interior city of Santiago de Compostela. There are smaller populations around the interior cities of Lugo and Ourense. The political capital is Santiago de Compostela, in the province of A Coruña. A Coruña is the largest city, with 213,418 inhabitants, while Vigo, in the province of Pontevedra, is the largest municipality, with 292,817 (2016).
Two languages are official and widely used today in Galicia: the native Galician, a Romance language closely related to Portuguese with which it shares the Galician-Portuguese medieval literature; and Spanish, usually known locally as "Castilian". While most Galicians are bilingual, a 2013 survey reported that 51% of the Galician population spoke Galician most often on a day-to-day basis, while 48% most often used Spanish.
Owing to its rich history of myth and legend, the land has been called "Terra Meiga" ('land of the witches' or 'witching land').
The name "Galicia" derives from the Latin toponym Callaecia, later "Gallaecia," related to the name of an ancient Celtic tribe that resided north of the Douro river, the Gallaeci or Callaeci in Latin, or ("Kallaïkoí") in Greek. These "Callaeci" were the first tribe in the area to help the Lusitanians against the invading Romans. The Romans applied their name to all the other tribes in the northwest who spoke the same language and lived the same life.
The etymology of the name has been studied since the 7th century by authors such as Isidore of Seville, who wrote that "Galicians are called so, because of their fair skin, as the Gauls", relating the name to the Greek word for milk. In the 21st century, some scholars have derived the name of the ancient Callaeci either from Proto-Indo-European *kal-n-eH2 'hill', through a local relational suffix -aik-, thus meaning 'the hill (people)'; or from Proto-Celtic *kallī- 'forest', thus meaning 'the forest (people)'. In any case, "Galicia", being itself a derivation of the ethnic name "Kallaikói", means 'the land of the Galicians'.
The most recent proposal comes from linguist Francesco Benozzo after identifying the root "gall-" / "kall-" in a number of Celtic words with the meaning "stone" or "rock", as follows: "gall" (old Irish), "gal" (Middle Welsh), "gailleichan" (Scottish Gaelic), "kailhoù" (Breton), "galagh" (Manx) and "gall" (Gaulish). Hence, Benozzo explains the ethnonym "Callaeci" as being "the stone people" or "the people of the stone" ("those who work with stones"), in reference to the builders of the ancient megaliths and stone formations so common in Galicia.
The name evolved during the Middle Ages from "Gallaecia," sometimes written "Galletia," to "Gallicia". In the 13th century, with the written emergence of the Galician language, "Galiza" became the most usual written form of the name of the country, being replaced during the 15th and 16th centuries by the current form, "Galicia." This coincides with the spelling of the Castilian Spanish name. The historical denomination "Galiza" became popular again during the end of the 19th and the first three-quarters of the 20th century, and is still used with some frequency today. The Xunta de Galicia, the local devolved government, uses "Galicia". The Royal Galician Academy, the institution responsible for regulating the Galician language, whilst recognizing "Galiza" as a legitimate current denomination, has stated that the only official name of the country is "Galicia".
The oldest attestation of human presence in Galicia has been found in the Eirós Cave, in the municipality of Triacastela, which has preserved animal remains and Neanderthal stone objects from the Middle Paleolithic. The earliest culture to have left significant architectural traces is the Megalithic culture, which expanded along the western European coasts during the Neolithic and Chalcolithic eras. Thousands of Megalithic tumuli are distributed throughout the country, but mostly along the coastal areas. Within each tumulus is a stone burial chamber known locally as "anta" (dolmen), frequently preceded by a corridor. Galicia was later influenced by the Bell Beaker culture. Its rich mineral deposits of tin and gold led to the development of Bronze Age metallurgy, and to the commerce of bronze and gold items all along the Atlantic coast of Western Europe. A shared elite culture evolved in this region during the Atlantic Bronze Age.
Dating from the end of the Megalithic era, and up to the Bronze Age, numerous stone carvings (petroglyphs) are found in open air. They usually represent cup and ring marks, labyrinths, deer, Bronze Age weapons, and riding and hunting scenes. Large numbers of these stone carvings can be found in the Rías Baixas regions, at places such as Tourón and Campo Lameiro.
The Castro culture ('Culture of the Castles') developed during the Iron Age, and flourished during the second half of the first millennium BC. It is usually considered a local evolution of the Atlantic Bronze Age, with later developments and influences, overlapping into the Roman era. Geographically, it corresponds to the people the Romans called Gallaeci, who comprised a large number of nations or tribes, among them the "Artabri", "Bracari", "Limici", "Celtici", "Albiones" and "Lemavi". They were capable fighters: Strabo described them as the most difficult foes the Romans encountered in conquering Lusitania, while Appian mentions their warlike spirit, noting that the women bore their weapons side by side with their men, frequently preferring death to captivity. According to Pomponius Mela, all the inhabitants of the coastal areas were Celtic people.
Gallaeci lived in "castros". These were usually annular forts, with one or more concentric earthen or stone walls, with a trench in front of each one. They were frequently located on hills, or on seashore cliffs and peninsulas. Some well known "castros" can be found on the seashore at Fazouro, Santa Tegra, Baroña and O Neixón, and inland at San Cibrao de Lás, Borneiro, Castromao and Viladonga. Some other distinctive features, such as temples, baths, reservoirs, warrior statues and decorative carvings, have been found associated with this culture, together with rich gold and metalworking traditions.
The Roman legions first entered the area under Decimus Junius Brutus in 137–136 BC, but the country was only incorporated into the Roman Empire in the time of Augustus (29 BC – 19 BC). The Romans were interested in Galicia mainly for its mineral resources, most notably gold. Under Roman rule, most Galician hillforts began to be – sometimes forcibly – abandoned, and Gallaeci served frequently in the Roman army as auxiliary troops. The Romans brought new technologies, new travel routes, new forms of organizing property, and a new language: Latin. The Roman Empire established its control over Galicia through camps ("castra") such as Aquis Querquennis, the Ciadella camp and Lucus Augusti (Lugo), roads ("viae"), and monuments such as the lighthouse known as the Tower of Hercules, in A Coruña; but the remoteness and diminished importance of the country from the 2nd century AD, when its gold mines stopped being productive, led to a lesser degree of "Romanization". In the 3rd century it was made a province, under the name Gallaecia, which also included northern Portugal, Asturias, and a large section of what today is known as Castile and León.
In the early 5th century, the deep crisis suffered by the Roman Empire allowed different tribes of Central Europe (Suebi, Vandals and Alani) to cross the Rhine and enter Roman territory on 31 December 406. Their progress towards the Iberian Peninsula forced the Roman authorities to establish a treaty ("foedus") by which the Suebi would settle peacefully and govern Galicia as imperial allies. Thus, from 409 Galicia was held by the Suebi, forming the first medieval kingdom to be created in Europe, in 411, even before the fall of the Roman Empire; it was also the first Germanic kingdom to mint coinage in Roman lands. During this period a Briton colony and bishopric (see Mailoc) was established in Northern Galicia (Britonia), probably as foederati and allies of the Suebi. In 585, the Visigothic King Leovigild invaded the Suebic kingdom of Galicia and defeated it, bringing it under Visigothic control.
The Muslims invaded Spain in 711, but the Arabs and Moors never managed to exert any real control over Galicia, which was later incorporated into the expanding Christian Kingdom of Asturias, usually known as Gallaecia or Galicia ("Yillīqiya" and "Galīsiya") by Muslim chroniclers, as well as by many European contemporaries. This era consolidated Galicia as a Christian society which spoke a Romance language. During the following century Galician noblemen took northern Portugal, conquering Coimbra in 871, thus freeing what was considered the southernmost city of ancient Galicia.
In the 9th century, the rise of the cult of the Apostle James in Santiago de Compostela gave Galicia a particular symbolic importance among Christians, an importance it would hold throughout the "Reconquista". As the Middle Ages went on, Santiago became a major pilgrimage destination and the Way of Saint James (Camiño de Santiago) a major pilgrimage road, a route for the propagation of Romanesque art and the words and music of the troubadours. During the 10th and 11th centuries, a period during which the Galician nobility became related to the royal family, Galicia was at times headed by its own native kings, while Vikings (locally known as "Leodemanes" or "Lordomanes") occasionally raided the coasts. The Towers of Catoira (Pontevedra) were built as a system of fortifications to prevent and stop the Viking raids on Santiago de Compostela.
In 1063, Ferdinand I of Castile divided his realm among his sons, and the Kingdom of Galicia was granted to Garcia II of Galicia. In 1072, it was forcibly annexed by Garcia's brother Alfonso VI of León; from that time Galicia was united with the Kingdom of León under the same monarchs. In the 13th century Alfonso X of Castile standardized the Castilian language and made it the language of court and government. Nevertheless, in his Kingdom of Galicia the Galician language was the only language spoken, and the most used in government and legal uses, as well as in literature.
During the 14th and 15th centuries, the progressive distancing of the kings from Galician affairs left the kingdom in the hands of the local knights, counts and bishops, who frequently fought one another to increase their fiefs, or simply to plunder the lands of others. At the same time, the deputies of the Kingdom in the "Cortes" stopped being summoned. The Kingdom of Galicia, slipping away from the control of the King, responded with a century of fiscal insubordination.
On the other hand, the lack of an effective royal justice system in the Kingdom led to the social conflict known as the "Guerras Irmandiñas" ('Wars of the Brotherhoods'), when leagues of peasants and burghers, with the support of a number of knights and noblemen, and under legal protection offered by the remote king, toppled many of the castles of the Kingdom and briefly drove the noblemen into Portugal and Castile. Soon after, in the late 15th century, in the dynastic conflict between Isabella I of Castile and Joanna La Beltraneja, part of the Galician aristocracy supported Joanna. After Isabella's victory, she initiated an administrative and political reform which the chronicler Jerónimo Zurita defined as the "doma del Reino de Galicia": 'It was then when the taming of Galicia began, because not just the local lords and knights, but all the people of that nation were the ones against the others very bold and warlike'. These reforms, while establishing a local government and tribunal (the "Real Audiencia del Reino de Galicia") and bringing the noblemen into submission, also brought most Galician monasteries and institutions under Castilian control, in what has been criticized as a process of centralisation. At the same time the kings began to call the "Xunta" or "Cortes" of the Kingdom of Galicia, an assembly of deputies or representatives of the cities of the Kingdom, to ask for monetary and military contributions. This assembly soon developed into the voice and legal representation of the Kingdom, and the depositary of its will and laws.
The modern period of the kingdom of Galicia began with the defeat of some of the most powerful Galician lords, such as Pedro Álvarez de Sotomayor, called Pedro Madruga, and Rodrigo Henriquez Osorio, at the hands of the Castilian armies sent to Galicia between the years 1480 and 1486. Isabella I of Castile, considered a usurper by many Galician nobles, defeated all armed resistance and definitively established the royal power of the Castilian monarchy. Fearing a general revolt, the monarchs ordered the banishing of the rest of the great lords like Pedro de Bolaño, Diego de Andrade or Lope Sánchez de Moscoso, among others.
The establishment of the Santa Hermandad in 1480, and of the Real Audiencia del Reino de Galicia in 1500—a tribunal and executive body directed by the Governor-Captain General as a direct representative of the King—initially implied the submission of the Kingdom to the Crown, after a century of unrest and fiscal insubordination. As a result, from 1480 to 1520 the Kingdom of Galicia contributed more than 10% of the total earnings of the Crown of Castile, including the Americas, well above its economic relevance. As in the rest of Spain, the 16th century was marked by population growth up to 1580, when the simultaneous wars with the Netherlands, France and England hampered Galicia's Atlantic commerce, which consisted mostly of exports of sardines, wood, and some cattle and wine.
In the late 15th century the written form of the Galician language began a slow decline as it was increasingly replaced by Spanish. This culminated in the "Séculos Escuros" ('the Dark Centuries') of the language, roughly from the 16th century to the mid-18th century, when written Galician almost completely disappeared except for private or occasional uses, although the spoken language remained the common tongue of the people in the villages and even the cities.
From that moment Galicia, which participated to a minor extent in the American expansion of the Spanish Empire, found itself at the center of the Atlantic wars fought by Spain against the French and the Protestant powers of England and the Netherlands, whose privateers attacked the coastal areas, but major assaults were not common as the coastline was difficult and the harbors easily defended. The most famous assaults were upon the city of Vigo by Sir Francis Drake in 1585 and 1589, and the siege of A Coruña in 1589 by the "English Armada". Galicia also suffered occasional slave raids by Barbary pirates, but not as frequently as the Mediterranean coastal areas. The most famous Barbary attack was the bloody sack of the town of Cangas in 1617. At the time, the king's petitions for money and troops became more frequent, due to the human and economic exhaustion of Castile; the Junta of the Kingdom of Galicia (the local "Cortes" or representative assembly) was initially receptive to these petitions, raising large sums, accepting the conscription of the men of the kingdom, and even commissioning a new naval squadron which was sustained with the incomes of the Kingdom.
After the outbreak of the wars with Portugal and Catalonia, the "Junta" changed its attitude, this time due to the exhaustion of Galicia, now involved not just in naval or overseas operations but also in a draining war with the Portuguese, a war which produced thousands of casualties and refugees and severely disrupted the local economy and commerce. So, in the second half of the 17th century the "Junta" frequently denied or considerably reduced the initial petitions of the monarch, and though tensions did not rise to the levels experienced in Portugal or Catalonia, there were frequent urban mutinies and some voices even called for the secession of the Kingdom of Galicia.
During the Peninsular War, the successful uprising of the local people against the new French authorities, together with the support of the British Army, limited the occupation to a six-month period in 1808–1809. During the pre-war period the Supreme Council of the Kingdom of Galicia ("Junta Suprema del Reino de Galicia"), self-proclaimed interim sovereign in 1808, was the sole government of the country and mobilized nearly 40,000 men against the invaders.
The 1833 territorial division of Spain put a formal end to the Kingdom of Galicia, unifying Spain into a single centralized monarchy. Instead of seven provinces and a regional administration, Galicia was reorganized into the current four provinces. Although it was recognized as a "historical region", that status was strictly honorific. In reaction, nationalist and federalist movements arose.
The liberal General Miguel Solís Cuetos led a separatist coup attempt in 1846 against the authoritarian regime of Ramón María Narváez. Solís and his forces were defeated at the Battle of Cacheiras, 23 April 1846, and the survivors, including Solís himself, were shot. They have taken their place in Galician memory as the Martyrs of Carral or simply the Martyrs of Liberty.
Defeated on the military front, Galicians turned to culture. The "Rexurdimento" focused on recovery of the Galician language as a vehicle of social and cultural expression. Among the writers associated with this movement are Rosalía de Castro, Manuel Murguía, Manuel Leiras Pulpeiro, and Eduardo Pondal.
In the early 20th century came another turn toward nationalist politics with "Solidaridad Gallega" (1907–1912), modeled on "Solidaritat Catalana" in Catalonia. Solidaridad Gallega failed, but in 1916 the "Irmandades da Fala" ('Brotherhood of the Language') developed, first as a cultural association but soon as a full-blown nationalist movement. Vicente Risco and Ramón Otero Pedrayo were the outstanding cultural figures of this movement, the magazine "Nós" ('Us'), founded in 1920, its most notable cultural institution, and Lois Peña Novo its outstanding political figure.
The Second Spanish Republic was declared in 1931. During the republic, the Partido Galeguista (PG) was the most important of a shifting collection of Galician nationalist parties. Following a referendum on a Galician Statute of Autonomy, Galicia was granted the status of an autonomous region.
Galicia was spared the worst of the fighting in the Spanish Civil War: it was one of the areas where the initial coup attempt at the outset of the war was successful, and it remained in Nationalist (Franco's) hands throughout the war. While there were no pitched battles, there was repression and death: all political parties were abolished, as were all labor unions and Galician nationalist organizations such as the "Seminario de Estudos Galegos". Galicia's statute of autonomy was annulled (as were those of Catalonia and the Basque provinces once those were conquered). According to Carlos Fernández Santander, at least 4,200 people were killed either extrajudicially or after summary trials, among them republicans, communists, Galician nationalists, socialists and anarchists. Victims included the civil governors of all four Galician provinces; Juana Capdevielle, the wife of the governor of A Coruña; mayors such as Ánxel Casal of Santiago de Compostela, of the Partido Galeguista; prominent socialists such as Jaime Quintanilla in Ferrol and Emilio Martínez Garrido in Vigo; Popular Front deputies Antonio Bilbatúa, José Miñones, Díaz Villamil and Ignacio Seoane, and former deputy Heraclio Botana; soldiers who had not joined the rebellion, such as Generals Rogelio Caridad Pita and Enrique Salcedo Molinuevo and Admiral Antonio Azarola; and the founders of the PG, Alexandre Bóveda and Víctor Casas, as well as other professionals sympathetic to republicans and nationalists, such as the journalist Manuel Lustres Rivas and the physician Luis Poza Pastrana. Many others were forced to flee into exile, or were victims of other reprisals and removed from their jobs and positions.
General Francisco Franco – himself a Galician from Ferrol – ruled as dictator from the civil war until his death in 1975. Franco's centralizing regime suppressed any official use of the Galician language, including the use of Galician names for newborns, although its everyday oral use was not forbidden. Among the attempts at resistance were small leftist guerrilla groups such as those led by José Castro Veiga ("O Piloto") and Benigno Andrade ("Foucellas"), both of whom were ultimately captured and executed. In the 1960s, ministers such as Manuel Fraga Iribarne introduced some reforms allowing technocrats affiliated with Opus Dei to modernize administration in a way that facilitated capitalist economic development. However, for decades Galicia was largely confined to the role of a supplier of raw materials and energy to the rest of Spain, causing environmental havoc and leading to a wave of migration to Venezuela and to various parts of Europe. Fenosa, the monopolistic supplier of electricity, built hydroelectric dams, flooding many Galician river valleys.
The Galician economy finally began to modernize with a Citroën factory in Vigo, the modernization of the canning industry and the fishing fleet, and eventually a modernization of small peasant farming practices, especially in the production of cows' milk. In the province of Ourense, businessman and politician Eulogio Gómez Franqueira gave impetus to the raising of livestock and poultry by establishing the Cooperativa Orensana S.A. (Coren).
During the last decade of Franco's rule, there was a renewal of nationalist feeling in Galicia. The early 1970s were a time of unrest among university students, workers, and farmers. In 1972, general strikes in Vigo and Ferrol cost the lives of Amador Rey and Daniel Niebla. Later, the bishop of Mondoñedo-Ferrol, Miguel Anxo Araúxo Iglesias, wrote a pastoral letter, not well received by the Franco regime, about a demonstration in Bazán (Ferrol) in which two workers died.
As part of the transition to democracy upon the death of Franco in 1975, Galicia regained its status as an autonomous region within Spain with the Statute of Autonomy of 1981, which begins, "Galicia, historical nationality, is constituted as an Autonomous Community to accede to its self-government, in agreement with the Spanish Constitution and with the present Statute (...)". Varying degrees of nationalist or independentist sentiment are evident at the political level. The "Bloque Nacionalista Galego" (BNG) is a conglomerate of left-wing parties and individuals that claims for Galicia the political status of a nation.
From 1990 to 2005, Manuel Fraga, a former minister and ambassador under the Franco dictatorship, presided over the Galician autonomous government, the Xunta de Galicia. Fraga had been associated with the "Partido Popular" ('People's Party', Spain's main national conservative party) since its founding. In 2002, when the oil tanker Prestige sank and covered the Galician coast in oil, Fraga was accused by the grassroots movement "Nunca Máis" ("Never again") of having been unwilling to react. In the 2005 Galician elections, the People's Party lost its absolute majority, though it remained (barely) the largest party in the parliament, with 43% of the total votes. As a result, power passed to a coalition of the "Partido dos Socialistas de Galicia" (PSdeG, 'Galician Socialists' Party'), a federal sister party of Spain's main social-democratic party, the "Partido Socialista Obrero Español" (PSOE, 'Spanish Socialist Workers' Party'), and the nationalist "Bloque Nacionalista Galego" (BNG). As the senior partner in the new coalition, the PSdeG nominated its leader, Emilio Pérez Touriño, to serve as Galicia's new president, with Anxo Quintana, the leader of the BNG, as vice president.
In 2009, the PSdeG-BNG coalition lost the elections, and the government returned to the People's Party (conservative), even though the coalition parties together obtained the most votes. Alberto Núñez Feijóo (PPdeG) then became Galicia's president.
Galicia has a surface area of . Its northernmost point, at 43°47′N, is Estaca de Bares (also the northernmost point of Spain); its southernmost, at 41°49′N, is on the Portuguese border in the Baixa Limia-Serra do Xurés Natural Park. Its easternmost longitude, at 6°42′W, lies on the border between the province of Ourense and the Castilian-Leonese province of Zamora; its westernmost, at 9°18′W, is reached in two places: the A Nave Cape in Fisterra (also known as Finisterre) and Cape Touriñán, both in the province of A Coruña.
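As a rough sanity check on these extreme points, the quoted latitudes imply Galicia's north-south extent. A minimal sketch (the kilometres-per-degree conversion factor is a standard spherical approximation, not a figure from this article):

```python
# Back-of-the-envelope north-south extent of Galicia from the latitudes
# quoted above: Estaca de Bares at 43°47'N, Baixa Limia at 41°49'N.
# The 111.32 km-per-degree factor is an assumed spherical approximation.

def dms_to_deg(degrees: int, minutes: int) -> float:
    """Convert degrees and arc-minutes to decimal degrees."""
    return degrees + minutes / 60.0

north = dms_to_deg(43, 47)       # Estaca de Bares
south = dms_to_deg(41, 49)       # Baixa Limia-Serra do Xures
extent_deg = north - south       # ~1.97 degrees of latitude
extent_km = extent_deg * 111.32  # roughly 219 km north to south

print(f"{extent_deg:.4f} deg ~= {extent_km:.0f} km")
```

The result, on the order of 220 km, matches the intuition of a compact region occupying the northwestern corner of the Iberian Peninsula.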
The interior of Galicia is a hilly landscape composed of relatively low mountain ranges without sharp peaks, rising higher only in the eastern mountains. There are many rivers, most (though not all) running down relatively gentle slopes in narrow river valleys, though at times their courses become far more rugged, as in the canyons of the Sil river, Galicia's second most important river after the Miño.
Topographically, a remarkable feature of Galicia is the presence of many firth-like inlets along the coast, estuaries that were drowned with rising sea levels after the ice age. These are called "rías" and are divided into the smaller "Rías Altas" ("High Rías"), and the larger "Rías Baixas" ("Low Rías"). The "Rías Altas" include Ribadeo, Foz, Viveiro, O Barqueiro, Ortigueira, Cedeira, Ferrol, Betanzos, A Coruña, Corme e Laxe and Camariñas. The Rías Baixas, found south of Fisterra, include Corcubión, Muros e Noia, Arousa, Pontevedra and Vigo. The Rías Altas can sometimes refer only to those east of Estaca de Bares, with the others being called "Rías Medias" ("Intermediate Rías").
Erosion by the Atlantic Ocean has contributed to the great number of capes. Besides the aforementioned Estaca de Bares in the far north, separating the Atlantic Ocean from the Cantabrian Sea, other notable capes are Cape Ortegal, Cape Prior, Punta Santo Adrao, Cape Vilán, Cape Touriñán (the westernmost point in Galicia), and Cape Finisterre or Fisterra, considered by the Romans, along with Finistère in Brittany and Land's End in Cornwall, to be the end of the known world.
All along the Galician coast are various archipelagos near the mouths of the "rías". These archipelagos provide protected deepwater harbors and also provide habitat for seagoing birds. A 2007 inventory estimates that the Galician coast has 316 archipelagos, islets, and freestanding rocks. Among the most important of these are the archipelagos of Cíes, Ons, and Sálvora. Together with Cortegada Island, these make up the Atlantic Islands of Galicia National Park. Other significant islands are Islas Malveiras, Islas Sisargas, and, the largest and holding the largest population, Arousa Island.
The coast of this 'green corner' of the Iberian Peninsula attracts great numbers of tourists, although real estate development in the 2000–2010 decade partially degraded it.
Galicia is quite mountainous, a fact which has contributed to the isolation of its rural areas and hampered communications, most notably inland. The main mountain range is the Macizo Galaico (Serra do Eixe, Serra da Lastra, Serra do Courel), also known as the "Macizo Galaico-Leonés", located in the east, bordering Castile and León. Other noteworthy mountain ranges are O Xistral (northern Lugo), the Serra dos Ancares (on the border with León and Asturias), O Courel (on the border with León), O Eixe (the border between Ourense and Zamora), Serra de Queixa (in the center of Ourense province), O Faro (the border between Lugo and Pontevedra), Cova da Serpe (border of Lugo and A Coruña), Montemaior (A Coruña), Montes do Testeiro, Serra do Suído, and Faro de Avión (between Pontevedra and Ourense); and, to the south, A Peneda, O Xurés and O Larouco, all on the border of Ourense and Portugal.
The highest point in Galicia is Trevinca or Pena Trevinca, located in the Serra do Eixe, at the border between Ourense and the provinces of León and Zamora. Other tall peaks are Pena Survia in the Serra do Eixe, O Mustallar in Os Ancares, and Cabeza de Manzaneda in Serra de Queixa, where there is a ski resort.
Galicia is poetically known as the "country of the thousand rivers" ("o país dos mil ríos"). The largest and most important of these rivers is the Miño, poetically known as "O Pai Miño" (Father Miño), whose tributary the Sil has carved a spectacular canyon. Most inland rivers are tributaries of this river system, which drains much of the interior. Other rivers, most of them with short courses, run directly into the Atlantic Ocean or the Cantabrian Sea; only the Navia, Ulla, Tambre, and Limia have comparatively long courses.
Galicia's many hydroelectric dams take advantage of its steep, deep, narrow rivers and their canyons. Because of their steep courses, few of Galicia's rivers are navigable, other than the lower portion of the Miño and the portions of various rivers that have been dammed into reservoirs. Some rivers are navigable by small boats in their lower reaches, something a number of semi-aquatic festivals and pilgrimages take great advantage of.
Galicia has preserved some of its dense forests. It is relatively unpolluted, and its landscapes composed of green hills, cliffs and "rias" are generally different from what is commonly understood as Spanish landscape. Nevertheless, Galicia has some important environmental problems.
Deforestation and forest fires are a problem in many areas, as is the continual spread of the eucalyptus tree, a species imported from Australia and actively promoted by the paper industry since the mid-20th century. Galicia is one of the more forested areas of Spain, but the majority of Galicia's plantations, usually of eucalyptus or pine, lack any formal management. Massive eucalyptus planting, especially of "Eucalyptus globulus", began in the Francisco Franco era, largely on behalf of the paper company Empresa Nacional de Celulosas de España (ENCE) in Pontevedra, which wanted it for its pulp. Galician photographer Delmi Álvarez began documenting the fires in Galicia in 2006 in a project called "Queiman Galiza" ("Burn Galicia"). Wood products figure significantly in Galicia's economy. Apart from tree plantations, Galicia is also notable for the extensive surface occupied by meadows used for animal husbandry, especially cattle raising, an important activity. Hydroelectric development on most rivers has been a serious concern for local conservationists during recent decades.
Fauna, most notably the European wolf, has suffered because of the actions of livestock owners and farmers, and because of the loss of habitats, whilst the native deer species have declined because of hunting and development.
Oil spills are a major issue. The Prestige oil spill in 2002 spilt more oil than the Exxon Valdez in Alaska.
Galicia has more than 2,800 plant species and 31 endemic plant taxa. Plantations and mixed forests of eucalyptus predominate in the west and north; a few oak forests (variously known locally as "fragas" or "devesas") remain, particularly in the north-central part of the province of Lugo and the north of the province of A Coruña (Fragas do Eume). In the interior regions, oak and bushland predominate.
Galicia has 262 inventoried species of vertebrates, including 12 species of freshwater fish, 15 amphibians, 24 reptiles, 152 birds, and 59 mammals.
The animals most often thought of as being "typical" of Galicia are the livestock raised there. The Galician horse is native to the region, as is the Galician Blond cow and the domestic fowl known as the "galiña de Mos". The last is an endangered species, although it is showing signs of a comeback since 2001.
Galicia is home to one of the largest populations of wolves in western Europe. Galicia's woodlands and mountains are also home to rabbits, hares, wild boars, and roe deer, all of which are popular with hunters. Several important bird migration routes pass through Galicia, and some of the community's relatively few environmentally protected areas are Special Protection Areas for these birds (such as on the Ría de Ribadeo). On the domestic side, Galicia has been described by the author Manuel Rivas as the "land of one million cows"; Galician Blond and Holstein cattle coexist on its meadows and farms.
Being located on the Atlantic coastline, Galicia has a very mild climate for its latitude, and the marine influence affects most of the region to varying degrees. In comparison to similar latitudes on the other side of the Atlantic, winters are exceptionally mild, with consistent rainfall. At sea level snow is exceptional, with temperatures only occasionally dropping below freezing; on the other hand, snow falls regularly in the eastern mountains from November to May. Overall, the climate of Galicia is comparable to that of the Pacific Northwest: Pontevedra, the warmest coastal station, has a high yearly mean temperature; Ourense, somewhat inland, is only slightly warmer; and Lugo, to the north, is colder, with a yearly mean comparable to that of Portland, Oregon.
In coastal areas summers are tempered, with moderate daily maximums in Vigo and cooler conditions still in A Coruña. Temperatures do, however, soar in inland areas such as Ourense, where very hot days are common.
The lands of Galicia fall into two different areas of the Köppen climate classification: a southern area (roughly, the provinces of Ourense and Pontevedra) with an appreciable summer drought, classified as a warm-summer Mediterranean climate ("Csb") with mild temperatures; and the western and northern coastal regions, the provinces of Lugo and A Coruña, characterized by an oceanic climate ("Cfb"), with precipitation distributed more uniformly through the year and milder summers. However, southern coastal areas are often classified as oceanic as well, since their precipitation averages remain significantly higher than those of a typical Mediterranean climate.
As an example, Santiago de Compostela, the capital city, has an average of 129 rainy days (>1 mm) per year (with just 17 rainy days in the three summer months), 2,101 hours of sunshine per year, and just 6 days of frost per year. The colder city of Lugo, to the east, averages 1,759 hours of sunshine per year, 117 days with precipitation (>1 mm), and 40 days of frost per year. The more mountainous parts of the provinces of Ourense and Lugo receive significant snowfall during the winter months. The sunniest city is Pontevedra, with 2,223 hours of sunshine per year.
Climate data for some locations in Galicia (average 1981–2010):
Galicia has partial self-governance, in the form of a devolved government established on 16 March 1978 and reinforced by the Galician Statute of Autonomy, ratified on 28 April 1981. There are three branches of government: the executive branch, the Xunta de Galicia, consisting of the President and the other councillors; the legislative branch, consisting of the Galician Parliament; and the judicial branch, consisting of the High Court of Galicia and lower courts.
The Xunta de Galicia is a collective entity with executive and administrative power. It consists of the President, a vice president, and twelve councillors. Administrative power is largely delegated to dependent bodies. The Xunta also coordinates the activities of the provincial councils () located in A Coruña, Pontevedra, Ourense and Lugo.
The President of the Xunta directs and coordinates the actions of the Xunta. He or she is simultaneously the representative of the autonomous community and of the Spanish state in Galicia. He or she is a member of the parliament and is elected by its deputies and then formally named by the monarch of Spain.
The Galician Parliament consists of 75 deputies elected by universal adult suffrage under a system of proportional representation. The franchise also includes Galicians who reside abroad. Elections occur every four years.
The last elections, held 25 September 2016, resulted in the following distribution of seats:
There are 314 municipalities in Galicia, each of which is run by a mayor–council government.
There is a further subdivision of local government below the municipality, each with its own council and mayor. There are nine of these entities in Galicia: Arcos da Condesa, Bembrive, Camposancos, Chenlo, Morgadáns, Pazos de Reis, Queimadelos, Vilasobroso and Berán.
Galicia is also traditionally subdivided in some 3,700 civil parishes, each one comprising one or more "vilas" (towns), "aldeas" (villages), "lugares" (hamlets) or "barrios" (neighbourhoods).
Galicia's interests are represented at national level by 25 elected deputies in the Congress of Deputies and 19 senators in the Senate – of these, 16 are elected and 3 are appointed by the Galician parliament.
Prior to the 1833 territorial division of Spain, Galicia was divided into seven administrative provinces:
From 1833, these seven provinces, which dated from the 15th century, were consolidated into four:
Galicia is further divided into 53 "comarcas", 315 municipalities (93 in A Coruña, 67 in Lugo, 92 in Ourense, 62 in Pontevedra) and 3,778 parishes. Municipalities are divided into parishes, which may be further divided into "aldeas" ("hamlets") or "lugares" ("places"). This traditional breakdown into such small areas is unusual when compared to the rest of Spain. Roughly half of the named population entities of Spain are in Galicia, which occupies only 5.8 percent of the country's area. It is estimated that Galicia has over a million named places, over 40,000 of them being communities.
Compared with the other regions of Spain, Galicia's major economic strength is its fishing industry. Galicia is a land of economic contrast: while the western coast, with its major population centers and its fishing and manufacturing industries, is prosperous and growing in population, the rural hinterland (the provinces of Ourense and Lugo) is economically dependent on traditional agriculture based on small landholdings called "minifundios". However, the rise of tourism, sustainable forestry, and organic and traditional agriculture is bringing other possibilities to the Galician economy without compromising the preservation of natural resources and local culture.
Traditionally, Galicia depended mainly on agriculture and fishing. Reflecting that history, the European Fisheries Control Agency, which coordinates fishing controls in European Union waters, is based in Vigo. Nonetheless, today the tertiary sector of the economy (the service sector) is the largest, with 582,000 workers out of a regional total of 1,072,000 (as of 2002).
The secondary sector (manufacturing) includes shipbuilding in Vigo and Ferrol, and textiles and granite work in A Coruña. A Coruña also manufactures automobiles, though not nearly on the scale of the French automobile manufacturing in Vigo. The "Centro de Vigo de PSA Peugeot Citroën", founded in 1958, makes about 450,000 vehicles annually (455,430 in 2006); a Citroën C4 Picasso made in 2007 was its nine-millionth vehicle.
Arteixo, an industrial municipality in the A Coruña metropolitan area, is the headquarters of Inditex, the world's largest fashion retailer. Of their eight brands, Zara is the best-known; indeed, it is the best-known Spanish brand of any sort on an international basis. For 2007, Inditex had 9,435 million euros in sales for a net profit of 1,250 million euros. The company president, Amancio Ortega, is the richest person in Spain and indeed Europe with a net worth of 45 billion euros.
Galicia is home to a regional savings bank and to Spain's two oldest commercial banks, Banco Etcheverría (the oldest) and Banco Pastor, owned since 2011 by Banco Popular Español.
Galicia was late to catch the tourism boom that has swept Spain in recent decades, but the coastal regions (especially the Rías Baixas and Santiago de Compostela) are now significant tourist destinations, especially popular with visitors from the rest of Spain, who make up the majority of tourists. In 2007, 5.7 million tourists visited Galicia, an 8% increase over the previous year and part of a continual pattern of growth in this sector; 85% of them visited Santiago de Compostela. Tourism constitutes 12% of Galician GDP and employs about 12% of the regional workforce.
The Gross domestic product (GDP) of the autonomous community was 62.6 billion euros in 2018, accounting for 5.2% of Spanish economic output. GDP per capita adjusted for purchasing power was 24,900 euros or 82% of the EU27 average in the same year. The GDP per employee was 95% of the EU average.
The unemployment rate stood at 15.7% in 2017 and was lower than the national average.
Galicia's main airport is Santiago de Compostela Airport. Having been used by 2,083,873 passengers in 2014, it connects the Galician capital with cities in Spain as well as several major European cities. There are two other international airports in Galicia: A Coruña Airport – Alvedro and Vigo-Peinador Airport.
The most important Galician fishing port is the Port of Vigo; it is one of the world's leading fishing ports, second only to Tokyo, with an annual catch worth 1,500 million euros. In 2007 the port handled large quantities of fish and seafood, together with other cargoes. Other important ports are Ferrol, A Coruña, Marín and the smaller port of Vilagarcía de Arousa, as well as notable recreational ports in the city of Pontevedra and in Burela. Beyond these, Galicia has 120 other organized ports.
Within Galicia are the Autopista AP-9 from Ferrol to Vigo and the Autopista AP-53 (also known as AG-53, because it was initially built by the Xunta de Galicia) from Santiago to Ourense. Additional roads under construction include the Autovía A-54 from Santiago de Compostela to Lugo and the Autovía A-56 from Lugo to Ourense. The Xunta de Galicia has built roads connecting comarcal capitals, such as the aforementioned AG-53, the Autovía AG-55 connecting A Coruña to Carballo, and the AG-41 connecting Pontevedra to Sanxenxo.
The first railway line in Galicia, inaugurated on 15 September 1873, ran from O Carril, Vilagarcía de Arousa, to Cornes, Conxo, Santiago de Compostela. A second line, connecting A Coruña and Lugo, was inaugurated in 1875, and in 1883 Galicia was first connected by rail to the rest of Spain, by way of O Barco de Valdeorras. Today several lines operated by Adif and Renfe Operadora connect all the important Galician cities, a line operated by FEVE connects Ferrol to Ribadeo and Oviedo, and the Ponferrada-Monforte de Lemos-Ourense-Vigo line is electrified. Several high-speed rail lines are under construction. Among these are the Olmedo-Zamora-Galicia high-speed rail line, which partly opened in 2011, and the AVE Atlantic Axis route, which will connect the major Galician Atlantic coast cities of A Coruña, Santiago de Compostela, Pontevedra and Vigo to Portugal. Another projected AVE line will connect Ourense to Pontevedra and Vigo.
Galicia's inhabitants are known as Galicians. For well over a century Galicia has grown more slowly than the rest of Spain, due largely to a poorer economy compared with other regions of Spain and to emigration to Latin America and to other parts of Spain; at times Galicia has lost population in absolute terms. In 1857, Galicia had Spain's densest population and constituted 11.5% of the national population; today, only 6.1% of the Spanish population resides in the autonomous community. This is the result of an exodus of Galicians since the 19th century, first to South America and later to Central Europe, and of the development of population centers and industry in other parts of Spain.
According to the 2006 census, Galicia has a fertility rate of 1.03 children per woman, compared to 1.38 nationally, and far below the figure of 2.1 that represents a stable populace. Lugo and Ourense provinces have the lowest fertility rates in Spain, 0.88 and 0.93, respectively.
In northern Galicia, the A Coruña-Ferrol metropolitan area has become increasingly dominant in terms of population. The population of the city of A Coruña in 1900 was 43,971. The population of the rest of the province, including the City and Naval Station of nearby Ferrol and Santiago de Compostela, was 653,556. A Coruña's growth occurred after the Spanish Civil War at the same speed as other major Galician cities, but since the revival of democracy after the death of Francisco Franco, A Coruña has grown at a faster rate than all the other Galician cities.
The rapid population growth of A Coruña, Vigo and, to a lesser degree, other major Galician cities such as Ourense, Pontevedra and Santiago de Compostela in the years following the Spanish Civil War in the mid-20th century occurred as the rural population declined: many villages and hamlets across the four provinces of Galicia disappeared or nearly disappeared during the same period. Economic development and the mechanization of agriculture led to fields being abandoned as most of the population moved to find jobs in the main cities. The number of people working in the tertiary and quaternary sectors of the economy has increased significantly.
Since 1999, the absolute number of births in Galicia has been increasing. In 2006, 21,392 births were registered in Galicia, 300 more than in 2005, according to the Instituto Galego de Estatística. Since 1981, the Galician life expectancy has increased by five years, thanks to a higher quality of life.
Roman Catholicism is, by far, the largest religion in Galicia. In 2012, the proportion of Galicians that identify themselves as Roman Catholic was 82.2%.
The principal cities are A Coruña, Ourense, Lugo, Pontevedra, Santiago de Compostela – the political capital and archiepiscopal seat – Vigo and Ferrol.
The largest conurbations are:
Like many rural areas of Western Europe, Galicia's history has been defined by mass emigration. Significant internal migration took place from Galicia in the late 19th and early 20th centuries to the industrialized Spanish cities of Barcelona, Bilbao, Zaragoza and Madrid. Other Galicians emigrated to Latin America – Argentina, Uruguay, Venezuela, Mexico, Brazil and Cuba in particular. Fidel Castro was born in Cuba to a wealthy planter father who was an immigrant from Galicia; Castro's mother was of Galician descent.
The two cities with the greatest number of people of Galician descent outside Galicia are Buenos Aires, Argentina, and nearby Montevideo, Uruguay. Immigration from Galicia was so significant in these areas that Argentines and Uruguayans now commonly refer to all Spaniards as "gallegos" (Galicians).
During the Franco years, there was a new wave of emigration out of Galicia to other European countries, most notably to France, Germany, Switzerland, and the United Kingdom. Many of these immigrant or expatriate communities have their own groups or clubs, which they formed in the first decades of settling in a new place. The Galician diaspora is so widespread that websites such as Fillos de Galicia have been created in the 21st century to organize and form a network of ethnic Galicians throughout the world.
After this, a third wave took the form of internal migration within Spain, toward its more heavily industrialised areas such as the Basque Country and Catalonia.
The proportion of foreign-born people in Galicia is only 2.9 percent compared to a national figure of 10 percent; among the autonomous communities, only Extremadura has a lower percentage of immigrants. Of the foreign nationals resident in Galicia, 17.93 percent are the ethnically related Portuguese, 10.93 percent are Colombian and 8.74 percent Brazilian.
Galicia has two official languages: Galician (Galician: "galego") and Spanish (also known in Spain as "castellano", "Castilian"), both of them Romance languages. The former originated in the region itself; the latter was brought from Castile. Galician is recognized in the Statute of Autonomy of Galicia as the "lingua propia" ("own language") of Galicia.
Galician is closely related to Portuguese; the two share a common medieval phase known as Galician-Portuguese. The independence of Portugal since the late Middle Ages favored the divergence of Galician and Portuguese as they developed. Though they are considered independent languages in Galicia, the shared history of Galician and Portuguese is widely acknowledged; in 2014, the Galician parliament approved Law 1/2014 on the promotion of Portuguese and links with the Lusophony.
The official form of the Galician language has been standardized by the Real Academia Galega on the basis of the literary tradition. Although there are local dialects, Galician media conform to this standard form, which is also used in primary, secondary, and university education. There are more than three million Galician speakers in the world, placing Galician in the lower ranks of the 150 most widely spoken languages on earth.
During more than four centuries of Castilian domination, Spanish was the only official language in Galicia, and Galician faded from day-to-day use in urban areas. Since the re-establishment of democracy in Spain, in particular since the passage and implementation of the "Lei de Normalización Lingüística" ("Law of Linguistic Normalization", Ley 3/1983, 15 June 1983), the first generation of students in mass education has attended schools conducted in Galician (Castilian Spanish is also taught).
Since the late 20th century and the establishment of Galicia's autonomy, the Galician language has been resurgent; in the cities it is generally used as a second language by most speakers. According to a 2001 census, 99.16 percent of the population of Galicia understood the language, 91.04 percent spoke it, 68.65 percent could read it and 57.64 percent could write it. The first two figures (understanding and speaking) were roughly the same as a decade earlier, but there were great gains in the percentage able to read and write Galician: a decade earlier, only 49.3 percent of the population could read it and 34.85 percent could write it. During the Franco era the teaching of Galician was prohibited, so older people today may speak the language but lack written competence. Among the regional languages of Spain, Galician has the highest percentage of speakers within its population.
The earliest known document in Galician-Portuguese dates from 1228. The "Foro do bo burgo do Castro Caldelas" was granted by Alfonso IX of León to the town of Burgo, in Castro Caldelas, after the model of the constitutions of the town of Allariz. A distinct Galician literature emerged during the Middle Ages: In the 13th century important contributions were made to the Romance canon in Galician-Portuguese, the most notable those by the troubadour Martín Codax, the priest Airas Nunes, King Denis of Portugal, and King Alfonso X of Castile, "Alfonso O Sabio" ("Alfonso the Wise"), the same monarch who began the process of establishing the hegemony of Castilian. During this period, Galician-Portuguese was considered the language of love poetry in the Iberian Romance linguistic culture. The names and memories of Codax and other popular cultural figures are well preserved in modern Galicia.
Christianity is the most widely practised religion in Galicia. It was introduced in Late Antiquity and for a few centuries was practiced alongside the native Celtic religion, which, incidentally, was re-established as an officially recognised religion in 2015. Today about 77.7% of Galicians identify as Catholic. Most Christians adhere to Roman Catholicism, though only 32.1% of the population describe themselves as active members. The Catholic Church in Galicia has had its primatial seat in Santiago de Compostela since the 12th century.
Since the Middle Ages, the Galician Catholic Church has been organized into five ecclesiastical dioceses (Lugo, Ourense, Santiago de Compostela, Mondoñedo-Ferrol and Tui-Vigo). While these may have coincided with contemporary 15th-century civil provinces, they no longer match the modern civil provincial divisions. The church is led by one archbishop and four bishops. The five dioceses of Galicia are divided into 163 districts and 3,792 parishes, a few of which are governed by administrators and the remainder by parish priests.
The patron saint of Galicia is Saint James the Greater. According to Catholic tradition, his body was discovered in 814 near Compostela. After that date, the relics of Saint James attracted an extraordinary number of pilgrims. Since the 9th century these relics have been kept in the heart of the church – the modern-day cathedral – dedicated to him. There are many other Galician and associated saints; some of the best-known are: Saint Ansurius, Saint Rudesind, Saint Mariña of Augas Santas, Saint Senorina, Trahamunda and Froilan.
Galicia's education system is administered by the regional government's Ministry of Education and University Administration. 76% of Galician teenagers obtain a high school degree, the fifth-highest rate among the 17 autonomous communities.
There are three public universities in Galicia: University of A Coruña with campuses in A Coruña and Ferrol, University of Santiago de Compostela with campuses in Santiago de Compostela and Lugo and the University of Vigo with campuses in Pontevedra, Ourense and Vigo.
Galicia's public healthcare system is the Servizo Galego de Saúde (SERGAS). It is administered by the regional government's Ministry of Health.
Hundreds of ancient standing stone monuments such as dolmens, menhirs and megalithic tumuli were erected during the prehistoric period in Galicia; amongst the best-known are the dolmens of Dombate, Corveira, Axeitos of Pedra da Arca, and menhirs such as the "Lapa de Gargñáns". From the Iron Age, Galicia has a rich heritage based mainly on a great number of hill forts, a few of them excavated, such as Baroña, Sta. Tegra, San Cibrao de Lás and Formigueiros among others. With the introduction of Ancient Roman architecture there was a development of basilicas, castra, city walls, cities, villas, Roman temples, Roman roads, and the Roman bridge of Ponte Vella. It was the Romans who founded some of the first cities in Galicia, such as Lugo and Ourense. Perhaps the best-known examples are the Roman Walls of Lugo and the Tower of Hercules in A Coruña.
During the Middle Ages, a great number of fortified castles were built by Galician feudal nobles to assert their power against their rivals. Although most of them were demolished during the Irmandiño Wars (1466–1469), some Galician castles survived, among them Pambre, Castro Caldelas, Sobroso, Soutomaior and Monterrei. Ecclesiastical architecture arose early in Galicia: the first churches and monasteries, such as San Pedro de Rocas, began to be built in the 5th and 6th centuries. However, the most famous medieval architecture in Galicia is Romanesque, as in most of Western Europe. Some of the greatest examples of Romanesque churches in Galicia are the Cathedral of Santiago de Compostela, the Ourense Cathedral, Saint John of Caaveiro, Our Lady Mary of Cambre and the Church of San Xoán of Portomarín, among others.
Galician cuisine often uses fish and shellfish. The "empanada" is a meat or fish pie, with a bread-like base, top and crust, the meat or fish filling usually being in a tomato sauce with onions and garlic. "Caldo galego" is a hearty soup whose main ingredients are potatoes and a local vegetable named grelo (broccoli rabe). The latter is also employed in "lacón con grelos", a typical carnival dish consisting of pork shoulder boiled with "grelos", potatoes and chorizo. "Centolla" is the equivalent of king crab. It is prepared by being boiled alive, having its main body opened like a shell, and then having its innards mixed vigorously. Another popular dish is octopus, boiled (traditionally in a copper pot) and served on a wooden plate, cut into small pieces and laced with olive oil, sea salt and "pimentón" (Spanish paprika). This dish is called "pulpo a la gallega" or, in Galician, "polbo á feira", which roughly translates as "fair-style octopus", most commonly rendered as "Galician-style octopus". There are several regional varieties of cheese. The best-known is the so-called "tetilla", named after its breast-like shape. Other highly regarded varieties include the San Simón cheese from Vilalba and the creamy cheese produced in the Arzúa-Ulloa area. A classic dessert is "filloas", crêpe-like pancakes made with flour, broth or milk, and eggs. When cooked at a pig-slaughter festival, they may also contain the animal's blood. A famous almond cake called "Tarta de Santiago" (St. James' cake) is a Galician sweet speciality produced mainly in Santiago de Compostela and throughout Galicia.
Galicia has 30 products with "Denominación de orixe" (D.O.), some of them with "Denominación de Orixe Protexida" (D.O.P.). D.O. and D.O.P. are part of a system of regulation of quality and geographical origin among Spain's finest producers. Galicia produces a number of high-quality Galician wines, including Albariño, Ribeiro, Ribeira Sacra, Monterrei and Valdeorras. The grape varieties used are local and rarely found outside Galicia and Northern Portugal. Just as notably from Galicia comes the spirit "Augardente"—the name means burning water—often referred to as Orujo in Spain and internationally or as caña in Galicia. This spirit is made from the distillation of the pomace of grapes.
The traditional music of Galicia and Asturias features highly distinctive folk styles that have some similarities with the neighboring area of Cantabria. The music is characterized by the use of bagpipes.
As with many other Romance languages, Galician-Portuguese emerged as a literary language in the Middle Ages, during the 12th and 13th centuries, when a rich lyric tradition developed, followed by a minor prose tradition, whilst being the predominant language used for legal and private texts till the 15th century. However, in the face of the hegemony of Castilian Spanish, during the so-called "Séculos Escuros" ("Dark Centuries") from 1530 to the late 18th century, it fell from major literary or legal written use.
As a literary language it was revived during the 18th and, most notably, the 19th century (the "Rexurdimento", "Resurgence") by such writers as Rosalía de Castro, Manuel Murguía, Manuel Leiras Pulpeiro, and Eduardo Pondal. In the 20th century, before the Spanish Civil War, the "Irmandades da Fala" ("Brotherhood of the Language") and "Grupo Nós" included such writers as Vicente Risco, Ramón Cabanillas and Castelao. Public use of Galician was largely suppressed during the Franco dictatorship but has been resurgent since the restoration of democracy. Contemporary writers in Galician include Xosé Luís Méndez Ferrín, Manuel Rivas, Chus Pato, and Suso de Toro.
In 2015 only five "corridas" took place within Galicia.
In addition, recent studies have found that 92% of Galicians are firmly against bullfighting, the highest rate in Spain. Despite this, popular associations such as "Galicia Mellor Sen Touradas" ("Galicia Better without Bullfights") have blamed politicians for lacking the commitment to abolish it, and have been very critical of local councils, especially those governed by the PP and PSOE, for paying subsidies for corridas. The provincial government of Pontevedra put an end to these subsidies and declared the province "free of bullfights". The provincial government of A Coruña approved a document supporting the abolition of these events.
Televisión de Galicia (TVG) is the autonomous community's public channel, which has broadcast since 24 July 1985 and is part of the Compañía de Radio-Televisión de Galicia (CRTVG). TVG broadcasts throughout Galicia and has two international channels, Galicia Televisión Europa and Galicia Televisión América, available throughout the European Union and the Americas through Hispasat. CRTVG also broadcasts a digital terrestrial television (DTT) channel known as tvG2 and is considering adding further DTT channels, with a 24-hour news channel projected for 2010.
Radio Galega (RG) is the autonomous community's public radio station and is part of CRTVG. Radio Galega began broadcasting 24 February 1985, with regular programming starting 29 March 1985. There are two regular broadcast channels: Radio Galega and Radio Galega Música. In addition, there is a DTT and internet channel, Son Galicia Radio, dedicated specifically to Galician music.
Galicia has several free and community radio stations. Cuac FM is the headquarters of the Community Media Network (which brings together non-profit media that serve their communities). CUAC FM (A Coruña), Radio Filispim (Ferrol), Radio Roncudo (Corme), Kalimera Radio (Santiago de Compostela), Radio Piratona (Vigo) and Radio Clavi (Lugo) are part of the Galician Network of Free and Community Radio Broadcasters (ReGaRLiC).
The most widely distributed newspaper in Galicia is "La Voz de Galicia", with 12 local editions and a national edition. Other major newspapers are "El Correo Gallego" (Santiago de Compostela), "Faro de Vigo" (Vigo), "Diario de Pontevedra" (Pontevedra), "El Progreso" (Lugo), "La Región" (Ourense), and "Galicia Hoxe" – the first daily newspaper to publish exclusively in Galician. Other newspapers are "Diario de Ferrol", the sports paper "DxT Campeón", "El Ideal Gallego" from A Coruña, the "Heraldo de Vivero", "Atlántico Diario" from Vigo and the "Xornal de Galicia".
Galicia has a long sporting tradition dating back to the early 20th century, when the majority of sports clubs in Spain were founded. The most popular and well-supported teams in the region are Celta Vigo and Deportivo La Coruña. When the two sides play, it is referred to as the Galician derby. Deportivo were champions of La Liga in the 1999–2000 season.
Pontevedra CF from Pontevedra and Racing Ferrol from Ferrol are two other notable clubs, currently playing at the third level, but nowadays the third most important football team of Galicia is CD Lugo, currently playing in the second division of La Liga (Liga Adelante). Similarly to Catalonia and the Basque Country, the Galician Football Federation also periodically fields a national team against international opposition. This causes some political controversy, because matches involving national football teams other than the official Spanish national team are seen as a threat to its status as the one and only national football team of the state. The policy of centralization in sport is very strong, as sport is systematically used as a patriotic device with which to build a symbol of the supposed unity of Spain, which is in fact a plurinational state.
Football aside, the most popular team sports in Galicia are futsal, handball and basketball. In basketball, Obradoiro CAB is the most successful team of note, and currently the only Galician team that plays in the Liga ACB; other teams are CB Breogan, Club Ourense Baloncesto and OAR Ferrol. In the sport of handball, Club Balonmán Cangas plays in the top-flight (Liga ASOBAL). The sport is particularly popular in the province of Pontevedra with the three other Galician teams in the top two divisions: SD Teucro (Pontevedra), Octavio Pilotes Posada (Vigo) and SD Chapela (Redondela).
In roller hockey, HC Liceo is the most successful Galician team in any sport, with numerous European and World titles. In futsal, the leading teams are Lobelle Santiago and Azkar Lugo.
Galicia is also known for its tradition of participation in water sports both at sea and in rivers; these include rowing, yachting, canoeing and surfing. Its athletes have regularly won medals in the Olympics; currently the most notable examples are David Cal, Carlos Pérez Rial and Fernando Echavarri.
Galician triathlon contenders Francisco Javier Gómez Noya and Iván Raña have been world champions. In 2006 the cyclist Oscar Pereiro won the Tour de France after the disqualification of American Floyd Landis, gaining the top position on the penultimate day of the race. Galicians are also prominent athletes in the sport of mountaineering—Chus Lago is the third woman to reach the summit of Everest without supplemental oxygen.
Since 2011, several Gaelic football teams have been set up in Galicia. The first was Fillos de Breogán (A Coruña), followed by Artabros (Oleiros), Irmandinhos (A Estrada), SDG Corvos (Pontevedra), and Suebia (Santiago de Compostela), with talk of creating a Galician league. Galicia also fielded a Gaelic football side (recognised as national by the GAA) that beat Brittany in July 2012, a result reported in the Spanish nationwide press.
Rugby is growing in popularity, although the success of local teams is hampered by the absence of experienced expat players from English-speaking countries typically seen at teams based on the Mediterranean coast or in the big cities. Galicia has a long established Rugby Federation that organises its own women's, children's and men's leagues. Galicia has also fielded a national side for friendly matches against other regions of Spain and against Portugal. A team of expat Galicians in Salvador, Brazil have also formed Galicia Rugby, a sister team of the local football club.
A golden chalice enclosed in a field of azure has been the symbol of Galicia since the 13th century. It originated as canting arms, owing to the phonetic similarity between the words "chalice" and "Galyce" ("Galicia" in the old Norman language); the first documented mention of this emblem is in "Segar's Roll", an English medieval roll of arms in which all the Christian kingdoms of 13th-century Europe are represented. In the following centuries the Galician emblem varied in shape and in the number of chalices (initially three, later one or five); it was not until the 16th century that the number was finally fixed at a single chalice. Centuries later, a field of crosses was gradually added to the azure background, and later still a silver host. Since then the emblem of the kingdom has remained essentially unchanged.
The ancient flag of the Kingdom of Galicia was based mainly on its coat of arms until the 19th century. However, when in 1833 the Government of Spain abolished the kingdom and divided it into four provinces, the Galician emblem and flag lost their legal status and international validity. It was not until the late 19th century that some Galician intellectuals (nationalist politicians and writers) began to use a new flag as a symbol of renewed national unity for Galicia. That flag, composed of a diagonal stripe over a white background, was designated the "official flag of Galicia" in 1984, after the fall of Franco's dictatorship. In addition, the Royal Academy of Galicia asked the Galician government to incorporate the ancient coat of arms of the kingdom into the modern flag, and it has appeared there ever since.
In addition to its coat of arms and flag, Galicia also has its own anthem. While the Kingdom of Galicia had for centuries a kind of unofficial anthem known as the "Solemn March of the Kingdom", the current Galician anthem was not created until 1907, although its composition had already begun in 1880. Titled "Os Pinos" ("The Pines"), the anthem's lyrics were written by Eduardo Pondal, one of the greatest modern Galician poets, and its music was composed by Pascual Veiga. Performed for the first time in 1907 in Havana, Cuba, by Galician emigrants, the anthem was banned by successive Spanish governments from 1927 until 1977, when it was officially established by the Galician authorities.
Galicia Peak in Vinson Massif, Antarctica is named after the autonomous community of Galicia.
G protein
G proteins, also known as guanine nucleotide-binding proteins, are a family of proteins that act as molecular switches inside cells, and are involved in transmitting signals from a variety of stimuli outside a cell to its interior. Their activity is regulated by factors that control their ability to bind to and hydrolyze guanosine triphosphate (GTP) to guanosine diphosphate (GDP). When they are bound to GTP, they are 'on', and, when they are bound to GDP, they are 'off'. G proteins belong to the larger group of enzymes called GTPases.
There are two classes of G proteins. The first function as monomeric small GTPases (small G-proteins), while the second function as heterotrimeric G protein complexes. The latter class of complexes is made up of "alpha" (α), "beta" (β) and "gamma" (γ) subunits. In addition, the beta and gamma subunits can form a stable dimeric complex referred to as the beta-gamma complex.
Heterotrimeric G proteins located within the cell are activated by G protein-coupled receptors (GPCRs) that span the cell membrane. Signaling molecules bind to a domain of the GPCR located outside the cell, and an intracellular GPCR domain then in turn activates a particular G protein. Some active-state GPCRs have also been shown to be "pre-coupled" with G proteins. The G protein activates a cascade of further signaling events that finally results in a change in cell function. G protein-coupled receptor and G proteins working together transmit signals from many hormones, neurotransmitters, and other signaling factors. G proteins regulate metabolic enzymes, ion channels, transporter proteins, and other parts of the cell machinery, controlling transcription, motility, contractility, and secretion, which in turn regulate diverse systemic functions such as embryonic development, learning and memory, and homeostasis.
G proteins were discovered when Alfred G. Gilman and Martin Rodbell investigated stimulation of cells by adrenaline. They found that when adrenaline binds to a receptor, the receptor does not stimulate enzymes (inside the cell) directly. Instead, the receptor stimulates a G protein, which then stimulates an enzyme. An example is adenylate cyclase, which produces the second messenger cyclic AMP. For this discovery, they won the 1994 Nobel Prize in Physiology or Medicine.
Nobel prizes have been awarded for many aspects of signaling by G proteins and GPCRs. These include receptor antagonists, neurotransmitters, neurotransmitter reuptake, G protein-coupled receptors, G proteins, second messengers, the enzymes that trigger protein phosphorylation in response to cAMP, and consequent metabolic processes such as glycogenolysis.
Prominent examples include (in chronological order of awarding):
G proteins are important signal transducing molecules in cells. "Malfunction of GPCR [G Protein-Coupled Receptor] signaling pathways are involved in many diseases, such as diabetes, blindness, allergies, depression, cardiovascular defects, and certain forms of cancer. It is estimated that about 30% of the modern drugs' cellular targets are GPCRs." The human genome encodes roughly 800 G protein-coupled receptors, which detect photons of light, hormones, growth factors, drugs, and other endogenous ligands. Approximately 150 of the GPCRs found in the human genome still have unknown functions.
Whereas G proteins are activated by G protein-coupled receptors, they are inactivated by RGS proteins (for "Regulator of G protein signalling"). Receptors stimulate GTP binding (turning the G protein on). RGS proteins stimulate GTP hydrolysis (creating GDP, thus turning the G protein off).
All eukaryotes use G proteins for signaling and have evolved a large diversity of G proteins. For instance, humans encode 18 different Gα proteins, 5 Gβ proteins, and 12 Gγ proteins.
G protein can refer to two distinct families of proteins. Heterotrimeric G proteins, sometimes referred to as the "large" G proteins, are activated by G protein-coupled receptors and are made up of alpha (α), beta (β), and gamma (γ) subunits. "Small" G proteins (20–25 kDa) belong to the Ras superfamily of small GTPases. These proteins are homologous to the alpha (α) subunit found in heterotrimers, but are in fact monomeric, consisting of only a single unit. However, like their larger relatives, they also bind GTP and GDP and are involved in signal transduction.
Different types of heterotrimeric G proteins share a common mechanism. They are activated in response to a conformational change in the GPCR, exchanging GDP for GTP, and dissociating in order to activate other proteins in a particular signal transduction pathway. The specific mechanisms, however, differ between protein types.
Receptor-activated G proteins are bound to the inner surface of the cell membrane. They consist of the Gα and the tightly associated Gβγ subunits. There are many classes of Gα subunits: Gsα (G stimulatory), Giα (G inhibitory), Goα (G other), Gq/11α, and G12/13α are some examples. They behave differently in the recognition of the effector molecule, but share a similar mechanism of activation.
When a ligand activates the G protein-coupled receptor, it induces a conformational change in the receptor that allows the receptor to function as a guanine nucleotide exchange factor (GEF) that exchanges GDP for GTP – thus turning the GPCR "on". The GTP (or GDP) is bound to the Gα subunit in the traditional view of heterotrimeric GPCR activation. This exchange triggers the dissociation of the Gα subunit (which is bound to GTP) from the Gβγ dimer and the receptor as a whole. However, models which suggest molecular rearrangement, reorganization, and pre-complexing of effector molecules are beginning to be accepted. Both Gα-GTP and Gβγ can then activate different "signaling cascades" (or "second messenger pathways") and effector proteins, while the receptor is able to activate the next G protein.
The Gα subunit will eventually hydrolyze the attached GTP to GDP through its inherent enzymatic activity, allowing it to re-associate with Gβγ and start a new cycle. A group of proteins called Regulators of G protein signalling (RGSs) act as GTPase-activating proteins (GAPs) specific for Gα subunits. These proteins accelerate the hydrolysis of GTP to GDP, thus terminating the transduced signal. In some cases, the effector "itself" may possess intrinsic GAP activity, which then can help deactivate the pathway. This is true in the case of phospholipase C-beta, which possesses GAP activity within its C-terminal region. This is an alternate form of regulation for the Gα subunit. Such Gα GAPs do not have catalytic residues (specific amino acid sequences) to activate the Gα protein. They work instead by lowering the required activation energy for the reaction to take place.
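The activation and hydrolysis cycle just described behaves like a two-state molecular switch. The sketch below models it in Python purely for illustration; the class and method names (`GAlpha`, `activate_by_gpcr`, `hydrolyze`) are invented for this example, and the relative rates are arbitrary placeholders rather than measured values.

```python
class GAlpha:
    """Toy model of a Gα subunit: 'on' when GTP-bound, 'off' when GDP-bound."""

    def __init__(self):
        self.nucleotide = "GDP"          # resting state
        self.bound_to_beta_gamma = True  # Gα-GDP associates with the Gβγ dimer

    def activate_by_gpcr(self):
        """A ligand-bound GPCR acts as a GEF: it exchanges GDP for GTP."""
        self.nucleotide = "GTP"
        self.bound_to_beta_gamma = False  # Gα-GTP dissociates from Gβγ

    def hydrolyze(self, rgs_present=False):
        """Intrinsic GTPase activity turns the switch off; an RGS protein
        (acting as a GAP) accelerates hydrolysis. Returns a relative rate
        (arbitrary numbers chosen only to show the speed-up)."""
        rate = 10.0 if rgs_present else 1.0
        self.nucleotide = "GDP"
        self.bound_to_beta_gamma = True   # re-association; the cycle can restart
        return rate


g = GAlpha()
g.activate_by_gpcr()
assert g.nucleotide == "GTP" and not g.bound_to_beta_gamma  # signaling ('on')
g.hydrolyze(rgs_present=True)
assert g.nucleotide == "GDP" and g.bound_to_beta_gamma      # terminated ('off')
```

The point of the sketch is simply that the receptor and the RGS protein act at opposite steps of the same loop: one loads GTP, the other speeds up its removal.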
Gαs activates the cAMP-dependent pathway by stimulating the production of cyclic AMP (cAMP) from ATP. This is accomplished by direct stimulation of the membrane-associated enzyme adenylate cyclase. cAMP can then act as a second messenger that goes on to interact with and activate protein kinase A (PKA). PKA can phosphorylate a myriad of downstream targets.
The cAMP-dependent pathway is used as a signal transduction pathway for many hormones including:
Gαi inhibits the production of cAMP from ATP.
e.g. somatostatin, prostaglandins
Gαq/11 stimulates the membrane-bound phospholipase C beta, which then cleaves PIP2 (a minor membrane phosphoinositol) into two second messengers, IP3 and diacylglycerol (DAG).
The Inositol Phospholipid Dependent Pathway is used as a signal transduction pathway for many hormones including:
Small GTPases, also known as small G-proteins, likewise bind GTP and GDP and are involved in signal transduction. These proteins are homologous to the alpha (α) subunit found in heterotrimers, but exist as monomers. They are small (20 to 25 kDa) proteins that bind to guanosine triphosphate (GTP). This family of proteins is homologous to the Ras GTPases and is also called the Ras superfamily GTPases.
In order to associate with the inner leaflet of the plasma membrane, many G proteins and small GTPases are lipidated, that is, covalently modified with lipid extensions. They may be myristoylated, palmitoylated or prenylated.
Gary Gygax
Ernest Gary Gygax ( ; July 27, 1938 – March 4, 2008) was an American game designer and author best known for co-creating the pioneering role-playing game "Dungeons & Dragons" ("D&D") with Dave Arneson.
In the 1960s, Gygax created an organization of wargaming clubs and founded the Gen Con gaming convention. In 1971, he helped develop "Chainmail", a miniatures wargame based on medieval warfare. He co-founded the company Tactical Studies Rules (TSR, Inc.) with childhood friend Don Kaye in 1973. The following year, he and Arneson created "D&D", which expanded on Gygax's "Chainmail" and included elements of the fantasy stories he loved as a child. In the same year, he founded "The Dragon", a magazine based around the new game. In 1977, Gygax began work on a more comprehensive version of the game, called "Advanced Dungeons & Dragons". Gygax designed numerous manuals for the game system, as well as several pre-packaged adventures called "modules" that gave a person running a "D&D" game (the "Dungeon Master") a rough script and ideas on how to run a particular gaming scenario. In 1983, he worked to license the "D&D" product line into the successful "D&D" cartoon series.
After leaving TSR in 1986 over issues with its new majority owner, Gygax continued to create role-playing game titles independently, beginning with the multi-genre "Dangerous Journeys" in 1992. He designed another gaming system called "Lejendary Adventure", released in 1999. In 2005, Gygax was involved in the "Castles & Crusades" role-playing game, which was conceived as a hybrid between the third edition of "D&D" and the original version of the game conceived by Gygax.
Gygax was married twice and had six children. In 2004, Gygax suffered two strokes, narrowly avoided a subsequent heart attack, was then diagnosed with an abdominal aortic aneurysm, and died in March 2008.
Gygax was born in Chicago, the son of Almina Emelie "Posey" (Burdick) and Swiss immigrant and former Chicago Symphony Orchestra violinist Ernst Gygax. He was named Ernest after his father, but he was commonly known as Gary, the middle name given to him by his mother after the actor Gary Cooper. The family lived on Kenmore Avenue, close enough to Wrigley Field that he could hear the roar of the crowds watching the Chicago Cubs play. At age 7, he became a member of a small group of friends who called themselves the "Kenmore Pirates". In 1946, after the Kenmore Pirates were involved in a fracas with another gang of boys, his father decided to move the family to Posey's family home in Lake Geneva, Wisconsin, where Posey's family had settled in the early 19th century, and where Gary's grandparents still lived.
In this new setting, Gygax soon made friends with several of his peers, including Don Kaye and Mary Jo Powell. During his childhood and teen years, he developed a love of games and an appreciation for fantasy and science fiction literature. When he was five, he played card games such as pinochle and then board games such as chess. At the age of ten, he and his friends played the sort of make-believe games that eventually came to be called "live action role-playing games" with one of them acting as a referee. His father introduced him to science fiction and fantasy through pulp novels. His interest in games, combined with an appreciation of history, eventually led Gygax to begin playing miniature war games in 1953 with his best friend Don Kaye. As teenagers Gygax and Kaye designed their own miniatures rules for toy soldiers with a large collection of and figures, where they used "ladyfingers" (small firecrackers) to simulate explosions.
By the time he reached his teens, Gygax had a voracious appetite for pulp fiction authors such as Robert Howard, Jack Vance, Fritz Leiber, H. P. Lovecraft, and Edgar Burroughs. Gygax was a mediocre student, and in 1956, a few months after his father died, dropped out of high school in his junior year. He briefly joined the Marines, but after being diagnosed with walking pneumonia, was given a medical discharge and moved back home with his mother. From there, he commuted to a job as a shipping clerk with Kemper Insurance Co. in Chicago. Shortly after his return, a friend introduced him to Avalon Hill's new wargame "Gettysburg", and Gygax was soon obsessed with the game, often playing marathon sessions once a week or more. It was also from Avalon Hill that he ordered the first blank hex mapping sheets that were available, which he then employed to design his own games.
At about the same time that he discovered "Gettysburg", his mother re-introduced him to Mary Jo Powell, who had left Lake Geneva as a child and had just returned. Gygax was smitten with the woman and, after a short courtship, persuaded her to marry him, despite the fact that he was only 19. This caused some friction with his best friend Don Kaye, who had also been wooing Mary Jo, to the point where Kaye refused to attend Gygax's wedding. (Kaye and Gygax reconciled after the wedding.)
The young couple moved to Chicago where Gygax continued as a shipping clerk at Kemper Insurance, and also found Mary Jo a job there too. (The company laid her off when she became pregnant with their first child.) He also took anthropology classes at the University of Chicago. Gygax also volunteered as a Republican precinct captain during the 1960 presidential election, and observed many infractions by his Democratic counterpart. When he threatened to report these, he was offered a full scholarship to the University of Chicago if he kept silent. Although Gygax ultimately did not report the infractions, since he felt nothing would be done, he also did not accept the scholarship.
Despite his commitments to his job, raising a family, school, and his political volunteerism, Gygax continued to play wargames. It reached the point that Mary Jo, pregnant with their second child, believed he was having an affair and confronted him in a friend's basement only to discover him and his friends sitting around a map-covered table.
In 1962, Gygax got a job as an insurance underwriter at Fireman's Fund Insurance Co. His family continued to grow, and after his third child was born, he decided to move his family back to Lake Geneva. Except for a few months he would spend in Clinton, Wisconsin, following his divorce, and his time in Hollywood while he was the head of TSR's entertainment division, Lake Geneva would be his home for the rest of his life.
By 1966, Gygax was active in the wargame hobby world and was writing many magazine articles on the subject. Gygax learned about H. G. Wells' "Little Wars", a book of rules for playing military miniatures wargames, and Fletcher Pratt's "Naval Wargame" book. Gygax later looked for innovative ways to generate random numbers, and he used not only common, six-sided dice, but dice of all five Platonic solid shapes, which he discovered in a school supply catalog.
Gygax cited his influences as Robert E. Howard, L. Sprague de Camp, Jack Vance, Fletcher Pratt, Fritz Leiber, Poul Anderson, A. Merritt, and H. P. Lovecraft.
In 1967, Gygax co-founded the International Federation of Wargamers (IFW) with Bill Speer and Scott Duncan. The IFW grew rapidly, especially by assimilating several pre-existing wargaming clubs, and aimed to promote interest in wargames of all periods. It provided a forum for wargamers, via its newsletters and societies, which enabled them to form local groups and share rules. In 1967, Gygax organized a 20-person gaming meet in the basement of his home; this event would later be referred to as "Gen Con 0". In 1968, Gygax rented Lake Geneva's vine-covered Horticultural Hall for () to hold the first Lake Geneva Convention, also known as the Gen Con gaming convention for short. Gen Con is now one of North America's largest annual hobby-game gatherings. Gygax met Dave Arneson, the future co-creator of "D&D", at the second Gen Con in August 1969.
Together with Don Kaye, Mike Reese, and Leon Tucker, Gygax created a military miniatures society called Lake Geneva Tactical Studies Association (LGTSA) in 1970, with its first headquarters in Gygax's basement. Shortly thereafter in 1970, Robert Kuntz and Gygax founded the Castle & Crusade Society of the IFW.
Late in October 1970, Gygax lost his job at the insurance company after almost nine years. Unemployed and now with a family of five children — Ernest ("Ernie"), Lucion ("Luke"), Heidi, Cindy, and Elise—he tried to use his enthusiasm for games to make a living by designing board games for commercial sale. This clearly proved to be unsustainable when he only grossed $882 in 1971 (). He began cobbling shoes in his basement, which provided him with a steady income and gave him more time for pursuing his interest in game development. In 1971, he began doing some editing work at Guidon Games, a publisher of wargames, for which he produced the board games "Alexander the Great" and "". Early that same year, Gygax published "Chainmail", a miniatures wargame that simulated medieval-era tactical combat, which he had originally written with hobby-shop owner Jeff Perren. The "Chainmail" medieval miniatures rules were originally published in the Castle & Crusade Society's fanzine "The Domesday Book". Guidon Games hired Gygax to produce a "Wargaming with Miniatures" series of games, and a new edition of "Chainmail" (1971) was the first book in the series. The first edition of "Chainmail" included a fantasy supplement to the rules. These comprised a system for warriors, wizards, and various monsters of non-human races drawn from the works of J. R. R. Tolkien and other sources. For a small publisher like Guidon Games, "Chainmail" was relatively successful, selling 100 copies per month.
Gygax also collaborated on Tractics with Mike Reese & Leon Tucker, his contribution being the change to a 20-sided spinner or a coffee can with 20 numbered poker chips (or eventually 20-sided dice) to decide combat resolutions instead of the standard 6-sided dice. He also collaborated with Dave Arneson on the Napoleonic naval wargame "Don't Give Up the Ship!"
Dave Arneson adapted the "Chainmail" rules for his fantasy "Blackmoor" campaign. In the fall of 1972, around late November, Dave Arneson and friend David Megarry, inventor of the "Dungeon!" board game, traveled to Lake Geneva to showcase their respective games to Gygax, in his role as a representative of Guidon Games. Gygax saw potential in both games, and was especially excited by Arneson's role-playing game. Gygax and Arneson immediately started to collaborate on creating "The Fantasy Game", the role-playing game which would evolve into "Dungeons & Dragons".
Two weeks after Arneson's "Blackmoor" demonstration, Gygax had produced a 50-page set of rules, and was ready to try it on his two oldest children, Ernie and Elise, in a setting he called "Greyhawk". This group rapidly expanded to include Don Kaye, Rob Kuntz, and eventually a large circle of players. Gygax sent the 50 pages of rules to his wargaming contacts and asked them to playtest the new game. Gygax and Arneson continued to trade notes about their respective campaigns. The final draft, however, contained changes that Arneson had not vetted, and Gygax's vision differed from Arneson's preferences on some rule details.
Based on the feedback he received, Gygax created a 150-page revision of the rules by mid-1973. Several aspects of the system governing magic in the game were inspired by "The Dying Earth" stories of fantasy author Jack Vance (notably the fact that "magic-users" in the game forget the spells that they have learned immediately upon casting them, and must re-study them in order to cast them again), and the system as a whole drew upon the work of authors such as Robert E. Howard, L. Sprague de Camp, Michael Moorcock, Roger Zelazny, Poul Anderson, Tolkien, Bram Stoker, and others. He asked Guidon Games to publish it, but the 3-volume rule set in a labeled box was beyond the scope of the small publisher. Gygax attempted to pitch the game to Avalon Hill, but the largest company in wargaming did not understand the new concept of role-playing, and turned down his offer.
By 1974, Gygax's Greyhawk group, which had started off with himself, Ernie Gygax, Don Kaye, Rob Kuntz, and Terry Kuntz, had grown to over 20 people, with Rob Kuntz becoming the co-dungeon-master so that each of them could referee groups of only a dozen players.
Gygax left Guidon Games in 1973 and in October, with Don Kaye as a partner, founded Tactical Studies Rules, later known as TSR, Inc. The two men each invested in the venture — Kaye borrowing his share against his life insurance policy — in order to print a thousand copies of the "Dungeons & Dragons" boxed set. They also tried to raise money by immediately publishing a set of wargame rules called "Cavaliers and Roundheads", but sales were poor; when the printing costs for the thousand copies of "Dungeons & Dragons" rose from $2,000 to $2,500, they still did not have enough capital to publish it. Worried that the other playtesters and wargamers now familiar with Gygax's rules would bring a similar product to the market first, the two accepted an offer in December 1973 by game-playing acquaintance Brian Blume to invest $2,000 in TSR to become an equal one-third partner. (Gygax accepted Blume's offer right away. Kaye was less enthusiastic, and after a week to consider the offer, he questioned Blume closely before acquiescing.) Blume's investment finally brought the financing that enabled them to publish "D&D". Gygax worked on rules for more miniatures and tabletop battle games including "Classic Warfare" (Ancient Period: 1500 BC to 500 AD), and "Warriors of Mars".
The first commercial version of "D&D" was released by TSR in January 1974 as a boxed set. The hand-assembled print run of 1,000 copies, put together in Gygax's home, sold out in less than a year. (In 2018, a first printing of the boxed set sold at auction for more than $20,000.)
At the end of 1974, with sales of D&D skyrocketing, the future looked bright for Gygax and Kaye, who were only 36. However, in January 1975, Kaye unexpectedly died of a heart attack. He had not made any specific provision in his will regarding his one-third share of the company, simply leaving his entire estate to his wife Donna. Although she had worked briefly for TSR as an accountant, she had not shared her husband's enthusiasm for gaming, and made it clear that she would have nothing to do with managing the company. Gygax characterized her as "less than personable... After Don died she dumped all the Tactical Studies Rules materials off on my front porch. It would have been impossible to manage a business with her involved as a partner." After Kaye's death, TSR was forced to relocate from Kaye's dining room to Gygax's basement. In July 1975, Gygax and Blume reorganized their company from a partnership to a corporation called TSR Hobbies. Gygax owned 150 shares, Blume owned the other 100 shares, and both had the option to buy up to 700 shares at any time in the future. But TSR Hobbies had nothing to publish—D&D was still owned by the three-way partnership of TSR, and neither Gygax nor Blume had the money to buy out the shares owned by Kaye's wife. Blume persuaded a reluctant Gygax to allow his father, Melvin Blume, to buy Donna's shares, and those were converted to 200 shares in TSR Hobbies. In addition, Brian bought another 140 shares. These purchases reduced Gygax from the majority shareholder in control of the company to minority shareholder; he effectively became the Blumes' employee.
Gygax wrote the supplements "Greyhawk", "Eldritch Wizardry", and "Swords & Spells" for the original "D&D" game. With Brian Blume, Gygax also designed the wild-west-oriented role-playing game "Boot Hill". In the same year, Gygax created the magazine "The Strategic Review" with himself as editor. But wanting a more industry-wide periodical, he hired Tim Kask as TSR's first employee to change this magazine to the fantasy periodical "The Dragon", with Gygax as writer, columnist, and publisher (from 1978 to 1981). "The Dragon" debuted in June 1976, and Gygax commented on its success years later: "When I decided that "The Strategic Review" was not the right vehicle, hired Tim Kask as a magazine editor for Tactical Studies Rules, and named the new publication he was to produce "The Dragon", I thought we would eventually have a great periodical to serve gaming enthusiasts worldwide... At no time did I ever contemplate so great a success or so long a lifespan."
In 1976, TSR moved out of Gygax's house into its first professional home, known as "The Dungeon Hobby Shop". Dave Arneson was hired as part of the creative staff, but was let go after only ten months, another sign that Gygax and Arneson still had creative differences over D&D.
The "Dungeons & Dragons Basic Set", released in 1977, was an introductory version of the original "D&D" geared towards new players and edited by J. Eric Holmes. In the same year, TSR Hobbies released a completely new and complex version of "D&D", "Advanced Dungeons & Dragons" ("AD&D"). The "Monster Manual", released later that year, became the first supplemental rule book of the new system, and many more followed. The "AD&D" rules were not fully compatible with those of the "D&D Basic Set" and as a result, "D&D" and "AD&D" became distinct product lines. Splitting the game lines created a further rift between Gygax and Arneson; although Arneson received a 10% royalty on sales of all "D&D" products, Gygax refused to pay him royalties on "AD&D" books, claiming it was a new and different property. In 1979, Arneson filed a lawsuit against TSR; it was eventually settled in March 1981 with the agreement that Arneson would receive a 2.5% royalty on all AD&D products, giving him a very comfortable six-figure annual income for the next decade.
Gygax wrote the "AD&D" hardcovers "Players Handbook", "Dungeon Masters Guide", "Monster Manual," and "Monster Manual II". Gygax also wrote or co-wrote numerous "AD&D" and basic "D&D" adventure modules, including "The Keep on the Borderlands", "Tomb of Horrors", "Expedition to the Barrier Peaks", "The Temple of Elemental Evil", "The Forgotten Temple of Tharizdun", "Mordenkainen's Fantastic Adventure", "Isle of the Ape", and all seven of the modules later combined into "Queen of the Spiders". In 1980, Gygax's long-time campaign setting of Greyhawk was published in the form of the "World of Greyhawk Fantasy World Setting" folio, which was expanded in 1983 into the "World of Greyhawk Fantasy Game Setting" boxed set. Sales of the "D&D" game reached in 1980. Gygax also provided assistance on the "Gamma World" science fantasy role-playing game in 1981 and co-authored the "Gamma World" adventure "Legion of Gold".
In 1979, a Michigan State University student, James Dallas Egbert III, allegedly disappeared into the school's steam tunnels while playing a live-action version of "D&D". In fact, Egbert was discovered in Louisiana several weeks later, but negative mainstream media attention focused on "D&D" as the cause. In 1982, Patricia Pulling's son killed himself. Blaming "D&D" for her son's suicide, Pulling formed an organization named B.A.D.D. (Bothered About Dungeons & Dragons) to attack the game and the company that produced it. Gygax defended the game on a segment of "60 Minutes", which aired in 1985. When death threats started arriving at the TSR office, Gygax hired a bodyguard. Despite the negative publicity, or perhaps because of it, TSR's annual "D&D" sales increased in 1982, and in January 1983, "The New York Times" speculated that "D&D" might become "the great game of the 1980s" in the same manner that "Monopoly" was emblematic of the Great Depression.
Brian Blume persuaded Gygax to allow Brian's brother Kevin to purchase Melvin Blume's shares. This gave the Blume brothers a controlling interest, and by 1981, Gygax and the Blumes were increasingly at loggerheads over management of the company. Gygax's frustrations at work, and increased prosperity from his generous royalty cheques brought a number of changes to his personal life. He and Mary Jo had been active members of the local Jehovah's Witnesses, but others in the congregation already felt uneasy about Gygax's smoking and drinking; his connection to the "satanic" game of D&D caused enough friction that the Gygaxes finally disassociated themselves from Jehovah's Witnesses. Mary Jo, continuing to resent the amount of time her husband spent "playing games", had begun to drink excessively, and the couple argued frequently. Gygax, who had started smoking marijuana when he lost his insurance job in 1970, started to use cocaine, and had a number of extramarital affairs. Finally in 1983, the two had an acrimonious divorce.
At the same time, the Blumes, wanting to get Gygax out of Lake Geneva so they could manage the company without his "interference", split TSR Hobbies into TSR, Inc., and TSR Entertainment, Inc. Gygax became the President of TSR Entertainment, Inc., and the Blumes sent him to Hollywood to develop TV and movie opportunities. He became co-producer of the licensed "D&D" cartoon series for CBS, which led its time slot for two years.
Gygax, newly single, took advantage of his time on the West Coast, renting an immense mansion, increasing his cocaine use, and spending time with several young starlets.
Because he was occupied with getting a movie off the ground in Hollywood, Gygax had to leave the day-to-day operations of TSR to Kevin and Brian Blume. In 1984, after months of negotiation, he reached an agreement with Orson Welles to star in a D&D movie, and John Boorman to act as producer and director. But almost at the same time, he received word that back in Lake Geneva, TSR had run into severe financial difficulties and Kevin Blume was shopping the company to potential buyers.
Gygax immediately discarded his movie ambitions—his "D&D" movie would never be made—and flew back to Lake Geneva. There, he discovered to his shock that although TSR was the industry leader, it was barely breaking even; in fact, it was in debt and teetering on the edge of insolvency. After investigating the reasons why, Gygax brought his findings to the five other company directors. (Since 1982, TSR, Inc. had conformed to the recommendations of the American Management Association by adding three "outside" directors to the board, increasing its size to six.) Gygax charged that the financial crisis was due to mismanagement by Kevin Blume: excess inventory, overstaffing, too many company cars, and some questionable (and expensive) projects such as dredging up a 19th-century shipwreck. Gygax demanded that Kevin Blume be removed as company president, and the three outside directors agreed with him. However, the board still believed the financial problems were terminal and the company needed to be sold. In an effort to stay in control, in March 1985, Gygax exercised his 700-share stock option, giving him just over 50% control. He appointed himself president and CEO, and rather than selling the company, he took steps to produce new revenue-generating products. To that end, he contacted Dave Arneson with a view to producing some Blackmoor material. He also bet heavily on a new AD&D book, "Unearthed Arcana", a compilation of material culled from "Dragon" magazine articles. And he quickly wrote a novel set in his Greyhawk setting, "Saga of Old City", featuring a protagonist called Gord the Rogue. In order to bring some financial stability to TSR, he hired a company manager, Lorraine Williams.
When "Unearthed Arcana" was released in July, Gygax's bet paid off, as the new book sold 90,000 copies in the first month. His novel also sold well, and he immediately published a sequel, "Artifact of Evil". The financial crisis had been averted, but Gygax had paved the way for his own downfall. In October 1985, the new manager, Lorraine Williams, revealed that she had purchased all of the shares of Kevin and Brian Blume—after Brian had triggered his own 700-share option. Williams was now the majority shareholder, and replaced Gygax as president and CEO. She also made it clear that Gygax would be making no further creative contributions to TSR. Several of his projects were immediately shelved and never published. Gygax took TSR to court in a bid to block the Blumes' sale of their shares to Williams, but he lost.
Sales of "D&D" reached in 1985, but Gygax, seeing his future at TSR as untenable, resigned all positions with TSR, Inc. in October 1986, and settled his disputes with TSR in December 1986. By the terms of his settlement with TSR, Gygax kept the rights to Gord the Rogue as well as all "D&D" characters whose names were anagrams or plays on his own name (for example, Yrag and Zagyg). However, he lost the rights to all his other work, including the "World of Greyhawk" and the names of all the characters he had ever used in TSR material, such as Mordenkainen, Robilar, and Tenser.
Immediately after leaving TSR, Gygax was approached by a wargaming acquaintance, Forrest Baker, who had done some consulting work for TSR in 1983 and 1984. Gygax, who was tired of company management, was simply looking for some way to market more of his Gord the Rogue novels, but Baker had a vision for a new gaming company. He promised that he would handle the business end, while Gygax would handle the creative projects. Baker also guaranteed that, using Gygax's name, he would be able to bring in one to two million dollars of investment. Gygax decided this was a good opportunity, and in October 1986, New Infinities Productions, Inc. (NIPI) was publicly announced. To help him with the creative work, Gygax poached Frank Mentzer and "Dragon" magazine editor Kim Mohan from TSR. But before a single product was released, Forrest Baker left NIPI when his promised outside investment of one to two million dollars failed to materialize.
Against his will, Gygax was back in charge again; he immediately looked for a quick product to get NIPI off the ground. He had retained the rights to Gord the Rogue as part of his severance agreement with TSR, so he licensed Greyhawk from TSR and started writing new novels beginning with "Sea of Death" (1987); sales were brisk, and Gygax's Gord the Rogue novels ended up keeping New Infinities in business.
Gygax brought in Don Turnbull from Games Workshop to manage the company, then worked with Mohan and Mentzer on a science fiction-themed RPG, "Cyborg Commando", which was published in 1987. However, sales of the new game were not brisk, and it received an overwhelmingly negative reception. NIPI was still dependent on Gord the Rogue.
Mentzer and Mohan also wrote a series of generic RPG adventures called "Gary Gygax Presents Fantasy Master". They also began working on a third line of products, which began with an adventure written by Mentzer called "The Convert" (1987); Mentzer had written the adventure as an RPGA tournament for "D&D", but TSR was not interested in publishing it. Mentzer got verbal permission to publish it with New Infinities, but since the permission was not in writing TSR filed an injunction to prevent the adventure's sale, although the injunction was later lifted. The legal costs further drained NIPI of capital.
During all of this drama, Gygax became a father again. Over the past year, he had formed a romantic relationship with Gail Carpenter, his former assistant at TSR. In November 1986, she gave birth to Gygax's sixth child, Alex. Biographer Michael Witwer believes the birth of Alex forced Gygax to reconsider the equation of work, gaming, and family that, up until this time, had been dominated by work and gaming. "Gary, keenly aware that he had made mistakes as a father and husband in the past, was determined not to make them again... Gary was also a realist, and knew what good fatherhood would demand, especially at his age." On August 15, 1987, on what would have been his parents' 50th wedding anniversary, Gygax married Gail Carpenter.
During 1987 and 1988, Gygax worked with Flint Dille on the "Sagard the Barbarian" books, as well as "Role-Playing Mastery" and its sequel, "Master of the Game". He also wrote two more Gord the Rogue novels, "City of Hawks" (1987), and "Come Endless Darkness" (1988). However, by 1988, TSR had rewritten the setting for the world of Greyhawk, and Gygax was not happy with the new direction in which TSR was taking "his" creation. In a literary declaration that his old world was dead, and wanting to make a clean break with all things Greyhawk, Gygax destroyed his version of Oerth in the final Gord the Rogue novel, "Dance of Demons".
With the Gord the Rogue novels finished, NIPI's main source of steady income dried up. The company needed a new product. Gygax announced in 1988 in a company newsletter that he and Rob Kuntz, his co-Dungeon Master during the early days of the Greyhawk campaign, were working as a team again. This time they would create a new multi-genre fantasy RPG called "Infinite Adventures", which would be supported by different gamebooks for different genres. This line would detail the Castle and City of Greyhawk as Gygax and Kuntz had originally envisioned them, now called "Castle Dunfalcon".
However, before work on this project could commence, NIPI ran out of money, was forced into bankruptcy, and was dissolved in 1989.
After NIPI folded, Gygax decided to create an entirely new RPG called "The Carpenter Project", one considerably more complex and "rule heavy" than his original and relatively simple "D&D" system, which had been encompassed by a mere 150 typewritten pages. He also wanted to create a horror setting for the new RPG called "Unhallowed". He began working on the RPG and the setting with the help of games designer Mike McCulley. Game Designers' Workshop became interested in publishing the new system, and it also drew the attention of JVC and NEC, who were looking for a new RPG system and setting to turn into a series of computer games. NEC and JVC were not interested in horror though, and work on the "Unhallowed" setting was shelved in favour of a fantasy setting called "Mythus". JVC also wanted a name change for the RPG, favouring "Dangerous Dimensions" over "The Carpenter Project". Work progressed favourably until March 1992, when TSR filed an injunction against "Dangerous Dimensions", claiming the name and initials were too similar to "Dungeons & Dragons". Gygax, with the approval of NEC and JVC, quickly changed the name to "Dangerous Journeys", and work on the new game continued.
The marketing strategy for "Dangerous Journeys: Mythus" was multi-pronged: in addition to the RPG and setting to be published by Game Designers' Workshop, and the "Mythus" computer game being prepared by NEC and JVC, there would also be a series of books based on the Mythus setting written by Gygax. So in addition to his work on the RPG and the "Mythus" setting, Gygax wrote three novels, released under publisher Penguin/Roc and later reprinted by Paizo Publishing: "The Anubis Murders", "The Samarkand Solution", and "Death in Delhi".
In late 1992, the "Dangerous Journeys" RPG was released by Game Designers' Workshop, but TSR immediately applied for an injunction against the entire "Dangerous Journeys" RPG and the "Mythus" setting, arguing that "Dangerous Journeys" was based on "D&D" and "AD&D". Although the injunction failed, TSR moved forward with litigation. Gygax believed the legal action was without merit and fuelled by Lorraine Williams' personal enmity, but NEC and JVC both withdrew from the project, killing the "Mythus" computer game. By 1994, the legal costs associated with many months of pretrial discovery had drained all of Gygax's resources; believing that TSR was also suffering, Gygax offered to settle. In the end, TSR paid Gygax for the complete rights to "Dangerous Journeys" and "Mythus". Although Gygax was well compensated for his years of work on "Dangerous Journeys" and "Mythus", TSR immediately and permanently shelved them both.
In 1995, Gygax began work on a new computer role-playing game called "Lejendary Adventures". In contrast to the rules-heavy "Dangerous Journeys", this new system was a return to simple and basic rules. Although he was not able to successfully release a "Lejendary Adventures" computer game, Gygax decided instead to publish it as a tabletop game.
Meanwhile, in 1996 the games industry was rocked by the news that TSR had run into insoluble financial problems and had been bought by Wizards of the Coast. While WotC was busy refocussing TSR's products, Christopher Clark of Inner City Games Designs approached Gygax in 1997 to suggest that they produce some adventures to sell in game stores while TSR was otherwise occupied; the result was a pair of fantasy adventures published by Inner City Games: "A Challenge of Arms" (1998) and "The Ritual of the Golden Eyes" (1999). Gygax introduced some investors to Clark's publication setup, and although the investors were not willing to fund publication of "Lejendary Adventures", Clark and Gygax formed a partnership called Hekaforge Productions. Gygax was thus able to return to publish "Lejendary Adventures" in 1999. The game was published as a three-volume set: "The Lejendary Rules for All Players" (1999), "Lejend Master's Lore" (2000) and "Beasts of Lejend" (2000).
The new owner of TSR, WotC's Peter Adkison, clearly did not harbor any of Lorraine Williams' ill-will toward Gygax: Adkison purchased all of Gygax's residual rights to D&D and AD&D for a six-figure sum. Although Gygax did not write any new supplements or books for TSR or WotC, he did agree to write the preface to the 1998 adventure "Return to the Tomb of Horrors", a paean to Gygax's original AD&D adventure "Tomb of Horrors". He also returned to the pages of "Dragon" magazine, writing the "Up on a Soapbox" column from Issue #268 (January 2000) to Issue #320 (June 2004).
Gygax continued to work on "Lejendary Adventures", which he believed was his best work. However, sales were below expectations.
On June 11, 2001, Stephen Chenault and Davis Chenault of Troll Lord Games announced that Gygax would be writing books for their company. Gygax's early work for Troll Lord included a series of hardcover books that eventually came to be called "Gygaxian Fantasy Worlds"; the first was "The Canting Crew" (2002), a look at the roguish underworld. He also wrote "World Builder" (2003) and "Living Fantasy" (2003), generic game design books usable in many different settings. After the first four books in the series, Gygax stepped down from writing and took on an advisory role, though the series logo still carried his name. Troll Lord also published a few adventures as a result of their partnership with Gygax, including "The Hermit" (2002), an adventure intended for d20 and also for "Lejendary Adventures".
By 2002, Gygax had given Christopher Clark of Hekaforge an encyclopaedic 72,000-word text describing the Lejendary Earth. Clark split the manuscript up into five books and expanded it, with each of the final books coming to about 128,000 words, giving Hekaforge a third Lejendary Adventures line to supplement the core rules and adventures. Hekaforge managed to publish the first two of those Lejendary Earth sourcebooks, "Gazetteer" (2002) and "Noble Kings and Great Lands" (2003), but by 2003 the small company was having financial difficulties. Clark had to ask Troll Lord Games to become an "angel" investor by publishing the three remaining "Lejendary Adventures" books.
On October 9, 2001, Necromancer Games announced that they would be publishing a d20 version of "Necropolis", an adventure originally planned by Gygax for New Infinities Productions and later printed in 1992 as a "Mythus" adventure by GDW; "Gary Gygax's Necropolis" was published a year later.
Gygax also performed voiceover narration for cartoons and video games. In 2000, he voiced his own cartoon self for an episode of "Futurama", "Anthology of Interest I", which also included the voices of Al Gore, Stephen Hawking, and Nichelle Nichols. Gygax also performed as a guest Dungeon Master in the Delera's Tomb quest series of a massively multiplayer online role-playing game.
The scale of the Castle Zagyg project was enormous: by the time Gygax and Kuntz had stopped working on their original home campaign, the castle dungeons had encompassed 50 levels of cunningly complex passages with thousands of rooms and traps. This, plus plans for the city of Yggsburgh and encounter areas outside the castle and city, would clearly be too much to fit into the proposed six volumes. Gygax decided he would compress the castle dungeons into 13 levels, the size of his original Castle Greyhawk in 1973, by amalgamating the best of what could be gleaned from binders and boxes of old notes. However, neither Gygax nor Kuntz had kept careful or comprehensive plans. Because they had often made up details of play sessions on the spot, they usually just scribbled a quick map as they played, with cursory notes about monsters, treasures, and traps. These sketchy maps had contained just enough detail that the two could ensure their independent work would dovetail. All of these old notes now had to be deciphered, 25-year-old memories dredged up as to what had happened in each room, and a decision made whether to keep or discard each piece. Recreating the city too would be a challenge. Although Gygax still had his old maps of the original city, all of his previously published work on the city was owned by WotC, so he would have to create most of the city from scratch while still maintaining the "look and feel" of his original.
Due to creative differences, Kuntz backed out of the project, but created an adventure module that would be published at the same time as Gygax's first book. Gygax continued to painstakingly put Castle Zagyg together on his own, but even this slow and laborious process came to a complete halt when Gygax suffered a serious stroke in April 2004 and then another one a few weeks later. Although he returned to his keyboard after a seven-month convalescence, his output was reduced from 14-hour work days to only one or two hours per day. Finally in 2005, "Castle Zagyg Part I: Yggsburgh", the first book in the six-book series, appeared. Later that year, Troll Lord Games also published "Castle Zagyg: Dark Chateau" (2005), the adventure module written for the Yggsburgh setting by Rob Kuntz. Jeff Talanian helped with the creation of the dungeon, eventually resulting in publication of the limited edition "CZ9: The East Marks Gazetteer" (2007).
That same year, Gygax was diagnosed with a potentially deadly abdominal aortic aneurysm. Doctors concurred that surgery was needed, but their estimates of success varied from 50% to 90%. With no firm medical consensus, Gygax came to believe that he would likely die on the operating table; he refused to consider surgery, although he realized that a rupture of the aneurysm – likely inevitable – would be fatal. In one concession to his condition, he switched from cigarettes, which he had smoked since high school, to cigars.
It wasn't until 2008 that Gygax was able to finish the second of the six planned volumes, "Castle Zagyg: The Upper Works", which described details of the castle above ground. The next two volumes were supposed to detail the dungeons beneath Castle Zagyg. However, before they could be written, Gygax died in March 2008. Three months after his death, Gygax Games – a new company formed by Gary's widow, Gail – withdrew all of the Gygax licenses from Troll Lord, and also from Hekaforge.
From an early age, Gygax hunted and was a target-shooter with both bow and gun. He was also an avid gun collector, and at various times owned a variety of rifles, shotguns, and handguns.
As the "father of role-playing games", Gygax received many awards, honors, and tributes related to gaming: | https://en.wikipedia.org/wiki?curid=12848 |
Governor of New South Wales
The Governor of New South Wales is the viceregal representative of the Australian monarch, Queen Elizabeth II, in the state of New South Wales. In an analogous way to the Governor-General of Australia at the national level, the Governors of the Australian states perform constitutional and ceremonial functions at the state level. The governor is appointed by the queen on the advice of the premier of New South Wales, for an unfixed period of time—known as serving "At Her Majesty's pleasure"—though five years is the norm. The current governor is retired judge Margaret Beazley, who succeeded David Hurley on 2 May 2019.
The office has its origin in the 18th-century colonial governors of New South Wales upon its settlement in 1788, and is the oldest continuous institution in Australia. The present incarnation of the position emerged with the Federation of Australia and the "New South Wales Constitution Act 1902", which defined the viceregal office as the governor acting by and with the advice of the Executive Council of New South Wales. However, the post still ultimately represented the government of the United Kingdom until, after continually decreasing involvement by the British government, the passage of the Statute of Westminster Adoption Act 1942 (see Statute of Westminster) and the Australia Act 1986, after which the governor became the direct, personal representative of the uniquely Australian sovereign.
The office of governor is required by the "New South Wales Constitution Act 1902". The Australian monarch, on the advice and recommendation of the premier of New South Wales, approves the appointment of the governor with a commission issued under the royal sign-manual and Public Seal of the State; from then until being sworn in by the premier and chief justice, the appointee is referred to as the "governor-designate".
Besides the administration of the oaths of office, there is no set formula for the swearing-in of a governor-designate. The constitution act stipulates that: "Before assuming office, a person appointed to be Governor shall take the Oath or Affirmation of Allegiance and the Oath or Affirmation of Office in the presence of the Chief Justice or another Judge of the Supreme Court." The sovereign will also hold an audience with the appointee and will at that time induct the governor-designate as a Companion of the Order of Australia (AC).
The incumbent will generally serve for at least five years, though this is only a developed convention, and the governor still technically acts at Her Majesty's pleasure (or the "Royal Pleasure"). The premier may therefore recommend to the queen that the viceroy remain in her service for a longer period of time, sometimes upwards of seven years. A governor may also resign, and three have died in office. In such a circumstance, or if the governor leaves the country for longer than one month, the lieutenant governor of New South Wales, an office held concurrently by the chief justice of New South Wales since 1872, serves as Administrator of the Government and exercises all powers of the governor. Furthermore, if the lieutenant governor becomes incapacitated while serving in the office of governor or is also absent from the state, the next most senior judge of the Supreme Court is sworn in as the administrator.
Between 1788 and 1957, all governors were born outside New South Wales and were often members of the peerage. Historian A. J. P. Taylor once noted that "going out and governing New South Wales became the British aristocracy's 'abiding consolation'". Although the Australian Citizenship Act of 1948 established the concept of an independent Australian citizenship, the idea of appointing Australian-born persons as governor of New South Wales arose much earlier. The first Australian-born governor, Sir John Northcott, appointed on 1 August 1946, was also the first Australian-born governor of any state. However, as Northcott was born in Victoria, it was not until Sir Eric Woodward's appointment by Queen Elizabeth II in 1957 that the position was filled by a New South Welshman; this practice continued until 1996, when Queen Elizabeth II commissioned as her representative Gordon Samuels, a London-born immigrant to Australia.
Although required by the tenets of constitutional monarchy to be non-partisan while in office, governors were frequently former politicians, many being members of the House of Lords by virtue of their peerage. The first governors were all military officers and the majority of governors since have come from a military background, numbering 19. Samuels was the first governor in New South Wales history without either a political, public service or military background, being a former justice of the Supreme Court of New South Wales. The first woman to hold this position is also the first Lebanese-Australian governor, Dame Marie Bashir.
As Australia shares its monarch equally with fifteen other countries in the Commonwealth of Nations and the sovereign lives predominantly outside New South Wales' borders, the governor's primary task is to perform the sovereign's constitutional duties on his or her behalf, acting within the principles of parliamentary democracy and responsible government as a guarantor of continuous and stable governance and as a nonpartisan safeguard against the abuse of power. For the most part, however, the powers of the Crown are exercised on a day-to-day basis by elected and appointed individuals, leaving the governor to perform the various ceremonial duties the sovereign otherwise carries out when in the country; at such a moment, the governor removes himself or herself from public view, though the presence of the monarch does not affect the governor's ability to perform governmental roles.
It is the governor who is required by the Constitution Act 1902 to appoint persons to the Government of New South Wales, who are all theoretically tasked with tendering to the monarch and viceroy guidance on the exercise of the Royal Prerogative. Convention dictates that the governor must draw from the Parliament an individual to act as premier, who is also capable of forming government—in almost all cases the Member of Parliament who commands the confidence of the Legislative Assembly. The premier then directs the governor to appoint other members of parliament to the Executive Council of New South Wales, known as the Cabinet, and it is in practice only from this group of ministers of the Crown that the queen and governor will take direction on the use of executive power, an arrangement called the "Queen-in-Council" or, more specifically, the "Governor-in-Council". In this capacity, the governor will issue royal proclamations and sign orders in council. The governor-in-council is also required to appoint in the queen's name the President of the Legislative Council, the Speaker of the Legislative Assembly, Supreme Court and District Court justices, and local court magistrates in the state, though all of these appointments are made on the advice of either the premier and cabinet or, in the case of the Speaker or President, the majority of elected members of each house. The advice given by the Cabinet is, in order to ensure the stability of government, typically binding; both the queen and her viceroy, however, may in exceptional circumstances invoke the reserve powers, which remain the Crown's final check against a ministry's abuse of power. These powers were last fully exercised in 1932, when Sir Philip Game revoked the commission of Premier Jack Lang.
The governor alone is constitutionally mandated to summon parliament. Beyond that, the viceroy carries out the other conventional parliamentary duties in the sovereign's absence, including reading the Speech from the throne and the proroguing and dissolving of parliament. The governor grants Royal Assent in the queen's name; legally, he or she has three options: grant Royal Assent (making the bill law), withhold Royal Assent (vetoing the bill), or reserve the bill for the queen's pleasure (allowing the sovereign to personally grant or withhold assent). If the governor withholds the queen's assent, the sovereign may within two years disallow the bill, thereby annulling the law in question. No modern viceroy has denied Royal Assent to a bill. With most constitutional functions delegated to Cabinet, the governor acts in a primarily ceremonial fashion. He or she will host members of Australia's royal family, as well as foreign royalty and heads of state. Also as part of international relations, the governor receives letters of credence and of recall from foreign consuls-general appointed to Sydney. Whenever the governor of New South Wales is the longest-serving state governor, he or she holds a dormant commission to act as the Administrator of the Commonwealth when the Governor-General of Australia is absent from Australia, a role most recently held by Governor Bashir.
The governor is also tasked with fostering unity and pride. He or she will also induct individuals into the various national orders and present national medals and decorations; however, the most senior awards, such as ACs or the Victoria Cross, are the sole prerogative of the governor-general. The governor also serves "ex officio" as Honorary Colonel of the Royal New South Wales Regiment (since 1960), Honorary Air Commodore of No. 22 (City of Sydney) Squadron, Royal Australian Air Force (since 1937) and Honorary Commodore of the Royal Australian Navy, as well as the Chief Scout for New South Wales.
As the personal representative of the monarch, the governor follows only the sovereign in the NSW order of precedence. The incumbent governor is entitled to use the style "His Excellency" or "Her Excellency" while in office. On 28 November 2013 the premier of NSW announced that the Queen had given approval for the title of "The Honourable" to be accorded to the governors and former governors of New South Wales. He or she also upon installation serves as a Deputy Prior of the Most Venerable Order of the Hospital of Saint John of Jerusalem in Australia and is also traditionally invested as either a Knight or Dame of Justice or Grace of the Order. It is also customary that the governor is made a Companion of the Order of Australia, though this is not necessarily automatic.
The Viceregal Salute—composed of the first and last four bars of the National Anthem ("Advance Australia Fair")—is the salute used to greet the governor upon arrival at, and mark his or her departure from, most official events, although "God Save The Queen", as the Royal Anthem, is also used. To mark the viceroy's presence at any building, ship, aeroplane, or car in Australia, the governor's flag is employed. The present form was adopted on 15 January 1981. The state badge of New South Wales surmounted by St Edward's Crown is employed as the badge of the governor, appearing on the viceroy's flag and on other objects associated with the person or the office.
Aside from the Crown itself, the office of Governor of New South Wales is the oldest constitutional office in Australia. Captain Arthur Phillip assumed office as Governor of New South Wales on 7 February 1788, when the Colony of New South Wales, the first British settlement in Australia, was formally proclaimed. The early colonial governors held an almost autocratic power due to the distance from and poor communications with Great Britain, until 1824 when the New South Wales Legislative Council, Australia's first legislative body, was appointed to advise the governor.
Between 1850 and 1861, the governor of New South Wales was titled governor-general, in an early attempt at federalism imposed by Earl Grey. All communication between the Australian colonies and the British Government was meant to go through the governor-general, and the other colonies had lieutenant-governors. As South Australia (1836), Tasmania (January 1855) and Victoria (May 1855) obtained responsible government, their lieutenant-governors were replaced by governors. Although he had ceased acting as a governor-general, Sir William Denison retained the title until his retirement in 1861.
The six British colonies in Australia joined together to form the Commonwealth of Australia in 1901. New South Wales and the other colonies became states in the federal system under the Constitution of Australia. In 1902, the "New South Wales Constitution Act" 1902 confirmed the modern system of government of New South Wales as a state, including defining the role of the governor as the monarch's representative, who acts by and with the advice of the Executive Council. Like the new federal Governor-General and the other state governors, in the first years after federation, the governor of New South Wales continued to act both as a constitutional head of the state, and as a liaison between the government and the imperial government in London. However, the British government's involvement in Australian affairs gradually reduced in the next few years.
In 1942, the Commonwealth of Australia passed the "Statute of Westminster Adoption Act 1942", adopting the Statute of Westminster and thereby confirming Australia's status as an independent dominion: while Australia and Britain share the same person as monarch, that person acts in a distinct capacity as the monarch of each country. The convention that the monarch acts in respect of Australian affairs on the advice of his or her Australian ministers, rather than his or her British ministers, became enshrined in law. For New South Wales, however, because the Statute of Westminster did not disturb the constitutional arrangements of the Australian states, the governor remained (at least formally) the representative of the British monarch in New South Wales. This arrangement seemed incongruous with the Commonwealth of Australia's independent dominion status conferred by the Statute of Westminster, and with the federal structure. After much negotiation between the federal and state governments of Australia, the British government and Buckingham Palace, the "Australia Act 1986" removed any remaining constitutional roles of the British monarch and British government in the Australian states, and established that the governor of New South Wales (along with the other state governors) was the direct, personal representative of the Australian monarch, and not the British monarch or the British government, nor the Governor-General of Australia or the Australian federal government.
On his arrival in Sydney in 1788, Governor Phillip resided in a temporary wood and canvas house before the construction of a more substantial house on a site now bounded by Bridge Street and Phillip Street, Sydney. This first Government House was extended and repaired by the following eight governors, but was generally in poor condition and was vacated when the governor relocated to the new building in 1845, designed by Edward Blore and Mortimer Lewis.
With the federation of the Australian colonies in 1901, it was announced that Government House was to serve as the secondary residence of the new Governor-General of Australia. As a consequence the NSW Government leased the residence of Cranbrook, Bellevue Hill as the residence of the governor. This arrangement lasted until 1913, when the NSW Government terminated the Commonwealth lease of Government House (the governor-general moved to the new Sydney residence of Admiralty House). The governor from 1913 to 1917, Sir Gerald Strickland, continued to live at Cranbrook; on his departure his successor returned to Government House.
On 16 January 1996, Premier Bob Carr announced that the next governor would be Gordon Samuels, that he would not live or work at Government House and that he would retain his appointment as chairman of the New South Wales Law Reform Commission. On these changes, Carr said: "The Office of the Governor should be less associated with pomp and ceremony, less encumbered by anachronistic protocol, more in tune with the character of the people." The state's longest-serving governor, Sir Roden Cutler, was also reported as saying: "It's a political push to make way in New South Wales to lead the push for a republic. If they decide not to have a Governor and the public agrees with that, and Parliament agrees, and the queen agrees to it, that is a different matter, but while there is a Governor you have got to give him some respectability and credibility, because he is the host for the whole of New South Wales. For the life of me I cannot understand the logic of having a Governor who is part-time and doesn’t live at Government House. It is such a degrading of the office and of the Governor."
In October 2011, the new premier, Barry O'Farrell, announced that the governor, now Dame Marie Bashir, had accepted O'Farrell's offer to move back into Government House: "A lot of people believe the Governor should live at Government House. That's what it was built for ... [A]t some stage a rural or regional governor will be appointed and we will need to provide accommodation at Government House so it makes sense to provide appropriate living areas". With the governor's return, management of the residence reverted to the Office of the Governor in December 2013.
In addition to the primary Sydney vice-regal residence, many governors had also felt the need for a 'summer retreat' to escape the harsh heat of the Sydney summers. In 1790, Governor Phillip had a secondary residence built in the township of Parramatta. In 1799 the second governor, John Hunter, had the remains of Arthur Phillip's cottage cleared away, and a more permanent building erected on the same site. This residence remained occupied until the completion of the primary Government House in 1845; however, the harsh summers and growing size of Sydney convinced successive governors of the need for a rural residence.
The governor from 1868 to 1872, The Earl Belmore, used Throsby Park in Moss Vale as his summer residence. His successor, Sir Hercules Robinson, often retired privately to the same area, in the Southern Highlands, for the same reason. In 1879 it was then decided that the colony should purchase a house at Sutton Forest for use as a permanent summer residence, and in 1881 the NSW Government purchased for £6000 a property known as "Prospect" that had been built by Robert Pemberton Richardson (of the firm Richardson & Wrench). This was renamed "Hillview", and became the primary summer governor's residence from 1885 to 1957. In 1957, Hillview, by then seen as unnecessary and expensive, was put up for sale and purchased from the state government by Edwin Klein. Hillview was returned to the people of NSW in 1985 and, under the ownership of the Office of Environment and Heritage, is currently leased out.
The viceregal household aids the governor in the execution of the royal constitutional and ceremonial duties and is managed by the Office of the Governor, whose current Official Secretary and Chief of Staff is Colonel Michael Miller RFD. These organised offices and support systems include aides-de-camp, press officers, financial managers, speech writers, trip organisers, event planners and protocol officers, chefs and other kitchen employees, waiters, and various cleaning staff, as well as tour guides. In this official and bureaucratic capacity, the entire household is often referred to as "Government House". These departments are funded through the annual budget, as is the governor's salary of A$181,555.
The following individuals have served as a governor of New South Wales:
Currently, three former governors are alive. The most recent governor to die was Gordon Samuels (1996–2001), on 10 December 2007. | https://en.wikipedia.org/wiki?curid=12850 |
Governor of Victoria
The Governor of Victoria is the representative in the Australian state of Victoria of its monarch, Elizabeth II, Queen of Australia, and is one of the Governors of the Australian states. The governor performs the same constitutional and ceremonial functions at the state level as does the Governor-General of Australia at the federal level. The governor's office and official residence is Government House next to the Royal Botanic Gardens and surrounded by Kings Domain in Melbourne.
The Governor of Victoria is appointed by the Queen of Australia on the advice of the Premier of Victoria. The current Governor of Victoria is former judge Linda Dessau, Victoria's first female governor.
In accordance with the conventions of the Westminster system of parliamentary government, the governor nearly always acts solely on the advice of the head of the elected government, the Premier of Victoria. Nevertheless, the governor retains the reserve powers of the Crown, and has the right to dismiss the premier.
The Governor of Victoria is appointed by the Queen of Australia, on the advice of the Premier of Victoria, to act as her representative as head of state in Victoria. The Governor acts "at the Queen's pleasure", meaning that the term of the Governor can be terminated at any time by the Queen acting upon the advice of the premier.
Since the Australia Acts of 1986, it is the governor, and not the queen, who exercises all the powers of the head of state, and the governor is not subject to the direction or supervision of the monarch, but acts upon the advice of the premier. Upon appointment, he or she becomes a viceroy. The governor's main responsibilities fall into three categories – constitutional, ceremonial and community engagement.
The Personal Standard of the Governor of Victoria is the same design as the State Flag of Victoria, but with the blue background replaced by gold, and red stars depicting the Southern Cross. Above the Southern Cross is the Royal Crown.
The current standard has been in place since 1984. Previously, the standard used by Victorian governors after 1870 had been the Union Jack with the Badge of the State of Victoria emblazoned in the centre. Between 1903 and 1953, the Tudor Crown was used on the State Flag and Governor's Standard, and this was changed to the present crown in 1954.
The Governor’s Standard is flown at Government House and on vehicles conveying the governor. The Standard is lowered over Government House when the governor is absent from Victoria.
There is also a lieutenant-governor and an administrator. The Chief Justice of Victoria is "ex officio" the Administrator, unless he or she is the lieutenant-governor, in which case, the next most senior judge is the administrator. The lieutenant-governor takes on the responsibilities of the governor when that post is vacant or when the governor is out of the state or unable to act. The administrator takes on those duties if both the governor and lieutenant-governor are not able to act for the above reasons.
See Governors of the Australian states for a description and history of the office of governor.
As with the other states, until the 1986 Australia Acts, the office of Governor of Victoria was an appointment of the British Foreign Office although local advice was considered and sometimes accepted.
Until the appointment of Victorian-born Sir Henry Winneke in 1974, the Governors of Victoria were British. Since then, governors have been Australian, although several were born overseas: Dr Davis McCaughey, born in Ireland, who came to Australia for work; and Professor David de Kretser, born in Ceylon (now Sri Lanka), and Alex Chernov, born in Lithuania, both of whom came to Australia while at school.
Prior to the separation of the colony of Victoria from New South Wales in 1851, the area was called the Port Phillip District of New South Wales. The Governor of New South Wales appointed superintendents of the District. In 1839 Charles La Trobe was appointed superintendent. La Trobe became Lieutenant-Governor of Victoria on separation on 1 July 1851.
Between 1850 and 1861, the Governor of New South Wales was titled Governor-General of New South Wales, in an attempt to form a federal structure. Until Victoria obtained responsible government in 1855, the Governor-General of New South Wales appointed lieutenant-governors to Victoria. On Victoria obtaining responsible government in May 1855, the title of the then incumbent lieutenant-governor, Captain Sir Charles Hotham, became governor.
Four former governors are alive, the oldest being John Landy (2001–06, born 1930). The most recent governor to die was Davis McCaughey (1986–92), on 25 March 2005. The most recently serving governor to die was Richard McGarvie (1992–1997), on 24 May 2003.
There is also a lieutenant-governor and an administrator. The lieutenant-governor takes on the responsibilities of the governor when that post is vacant or when the governor is out of the state or unable to act. The lieutenant-governor is appointed by the governor on the advice of the Premier of Victoria. Appointment as lieutenant-governor does not of itself confer any powers or functions. If there is no governor or if the governor is unavailable to act for a substantial period, the lieutenant-governor assumes office as administrator and exercises all the powers and functions of the governor.
If expecting to be unavailable for a short period only, the governor, with the consent of the premier, usually commissions the lieutenant-governor to act as deputy for the governor, performing some or all of the powers and functions of the governor.
The Chief Justice of Victoria is "ex officio" the administrator, unless he or she is the lieutenant-governor, in which case, the next most senior judge is the administrator. The administrator takes on the governor’s duties if both the governor and lieutenant-governor are not able to act for the above reasons.
The current lieutenant-governor is Ken Lay, who was appointed to the role on 9 November 2017 to succeed Marilyn Warren. | https://en.wikipedia.org/wiki?curid=12851 |
George Bernard Shaw
George Bernard Shaw (; 26 July 1856 – 2 November 1950), known at his insistence simply as Bernard Shaw, was an Irish playwright, critic, polemicist and political activist. His influence on Western theatre, culture and politics extended from the 1880s to his death and beyond. He wrote more than sixty plays, including major works such as "Man and Superman" (1902), "Pygmalion" (1912) and "Saint Joan" (1923). With a range incorporating both contemporary satire and historical allegory, Shaw became the leading dramatist of his generation, and in 1925 was awarded the Nobel Prize in Literature.
Born in Dublin, Shaw moved to London in 1876, where he struggled to establish himself as a writer and novelist, and embarked on a rigorous process of self-education. By the mid-1880s he had become a respected theatre and music critic. Following a political awakening, he joined the gradualist Fabian Society and became its most prominent pamphleteer. Shaw had been writing plays for years before his first public success, "Arms and the Man" in 1894. Influenced by Henrik Ibsen, he sought to introduce a new realism into English-language drama, using his plays as vehicles to disseminate his political, social and religious ideas. By the early twentieth century his reputation as a dramatist was secured with a series of critical and popular successes that included "Major Barbara", "The Doctor's Dilemma" and "Caesar and Cleopatra".
Shaw's expressed views were often contentious; he promoted eugenics and alphabet reform, and opposed vaccination and organised religion. He courted unpopularity by denouncing both sides in the First World War as equally culpable, and although not a republican, castigated British policy on Ireland in the postwar period. These stances had no lasting effect on his standing or productivity as a dramatist; the inter-war years saw a series of often ambitious plays, which achieved varying degrees of popular success. In 1938 he provided the screenplay for a filmed version of "Pygmalion" for which he received an Academy Award. His appetite for politics and controversy remained undiminished; by the late 1920s he had largely renounced Fabian Society gradualism and often wrote and spoke favourably of dictatorships of the right and left—he expressed admiration for both Mussolini and Stalin. In the final decade of his life he made fewer public statements, but continued to write prolifically until shortly before his death, aged ninety-four, having refused all state honours, including the Order of Merit in 1946.
Since Shaw's death scholarly and critical opinion about his works has varied, but he has regularly been rated among British dramatists as second only to Shakespeare; analysts recognise his extensive influence on generations of English-language playwrights. The word "Shavian" has entered the language as encapsulating Shaw's ideas and his means of expressing them.
Shaw was born at 3 Upper Synge Street in Portobello, a lower-middle-class part of Dublin. He was the youngest child and only son of George Carr Shaw (1814–1885) and Lucinda Elizabeth (Bessie) Shaw ("née" Gurly; 1830–1913). His elder siblings were Lucinda (Lucy) Frances (1853–1920) and Elinor Agnes (1855–1876). The Shaw family was of English descent and belonged to the dominant Protestant Ascendancy in Ireland; George Carr Shaw, an ineffectual alcoholic, was among the family's less successful members. His relatives secured him a sinecure in the civil service, from which he was pensioned off in the early 1850s; thereafter he worked irregularly as a corn merchant. In 1852 he married Bessie Gurly; in the view of Shaw's biographer Michael Holroyd she married to escape a tyrannical great-aunt. If, as Holroyd and others surmise, George's motives were mercenary, then he was disappointed, as Bessie brought him little of her family's money. She came to despise her ineffectual and often drunken husband, with whom she shared what their son later described as a life of "shabby-genteel poverty".
By the time of Shaw's birth, his mother had become close to George John Lee, a flamboyant figure well known in Dublin's musical circles. Shaw retained a lifelong obsession that Lee might have been his biological father; there is no consensus among Shavian scholars on the likelihood of this. The young Shaw suffered no harshness from his mother, but he later recalled that her indifference and lack of affection hurt him deeply. He found solace in the music that abounded in the house. Lee was a conductor and teacher of singing; Bessie had a fine mezzo-soprano voice and was much influenced by Lee's unorthodox method of vocal production. The Shaws' house was often filled with music, with frequent gatherings of singers and players.
In 1862, Lee and the Shaws agreed to share a house, No. 1 Hatch Street, in an affluent part of Dublin, and a country cottage on Dalkey Hill, overlooking Killiney Bay. Shaw, a sensitive boy, found the less salubrious parts of Dublin shocking and distressing, and was happier at the cottage. Lee's students often gave him books, which the young Shaw read avidly; thus, as well as gaining a thorough musical knowledge of choral and operatic works, he became familiar with a wide spectrum of literature.
Between 1865 and 1871, Shaw attended four schools, all of which he hated. His experiences as a schoolboy left him disillusioned with formal education: "Schools and schoolmasters", he later wrote, were "prisons and turnkeys in which children are kept to prevent them disturbing and chaperoning their parents." In October 1871 he left school to become a junior clerk in a Dublin firm of land agents, where he worked hard, and quickly rose to become head cashier. During this period, Shaw was known as "George Shaw"; after 1876, he dropped the "George" and styled himself "Bernard Shaw".
In June 1873, Lee left Dublin for London and never returned. A fortnight later, Bessie followed him; the two girls joined her. Shaw's explanation of why his mother followed Lee was that without the latter's financial contribution the joint household had to be broken up. Left in Dublin with his father, Shaw compensated for the absence of music in the house by teaching himself to play the piano.
Early in 1876 Shaw learned from his mother that Agnes was dying of tuberculosis. He resigned from the land agents, and in March travelled to England to join his mother and Lucy at Agnes's funeral. He never again lived in Ireland, and did not visit it for twenty-nine years.
Initially, Shaw refused to seek clerical employment in London. His mother allowed him to live free of charge in her house in South Kensington, but he nevertheless needed an income. He had abandoned a teenage ambition to become a painter, and had not yet thought of writing for a living, but Lee found a little work for him, ghost-writing a musical column printed under Lee's name in a satirical weekly, "The Hornet". Lee's relations with Bessie deteriorated after their move to London. Shaw maintained contact with Lee, who found him work as a rehearsal pianist and occasional singer.
Eventually Shaw was driven to applying for office jobs. In the interim he secured a reader's pass for the British Museum Reading Room (the forerunner of the British Library) and spent most weekdays there, reading and writing. His first attempt at drama, begun in 1878, was a blank-verse satirical piece on a religious theme. It was abandoned unfinished, as was his first try at a novel. His first completed novel, "Immaturity" (1879), was too grim to appeal to publishers and did not appear until the 1930s. He was employed briefly by the newly formed Edison Telephone Company in 1879–80, and as in Dublin achieved rapid promotion. Nonetheless, when the Edison firm merged with the rival Bell Telephone Company, Shaw chose not to seek a place in the new organisation. Thereafter he pursued a full-time career as an author.
For the next four years Shaw made a negligible income from writing, and was subsidised by his mother. In 1881, for the sake of economy, and increasingly as a matter of principle, he became a vegetarian. He grew a beard to hide a facial scar left by smallpox. In rapid succession he wrote two more novels: "The Irrational Knot" (1880) and "Love Among the Artists" (1881), but neither found a publisher; each was serialised a few years later in the socialist magazine "Our Corner".
In 1880 Shaw began attending meetings of the Zetetical Society, whose objective was to "search for truth in all matters affecting the interests of the human race". Here he met Sidney Webb, a junior civil servant who, like Shaw, was busy educating himself. Despite differences of style and temperament, the two quickly recognised qualities in each other and developed a lifelong friendship. Shaw later reflected: "You knew everything that I didn't know and I knew everything you didn't know ... We had everything to learn from one another and brains enough to do it".
Shaw's next attempt at drama was a one-act playlet in French, "Un Petit Drame", written in 1884 but not published in his lifetime. In the same year the critic William Archer suggested a collaboration, with a plot by Archer and dialogue by Shaw. The project foundered, but Shaw returned to the draft as the basis of "Widowers' Houses" in 1892, and the connection with Archer proved of immense value to Shaw's career.
On 5 September 1882 Shaw attended a meeting at the Memorial Hall, Farringdon, addressed by the political economist Henry George. Shaw then read George's book "Progress and Poverty", which awakened his interest in economics. He began attending meetings of the Social Democratic Federation (SDF), where he discovered the writings of Karl Marx, and thereafter spent much of 1883 reading "Das Kapital". He was not impressed by the SDF's founder, H. M. Hyndman, whom he found autocratic, ill-tempered and lacking leadership qualities. Shaw doubted the ability of the SDF to harness the working classes into an effective radical movement and did not join it—he preferred, he said, to work with his intellectual equals.
After reading a tract, "Why Are The Many Poor?", issued by the recently formed Fabian Society, Shaw went to the society's next advertised meeting, on 16 May 1884. He became a member in September, and before the year's end had provided the society with its first manifesto, published as Fabian Tract No. 2. He joined the society's executive committee in January 1885, and later that year recruited Webb and also Annie Besant, a fine orator.
From 1885 to 1889 Shaw attended the fortnightly meetings of the British Economic Association; it was, Holroyd observes, "the closest Shaw had ever come to university education." This experience changed his political ideas; he moved away from Marxism and became an apostle of gradualism. When in 1886–87 the Fabians debated whether to embrace anarchism, as advocated by Charlotte Wilson, Besant and others, Shaw joined the majority in rejecting this approach. After a rally in Trafalgar Square addressed by Besant was violently broken up by the authorities on 13 November 1887 ("Bloody Sunday"), Shaw became convinced of the folly of attempting to challenge police power. Thereafter he largely accepted the principle of "permeation" as advocated by Webb: the idea that socialism could best be achieved by infiltrating people and ideas into existing political parties.
Throughout the 1880s the Fabian Society remained small, its message of moderation frequently unheard among more strident voices. Its profile was raised in 1889 with the publication of "Fabian Essays in Socialism", edited by Shaw who also provided two of the essays. The second of these, "Transition", details the case for gradualism and permeation, asserting that "the necessity for cautious and gradual change must be obvious to everyone". In 1890 Shaw produced Tract No. 13, "What Socialism Is", a revision of an earlier tract in which Charlotte Wilson had defined socialism in anarchistic terms. In Shaw's new version, readers were assured that "socialism can be brought about in a perfectly constitutional manner by democratic institutions".
The mid-1880s marked a turning point in Shaw's life, both personally and professionally: he lost his virginity, had two novels published, and began a career as a critic. He had been celibate until his twenty-ninth birthday, when his shyness was overcome by Jane (Jenny) Patterson, a widow some years his senior. Their affair continued, not always smoothly, for eight years. Shaw's sex life has caused much speculation and debate among his biographers, but there is a consensus that the relationship with Patterson was one of his few non-platonic romantic liaisons.
The published novels, neither commercially successful, were his two final efforts in this genre: "Cashel Byron's Profession" written in 1882–83, and "An Unsocial Socialist", begun and finished in 1883. The latter was published as a serial in "ToDay" magazine in 1884, although it did not appear in book form until 1887. "Cashel Byron" appeared in magazine and book form in 1886.
In 1884 and 1885, through the influence of Archer, Shaw was engaged to write book and music criticism for London papers. When Archer resigned as art critic of "The World" in 1886 he secured the succession for Shaw. The two figures in the contemporary art world whose views Shaw most admired were William Morris and John Ruskin, and he sought to follow their precepts in his criticisms. Their emphasis on morality appealed to Shaw, who rejected the idea of art for art's sake, and insisted that all great art must be didactic.
Of Shaw's various reviewing activities in the 1880s and 1890s it was as a music critic that he was best known. After serving as deputy in 1888, he became musical critic of "The Star" in February 1889, writing under the pen-name Corno di Bassetto. In May 1890 he moved back to "The World", where he wrote a weekly column as "G.B.S." for more than four years. In the 2016 version of the "Grove Dictionary of Music and Musicians", Robert Anderson writes, "Shaw's collected writings on music stand alone in their mastery of English and compulsive readability." Shaw ceased to be a salaried music critic in August 1894, but published occasional articles on the subject throughout his career, his last in 1950.
From 1895 to 1898, Shaw was the theatre critic for "The Saturday Review", edited by his friend Frank Harris. As at "The World", he used the by-line "G.B.S." He campaigned against the artificial conventions and hypocrisies of the Victorian theatre and called for plays of real ideas and true characters. By this time he had embarked in earnest on a career as a playwright: "I had rashly taken up the case; and rather than let it collapse I manufactured the evidence".
After using the plot of the aborted 1884 collaboration with Archer to complete "Widowers' Houses" (it was staged twice in London, in December 1892), Shaw continued writing plays. At first he made slow progress; "The Philanderer", written in 1893 but not published until 1898, had to wait until 1905 for a stage production. Similarly, "Mrs Warren's Profession" (1893) was written five years before publication and nine years before reaching the stage.
Shaw's first play to bring him financial success was "Arms and the Man" (1894), a mock-Ruritanian comedy satirising conventions of love, military honour and class. The press found the play overlong, and accused Shaw of mediocrity, sneering at heroism and patriotism, heartless cleverness, and copying W. S. Gilbert's style. The public took a different view, and the management of the theatre staged extra matinée performances to meet the demand. The play ran from April to July, toured the provinces and was staged in New York. It earned him £341 in royalties in its first year, a sufficient sum to enable him to give up his salaried post as a music critic. Among the cast of the London production was Florence Farr, with whom Shaw had a romantic relationship between 1890 and 1894, much resented by Jenny Patterson.
The success of "Arms and the Man" was not immediately replicated. "Candida", which presented a young woman making a conventional romantic choice for unconventional reasons, received a single performance in South Shields in 1895; in 1897 a playlet about Napoleon called "The Man of Destiny" had a single staging at Croydon. In the 1890s Shaw's plays were better known in print than on the West End stage; his biggest success of the decade was in New York in 1897, when Richard Mansfield's production of the historical melodrama "The Devil's Disciple" earned the author more than £2,000 in royalties.
In January 1893, as a Fabian delegate, Shaw attended the Bradford conference which led to the foundation of the Independent Labour Party. He was sceptical about the new party, and scorned the likelihood that it could switch the allegiance of the working class from sport to politics. He persuaded the conference to adopt resolutions abolishing indirect taxation, and taxing unearned income "to extinction". Back in London, Shaw produced what Margaret Cole, in her Fabian history, terms a "grand philippic" against the minority Liberal administration that had taken power in 1892. "To Your Tents, O Israel" excoriated the government for ignoring social issues and concentrating solely on Irish Home Rule, a matter Shaw declared of no relevance to socialism. In 1894 the Fabian Society received a substantial bequest from a sympathiser, Henry Hunt Hutchinson—Holroyd mentions £10,000. Webb, who chaired the board of trustees appointed to supervise the legacy, proposed to use most of it to found a school of economics and politics. Shaw demurred; he thought such a venture was contrary to the specified purpose of the legacy. He was eventually persuaded to support the proposal, and the London School of Economics and Political Science (LSE) opened in the summer of 1895.
By the later 1890s Shaw's political activities lessened as he concentrated on making his name as a dramatist. In 1897 he was persuaded to fill an uncontested vacancy for a "vestryman" (parish councillor) in London's St Pancras district. At least initially, Shaw took his municipal responsibilities seriously; when London government was reformed in 1899 and the St Pancras vestry became the Metropolitan Borough of St Pancras, he was elected to the newly formed borough council.
In 1898, as a result of overwork, Shaw's health broke down. He was nursed by Charlotte Payne-Townshend, a rich Anglo-Irish woman whom he had met through the Webbs. The previous year she had proposed that she and Shaw should marry. He had declined, but when she insisted on nursing him in a house in the country, Shaw, concerned that this might cause scandal, agreed to their marriage. The ceremony took place on 1 June 1898, in the register office in Covent Garden. The bride and bridegroom were both aged forty-one. In the view of the biographer and critic St John Ervine, "their life together was entirely felicitous". There were no children of the marriage, which it is generally believed was never consummated; whether this was wholly at Charlotte's wish, as Shaw liked to suggest, is less widely credited. In the early weeks of the marriage Shaw was much occupied writing his Marxist analysis of Wagner's "Ring" cycle, published as "The Perfect Wagnerite" late in 1898. In 1906 the Shaws found a country home in Ayot St Lawrence, Hertfordshire; they renamed the house "Shaw's Corner", and lived there for the rest of their lives. They retained a London flat in the Adelphi and later at Whitehall Court.
During the first decade of the twentieth century, Shaw secured a firm reputation as a playwright. In 1904 J. E. Vedrenne and Harley Granville-Barker established a company at the Royal Court Theatre in Sloane Square, Chelsea to present modern drama. Over the next five years they staged fourteen of Shaw's plays. The first, "John Bull's Other Island", a comedy about an Englishman in Ireland, attracted leading politicians and was seen by Edward VII, who laughed so much that he broke his chair. The play was withheld from Dublin's Abbey Theatre, for fear of the affront it might provoke, although it was shown at the city's Royal Theatre in November 1907. Shaw later wrote that William Butler Yeats, who had requested the play, "got rather more than he bargained for... It was uncongenial to the whole spirit of the neo-Gaelic movement, which is bent on creating a new Ireland after its own ideal, whereas my play is a very uncompromising presentment of the real old Ireland." Nonetheless, Shaw and Yeats were close friends; Yeats and Lady Gregory tried unsuccessfully to persuade Shaw to take up the vacant co-directorship of the Abbey Theatre after J. M. Synge's death in 1909. Shaw admired other figures in the Irish Literary Revival, including George Russell and James Joyce, and was a close friend of Seán O'Casey, who was inspired to become a playwright after reading "John Bull's Other Island".
"Man and Superman", completed in 1902, was a success both at the Royal Court in 1905 and in Robert Loraine's New York production in the same year. Among the other Shaw works presented by Vedrenne and Granville-Barker were "Major Barbara" (1905), depicting the contrasting morality of arms manufacturers and the Salvation Army; "The Doctor's Dilemma" (1906), a mostly serious piece about professional ethics; and "Caesar and Cleopatra", Shaw's counterblast to Shakespeare's "Antony and Cleopatra", seen in New York in 1906 and in London the following year.
Now prosperous and established, Shaw experimented with unorthodox theatrical forms described by his biographer Stanley Weintraub as "discussion drama" and "serious farce". These plays included "Getting Married" (premiered 1908), "The Shewing-Up of Blanco Posnet" (1909), "Misalliance" (1910), and "Fanny's First Play" (1911). "Blanco Posnet" was banned on religious grounds by the Lord Chamberlain (the official theatre censor in England), and was produced instead in Dublin; it filled the Abbey Theatre to capacity. "Fanny's First Play", a comedy about suffragettes, had the longest initial run of any Shaw play—622 performances.
"Androcles and the Lion" (1912), a less heretical study of true and false religious attitudes than "Blanco Posnet", ran for eight weeks in September and October 1913. It was followed by one of Shaw's most successful plays, "Pygmalion", written in 1912 and staged in Vienna the following year, and in Berlin shortly afterwards. Shaw commented, "It is the custom of the English press when a play of mine is produced, to inform the world that it is not a play—that it is dull, blasphemous, unpopular, and financially unsuccessful. ... Hence arose an urgent demand on the part of the managers of Vienna and Berlin that I should have my plays performed by them first." The British production opened in April 1914, starring Sir Herbert Tree and Mrs Patrick Campbell as, respectively, a professor of phonetics and a cockney flower-girl. There had earlier been a romantic liaison between Shaw and Campbell that caused Charlotte Shaw considerable concern, but by the time of the London premiere it had ended. The play attracted capacity audiences until July, when Tree insisted on going on holiday, and the production closed. His co-star then toured with the piece in the US.
In 1899, when the Boer War began, Shaw wished the Fabians to take a neutral stance on what he deemed, like Home Rule, to be a "non-Socialist" issue. Others, including the future Labour prime minister Ramsay MacDonald, wanted unequivocal opposition, and resigned from the society when it followed Shaw. In the Fabians' war manifesto, "Fabianism and the Empire" (1900), Shaw declared that "until the Federation of the World becomes an accomplished fact we must accept the most responsible Imperial federations available as a substitute for it".
As the new century began, Shaw became increasingly disillusioned by the limited impact of the Fabians on national politics. Thus, although a nominated Fabian delegate, he did not attend the London conference at the Memorial Hall, Farringdon Street, in February 1900 that created the Labour Representation Committee—precursor of the modern Labour Party. By 1903, when his term as borough councillor expired, he had lost his earlier enthusiasm, writing: "After six years of Borough Councilling I am convinced that the borough councils should be abolished". Nevertheless, in 1904 he stood in the London County Council elections. After an eccentric campaign, which Holroyd characterises as "[making] absolutely certain of not getting in", he was duly defeated. It was Shaw's final foray into electoral politics. Nationally, the 1906 general election produced a huge Liberal majority and an intake of 29 Labour members. Shaw viewed this outcome with scepticism; he had a low opinion of the new prime minister, Sir Henry Campbell-Bannerman, and saw the Labour members as inconsequential: "I apologise to the Universe for my connection with such a body".
In the years after the 1906 election, Shaw felt that the Fabians needed fresh leadership, and saw this in the form of his fellow-writer H. G. Wells, who had joined the society in February 1903. Wells's ideas for reform—particularly his proposals for closer cooperation with the Independent Labour Party—placed him at odds with the society's "Old Gang", led by Shaw. According to Cole, Wells "had minimal capacity for putting [his ideas] across in public meetings against Shaw's trained and practised virtuosity". In Shaw's view, "the Old Gang did not extinguish Mr Wells, he annihilated himself". Wells resigned from the society in September 1908; Shaw remained a member, but left the executive in April 1911. He later wondered whether the Old Gang should have given way to Wells some years earlier: "God only knows whether the Society had not better have done it". Although less active—he blamed his advancing years—Shaw remained a Fabian.
In 1912 Shaw invested £1,000 for a one-fifth share in the Webbs' new publishing venture, a socialist weekly magazine called "The New Statesman", which appeared in April 1913. He became a founding director, publicist, and in due course a contributor, mostly anonymously. He was soon at odds with the magazine's editor, Clifford Sharp, who by 1916 was rejecting his contributions—"the only paper in the world that refuses to print anything by me", according to Shaw.
After the First World War began in August 1914, Shaw produced his tract "Common Sense About the War", which argued that the warring nations were equally culpable. Such a view was anathema in an atmosphere of fervent patriotism, and offended many of Shaw's friends; Ervine records that "[h]is appearance at any public function caused the instant departure of many of those present."
Despite his errant reputation, Shaw's propagandist skills were recognised by the British authorities, and early in 1917 he was invited by Field Marshal Haig to visit the Western Front battlefields. Shaw's 10,000-word report, which emphasised the human aspects of the soldier's life, was well received, and he became less of a lone voice. In April 1917 he joined the national consensus in welcoming America's entry into the war: "a first class moral asset to the common cause against junkerism".
Three short plays by Shaw were premiered during the war. "The Inca of Perusalem", written in 1915, encountered problems with the censor for burlesquing not only the enemy but the British military command; it was performed in 1916 at the Birmingham Repertory Theatre. "O'Flaherty V.C.", satirising the government's attitude to Irish recruits, was banned in the UK and was presented at a Royal Flying Corps base in Belgium in 1917. "Augustus Does His Bit", a genial farce, was granted a licence; it opened at the Royal Court in January 1917.
Shaw had long supported the principle of Irish Home Rule within the British Empire (which he thought should become the British Commonwealth).
In April 1916 he wrote scathingly in "The New York Times" about militant Irish nationalism: "In point of learning nothing and forgetting nothing these fellow-patriots of mine leave the Bourbons nowhere." Total independence, he asserted, was impractical; alliance with a bigger power (preferably England) was essential. The Dublin Easter Rising later that month took him by surprise. After its suppression by British forces, he expressed horror at the summary execution of the rebel leaders, but continued to believe in some form of Anglo-Irish union. In "How to Settle the Irish Question" (1917), he envisaged a federal arrangement, with national and imperial parliaments. Holroyd records that by this time the separatist party Sinn Féin was in the ascendancy, and Shaw's and other moderate schemes were forgotten.
In the postwar period, Shaw despaired of the British government's coercive policies towards Ireland, and joined his fellow-writers Hilaire Belloc and G. K. Chesterton in publicly condemning these actions. The Anglo-Irish Treaty of December 1921 led to the partition of Ireland between north and south, a provision that dismayed Shaw. In 1922 civil war broke out in the south between its pro-treaty and anti-treaty factions, the former of whom had established the Irish Free State. Shaw visited Dublin in August, and met Michael Collins, then head of the Free State's Provisional Government. Shaw was much impressed by Collins, and was saddened when, three days later, the Irish leader was ambushed and killed by anti-treaty forces. In a letter to Collins's sister, Shaw wrote: "I met Michael for the first and last time on Saturday last, and am very glad I did. I rejoice in his memory, and will not be so disloyal to it as to snivel over his valiant death". Shaw remained a British subject all his life, but took dual British-Irish nationality in 1934.
Shaw's first major work to appear after the war was "Heartbreak House", written in 1916–17 and performed in 1920. It was produced on Broadway in November, and was coolly received; according to "The Times": "Mr Shaw on this occasion has more than usual to say and takes twice as long as usual to say it". After the London premiere in October 1921 "The Times" concurred with the American critics: "As usual with Mr Shaw, the play is about an hour too long", although containing "much entertainment and some profitable reflection". Ervine in "The Observer" thought the play brilliant but ponderously acted, except for Edith Evans as Lady Utterword.
Shaw's largest-scale theatrical work was "Back to Methuselah", written in 1918–20 and staged in 1922. Weintraub describes it as "Shaw's attempt to fend off 'the bottomless pit of an utterly discouraging pessimism'". This cycle of five interrelated plays depicts evolution, and the effects of longevity, from the Garden of Eden to the year 31,920 AD. Critics found the five plays strikingly uneven in quality and invention. The original run was brief, and the work has been revived infrequently. Shaw felt he had exhausted his remaining creative powers in the huge span of this "Metabiological Pentateuch". He was now sixty-seven, and expected to write no more plays.
This mood was short-lived. In 1920 Joan of Arc was proclaimed a saint by Pope Benedict XV; Shaw had long found Joan an interesting historical character, and his view of her veered between "half-witted genius" and someone of "exceptional sanity". He had considered writing a play about her in 1913, and the canonisation prompted him to return to the subject. He wrote "Saint Joan" in the middle months of 1923, and the play was premiered on Broadway in December. It was enthusiastically received there, and at its London premiere the following March. In Weintraub's phrase, "even the Nobel prize committee could no longer ignore Shaw after Saint Joan". The citation for the literature prize for 1925 praised his work as "... marked by both idealism and humanity, its stimulating satire often being infused with a singular poetic beauty". He accepted the award, but rejected the monetary prize that went with it, on the grounds that "My readers and my audiences provide me with more than sufficient money for my needs".
After "Saint Joan", it was five years before Shaw wrote a play. From 1924, he spent four years writing what he described as his "magnum opus", a political treatise entitled "The Intelligent Woman's Guide to Socialism and Capitalism". The book was published in 1928 and sold well. At the end of the decade Shaw produced his final Fabian tract, a commentary on the League of Nations. He described the League as "a school for the new international statesmanship as against the old Foreign Office diplomacy", but thought that it had not yet become the "Federation of the World".
Shaw returned to the theatre with what he called "a political extravaganza", "The Apple Cart", written in late 1928. It was, in Ervine's view, unexpectedly popular, taking a conservative, monarchist, anti-democratic line that appealed to contemporary audiences. The premiere was in Warsaw in June 1929, and the first British production was two months later, at Sir Barry Jackson's inaugural Malvern Festival. The other eminent creative artist most closely associated with the festival was Sir Edward Elgar, with whom Shaw enjoyed a deep friendship and mutual regard. He described "The Apple Cart" to Elgar as "a scandalous Aristophanic burlesque of democratic politics, with a brief but shocking sex interlude".
During the 1920s Shaw began to lose faith in the idea that society could be changed through Fabian gradualism, and became increasingly fascinated with dictatorial methods. In 1922 he had welcomed Mussolini's accession to power in Italy, observing that amid the "indiscipline and muddle and Parliamentary deadlock", Mussolini was "the right kind of tyrant". Shaw was prepared to tolerate certain dictatorial excesses; Weintraub in his ODNB biographical sketch comments that Shaw's "flirtation with authoritarian inter-war regimes" took a long time to fade, and Beatrice Webb thought he was "obsessed" about Mussolini.
Shaw's enthusiasm for the Soviet Union dated to the early 1920s when he had hailed Lenin as "the one really interesting statesman in Europe". Having turned down several chances to visit, in 1931 he joined a party led by Nancy Astor. The carefully managed trip culminated in a lengthy meeting with Stalin, whom Shaw later described as "a Georgian gentleman" with no malice in him. At a dinner given in his honour, Shaw told the gathering: "I have seen all the 'terrors' and I was terribly pleased by them". In March 1933 Shaw was a co-signatory to a letter in "The Manchester Guardian" protesting at the continuing misrepresentation of Soviet achievements: "No lie is too fantastic, no slander is too stale ... for employment by the more reckless elements of the British press."
Shaw's admiration for Mussolini and Stalin demonstrated his growing belief that dictatorship was the only viable political arrangement. When the Nazi Party came to power in Germany in January 1933, Shaw described Hitler as "a very remarkable man, a very able man", and professed himself proud to be the only writer in England who was "scrupulously polite and just to Hitler". His principal admiration was for Stalin, whose regime he championed uncritically throughout the decade. Shaw saw the 1939 Molotov–Ribbentrop Pact as a triumph for Stalin who, he said, now had Hitler under his thumb.
Shaw's first play of the decade was "Too True to be Good", written in 1931 and premiered in Boston in February 1932. The reception was unenthusiastic. Brooks Atkinson of "The New York Times", commenting that Shaw had "yielded to the impulse to write without having a subject", judged the play a "rambling and indifferently tedious conversation". The correspondent of "The New York Herald Tribune" said that most of the play was "discourse, unbelievably long lectures" and that although the audience enjoyed the play it was bewildered by it.
During the decade Shaw travelled widely and frequently. Most of his journeys were with Charlotte; she enjoyed voyages on ocean liners, and he found peace to write during the long spells at sea. Shaw was given an enthusiastic welcome in South Africa in 1932, despite his strong remarks about the racial divisions of the country. In December 1932 the couple embarked on a round-the-world cruise. In March 1933 they arrived in San Francisco, to begin Shaw's first visit to the US. He had earlier refused to go to "that awful country, that uncivilized place", "unfit to govern itself... illiberal, superstitious, crude, violent, anarchic and arbitrary". He visited Hollywood, with which he was unimpressed, and New York, where he lectured to a capacity audience in the Metropolitan Opera House. Harried by the intrusive attentions of the press, Shaw was glad when his ship sailed from New York harbour. New Zealand, which he and Charlotte visited the following year, struck him as "the best country I've been in"; he urged its people to be more confident and loosen their dependence on trade with Britain. He used the weeks at sea to complete two plays—"The Simpleton of the Unexpected Isles" and "The Six of Calais"—and begin work on a third, "The Millionairess".
Despite his contempt for Hollywood and its aesthetic values, Shaw was enthusiastic about cinema, and in the middle of the decade wrote screenplays for prospective film versions of "Pygmalion" and "Saint Joan". The latter was never made, but Shaw entrusted the rights to the former to the unknown Gabriel Pascal, who produced it at Pinewood Studios in 1938. Shaw was determined that Hollywood should have nothing to do with the film, but was powerless to prevent it from winning one Academy Award ("Oscar"); he described his award for "best-written screenplay" as an insult, coming from such a source. He became the first person to have been awarded both a Nobel Prize and an Oscar. In a 1993 study of the Oscars, Anthony Holden observes that "Pygmalion" was soon spoken of as having "lifted movie-making from illiteracy to literacy".
Shaw's final plays of the 1930s were "Cymbeline Refinished" (1936), "Geneva" (1936) and "In Good King Charles's Golden Days" (1939). The first, a fantasy reworking of Shakespeare, made little impression, but the second, a satire on European dictators, attracted more notice, much of it unfavourable. In particular, Shaw's parody of Hitler as "Herr Battler" was considered mild, almost sympathetic. The third play, an historical conversation piece first seen at Malvern, ran briefly in London in May 1940. James Agate commented that the play contained nothing to which even the most conservative audiences could take exception, and though it was long and lacking in dramatic action only "witless and idle" theatregoers would object. After their first runs none of the three plays were seen again in the West End during Shaw's lifetime.
Towards the end of the decade, both Shaws began to suffer ill health. Charlotte was increasingly incapacitated by Paget's disease of bone, and he developed pernicious anaemia. His treatment, involving injections of concentrated animal liver, was successful, but this breach of his vegetarian creed distressed him and brought down condemnation from militant vegetarians.
Although Shaw's works since "The Apple Cart" had been received without great enthusiasm, his earlier plays were revived in the West End throughout the Second World War, starring such actors as Edith Evans, John Gielgud, Deborah Kerr and Robert Donat. In 1944 nine Shaw plays were staged in London, including "Arms and the Man" with Ralph Richardson, Laurence Olivier, Sybil Thorndike and Margaret Leighton in the leading roles. Two touring companies took his plays all round Britain. The revival in his popularity did not tempt Shaw to write a new play, and he concentrated on prolific journalism. A second Shaw film produced by Pascal, "Major Barbara" (1941), was less successful both artistically and commercially than "Pygmalion", partly because of Pascal's insistence on directing, to which he was unsuited.
Following the outbreak of war on 3 September 1939 and the rapid conquest of Poland, Shaw was accused of defeatism when, in a "New Statesman" article, he declared the war over and demanded a peace conference. Nevertheless, when he became convinced that a negotiated peace was impossible, he publicly urged the neutral United States to join the fight. The London blitz of 1940–41 led the Shaws, both in their mid-eighties, to live full-time at Ayot St Lawrence. Even there they were not immune from enemy air raids, and stayed on occasion with Nancy Astor at her country house, Cliveden. In 1943, the worst of the London bombing over, the Shaws moved back to Whitehall Court, where medical help for Charlotte was more easily arranged. Her condition deteriorated, and she died in September.
Shaw's final political treatise, "Everybody's Political What's What", was published in 1944. Holroyd describes this as "a rambling narrative ... that repeats ideas he had given better elsewhere and then repeats itself". The book sold well—85,000 copies by the end of the year. After Hitler's suicide in May 1945, Shaw approved of the formal condolences offered by the Irish Taoiseach, Éamon de Valera, at the German embassy in Dublin. Shaw disapproved of the postwar trials of the defeated German leaders, as an act of self-righteousness: "We are all potential criminals".
Pascal was given a third opportunity to film Shaw's work with "Caesar and Cleopatra" (1945). It cost three times its original budget and was rated "the biggest financial failure in the history of British cinema". The film was poorly received by British critics, although American reviews were friendlier. Shaw thought its lavishness nullified the drama, and he considered the film "a poor imitation of Cecil B. de Mille".
In 1946, the year of Shaw's ninetieth birthday, he accepted the freedom of Dublin and became the first honorary freeman of the borough of St Pancras, London. In the same year the government asked Shaw informally whether he would accept the Order of Merit. He declined, believing that an author's merit could only be determined by the posthumous verdict of history. 1946 saw the publication, as "The Crime of Imprisonment", of the preface Shaw had written 20 years previously to a study of prison conditions. It was widely praised; a reviewer in the "American Journal of Public Health" considered it essential reading for any student of the American criminal justice system.
Shaw continued to write into his nineties. His last plays were "Buoyant Billions" (1947), his final full-length work; "Farfetched Fables" (1948) a set of six short plays revisiting several of his earlier themes such as evolution; a comic play for puppets, "Shakes versus Shav" (1949), a ten-minute piece in which Shakespeare and Shaw trade insults; and "Why She Would Not" (1950), which Shaw described as "a little comedy", written in one week shortly before his ninety-fourth birthday.
During his later years, Shaw enjoyed tending the gardens at Shaw's Corner. He died at the age of ninety-four of renal failure precipitated by injuries incurred when falling while pruning a tree. He was cremated at Golders Green Crematorium on 6 November 1950. His ashes, mixed with those of Charlotte, were scattered along footpaths and around the statue of Saint Joan in their garden.
Shaw published a collected edition of his plays in 1934, comprising forty-two works. He wrote a further twelve in the remaining sixteen years of his life, mostly one-act pieces. Including eight earlier plays that he chose to omit from his published works, the total is sixty-two.
Shaw's first three full-length plays dealt with social issues. He later grouped them as "Plays Unpleasant". "Widowers' Houses" (1892) concerns the landlords of slum properties, and introduces the first of Shaw's New Women—a recurring feature of later plays. "The Philanderer" (1893) develops the theme of the New Woman, draws on Ibsen, and has elements of Shaw's personal relationships, the character of Julia being based on Jenny Patterson. In a 2003 study Judith Evans describes "Mrs Warren's Profession" (1893) as "undoubtedly the most challenging" of the three Plays Unpleasant, taking Mrs Warren's profession—prostitute and, later, brothel-owner—as a metaphor for a prostituted society.
Shaw followed the first trilogy with a second, published as "Plays Pleasant". "Arms and the Man" (1894) conceals beneath a mock-Ruritanian comic romance a Fabian parable contrasting impractical idealism with pragmatic socialism. The central theme of "Candida" (1894) is a woman's choice between two men; the play contrasts the outlook and aspirations of a Christian Socialist and a poetic idealist. The third of the Pleasant group, "You Never Can Tell" (1896), portrays social mobility, and the gap between generations, particularly in how they approach social relations in general and mating in particular.
The "Three Plays for Puritans"—comprising "The Devil's Disciple" (1896), "Caesar and Cleopatra" (1898) and "Captain Brassbound's Conversion" (1899)—all centre on questions of empire and imperialism, a major topic of political discourse in the 1890s. The three are set, respectively, in 1770s America, Ancient Egypt, and 1890s Morocco. "The Gadfly", an adaptation of the popular novel by Ethel Voynich, was unfinished and unperformed. "The Man of Destiny" (1895) is a short curtain raiser about Napoleon.
Shaw's major plays of the first decade of the twentieth century address individual social, political or ethical issues. "Man and Superman" (1902) stands apart from the others in both its subject and its treatment, giving Shaw's interpretation of creative evolution in a combination of drama and associated printed text. "The Admirable Bashville" (1901), a blank verse dramatisation of Shaw's novel "Cashel Byron's Profession", focuses on the imperial relationship between Britain and Africa. "John Bull's Other Island" (1904), comically depicting the prevailing relationship between Britain and Ireland, was popular at the time but fell out of the general repertoire in later years. "Major Barbara" (1905) presents ethical questions in an unconventional way, confounding expectations that in the depiction of an armaments manufacturer on the one hand and the Salvation Army on the other the moral high ground must invariably be held by the latter. "The Doctor's Dilemma" (1906), a play about medical ethics and moral choices in allocating scarce treatment, was described by Shaw as a tragedy. With a reputation for presenting characters who did not resemble real flesh and blood, he was challenged by Archer to present an on-stage death, and here did so, with a deathbed scene for the anti-hero.
"Getting Married" (1908) and "Misalliance" (1909)—the latter seen by Judith Evans as a companion piece to the former—are both in what Shaw called his "disquisitionary" vein, with the emphasis on discussion of ideas rather than on dramatic events or vivid characterisation. Shaw wrote seven short plays during the decade; they are all comedies, ranging from the deliberately absurd "Passion, Poison, and Petrifaction" (1905) to the satirical "Press Cuttings" (1909).
In the decade from 1910 to the aftermath of the First World War Shaw wrote four full-length plays, the third and fourth of which are among his most frequently staged works. "Fanny's First Play" (1911) continues his earlier examinations of middle-class British society from a Fabian viewpoint, with additional touches of melodrama and an epilogue in which theatre critics discuss the play. "Androcles and the Lion" (1912), which Shaw began writing as a play for children, became a study of the nature of religion and how to put Christian precepts into practice. "Pygmalion" (1912) is a Shavian study of language and speech and their importance in society and in personal relationships. To correct the impression left by the original performers that the play portrayed a romantic relationship between the two main characters Shaw rewrote the ending to make it clear that the heroine will marry another, minor character. Shaw's only full-length play from the war years is "Heartbreak House" (1917), which in his words depicts "cultured, leisured Europe before the war" drifting towards disaster. Shaw named Shakespeare ("King Lear") and Chekhov ("The Cherry Orchard") as important influences on the piece, and critics have found elements drawing on Congreve ("The Way of the World") and Ibsen ("The Master Builder").
The short plays range from genial historical drama in "The Dark Lady of the Sonnets" and "Great Catherine" (1910 and 1913) to a study of polygamy in "Overruled"; three satirical works about the war ("The Inca of Perusalem", "O'Flaherty V.C." and "Augustus Does His Bit", 1915–16); a piece that Shaw called "utter nonsense" ("The Music Cure", 1914) and a brief sketch about a "Bolshevik empress" ("Annajanska", 1917).
"Saint Joan" (1923) drew widespread praise both for Shaw and for Sybil Thorndike, for whom he wrote the title role and who created the part in Britain. In the view of the commentator Nicholas Grene, Shaw's Joan, a "no-nonsense mystic, Protestant and nationalist before her time" is among the 20th century's classic leading female roles. "The Apple Cart" (1929) was Shaw's last popular success. He gave both that play and its successor, "Too True to Be Good" (1931), the subtitle "A political extravaganza", although the two works differ greatly in their themes; the first presents the politics of a nation (with a brief royal love-scene as an interlude) and the second, in Judith Evans's words, "is concerned with the social mores of the individual, and is nebulous." Shaw's plays of the 1930s were written in the shadow of worsening national and international political events. Once again, with "On the Rocks" (1933) and "The Simpleton of the Unexpected Isles" (1934), a political comedy with a clear plot was followed by an introspective drama. The first play portrays a British prime minister considering, but finally rejecting, the establishment of a dictatorship; the second is concerned with polygamy and eugenics and ends with the Day of Judgement.
"The Millionairess" (1934) is a farcical depiction of the commercial and social affairs of a successful businesswoman. "Geneva" (1936) lampoons the feebleness of the League of Nations compared with the dictators of Europe. "In Good King Charles's Golden Days" (1939), described by Weintraub as a warm, discursive high comedy, also depicts authoritarianism, but less satirically than "Geneva". As in earlier decades, the shorter plays were generally comedies, some historical and others addressing various political and social preoccupations of the author. Ervine writes of Shaw's later work that although it was still "astonishingly vigorous and vivacious" it showed unmistakable signs of his age. "The best of his work in this period, however, was full of wisdom and the beauty of mind often displayed by old men who keep their wits about them."
Shaw's collected musical criticism, published in three volumes, runs to more than 2,700 pages. It covers the British musical scene from 1876 to 1950, but the core of the collection dates from his six years as music critic of "The Star" and "The World" in the late 1880s and early 1890s. In his view music criticism should be interesting to everyone rather than just the musical élite, and he wrote for the non-specialist, avoiding technical jargon—"Mesopotamian words like 'the dominant of D major'". He was fiercely partisan in his columns, promoting the music of Wagner and decrying that of Brahms and those British composers such as Stanford and Parry whom he saw as Brahmsian. He campaigned against the prevailing fashion for performances of Handel oratorios with huge amateur choirs and inflated orchestration, calling for "a chorus of twenty capable artists". He railed against opera productions unrealistically staged or sung in languages the audience did not speak.
In Shaw's view, the London theatres of the 1890s presented too many revivals of old plays and not enough new work. He campaigned against "melodrama, sentimentality, stereotypes and worn-out conventions". As a music critic he had frequently been able to concentrate on analysing new works, but in the theatre he was often obliged to fall back on discussing how various performers tackled well-known plays. In a study of Shaw's work as a theatre critic, E. J. West writes that Shaw "ceaselessly compared and contrasted artists in interpretation and in technique". Shaw contributed more than 150 articles as theatre critic for "The Saturday Review", in which he assessed more than 212 productions. He championed Ibsen's plays when many theatregoers regarded them as outrageous, and his 1891 book "The Quintessence of Ibsenism" remained a classic throughout the twentieth century. Of contemporary dramatists writing for the West End stage he rated Oscar Wilde above the rest: "... our only thorough playwright. He plays with everything: with wit, with philosophy, with drama, with actors and audience, with the whole theatre". Shaw's collected criticisms were published as "Our Theatres in the Nineties" in 1932.
Shaw maintained a provocative and frequently self-contradictory attitude to Shakespeare (whose name he insisted on spelling "Shakespear"). Many found him difficult to take seriously on the subject; Duff Cooper observed that by attacking Shakespeare, "it is Shaw who appears a ridiculous pigmy shaking his fist at a mountain." Shaw was, nevertheless, a knowledgeable Shakespearian, and in an article in which he wrote, "With the single exception of Homer, there is no eminent writer, not even Sir Walter Scott, whom I can despise so entirely as I despise Shakespear when I measure my mind against his," he also said, "But I am bound to add that I pity the man who cannot enjoy Shakespear. He has outlasted thousands of abler thinkers, and will outlast a thousand more". Shaw had two regular targets for his more extreme comments about Shakespeare: undiscriminating "Bardolaters", and actors and directors who presented insensitively cut texts in over-elaborate productions. He was continually drawn back to Shakespeare, and wrote three plays with Shakespearean themes: "The Dark Lady of the Sonnets", "Cymbeline Refinished" and "Shakes versus Shav". In a 2001 analysis of Shaw's Shakespearian criticisms, Robert Pierce concludes that Shaw, who was no academic, saw Shakespeare's plays—like all theatre—from an author's practical point of view: "Shaw helps us to get away from the Romantics' picture of Shakespeare as a titanic genius, one whose art cannot be analyzed or connected with the mundane considerations of theatrical conditions and profit and loss, or with a specific staging and cast of actors."
Shaw's political and social commentaries were published variously in Fabian tracts, in essays, in two full-length books, in innumerable newspaper and journal articles and in prefaces to his plays. The majority of Shaw's Fabian tracts were published anonymously, representing the voice of the society rather than of Shaw, although the society's secretary Edward Pease later confirmed Shaw's authorship. According to Holroyd, the business of the early Fabians, mainly under the influence of Shaw, was to "alter history by rewriting it". Shaw's talent as a pamphleteer was put to immediate use in the production of the society's manifesto—after which, says Holroyd, he was never again so succinct.
After the turn of the twentieth century, Shaw increasingly propagated his ideas through the medium of his plays. An early critic, writing in 1904, observed that Shaw's dramas provided "a pleasant means" of proselytising his socialism, adding that "Mr Shaw's views are to be sought especially in the prefaces to his plays". After loosening his ties with the Fabian movement in 1911, Shaw's writings were more personal and often provocative; his response to the furore following the issue of "Common Sense About the War" in 1914, was to prepare a sequel, "More Common Sense About the War". In this, he denounced the pacifist line espoused by Ramsay MacDonald and other socialist leaders, and proclaimed his readiness to shoot all pacifists rather than cede them power and influence. On the advice of Beatrice Webb, this pamphlet remained unpublished.
"The Intelligent Woman's Guide", Shaw's main political treatise of the 1920s, attracted both admiration and criticism. MacDonald considered it the world's most important book since the Bible; Harold Laski thought its arguments outdated and lacking in concern for individual freedoms. Shaw's increasing flirtation with dictatorial methods is evident in many of his subsequent pronouncements. A "New York Times" report dated 10 December 1933 quoted a recent Fabian Society lecture in which Shaw had praised Hitler, Mussolini and Stalin: "[T]hey are trying to get something done, [and] are adopting methods by which it is possible to get something done". As late as the Second World War, in "Everybody's Political What's What", Shaw blamed the Allies' "abuse" of their 1918 victory for the rise of Hitler, and hoped that, after defeat, the Führer would escape retribution "to enjoy a comfortable retirement in Ireland or some other neutral country". These sentiments, according to the Irish philosopher-poet Thomas Duddy, "rendered much of the Shavian outlook passé and contemptible".
"Creative evolution", Shaw's version of the new science of eugenics, became an increasing theme in his political writing after 1900. He introduced his theories in "The Revolutionist's Handbook" (1903), an appendix to "Man and Superman", and developed them further during the 1920s in "Back to Methuselah". A 1946 "Life" magazine article observed that Shaw had "always tended to look at people more as a biologist than as an artist". By 1933, in the preface to "On the Rocks", he was writing that "if we desire a certain type of civilization and culture we must exterminate the sort of people who do not fit into it"; critical opinion is divided on whether this was intended as irony. In an article in the American magazine "Liberty" in September 1938, Shaw included the statement: "There are many people in the world who ought to be liquidated". Many commentators assumed that such comments were intended as a joke, although in the worst possible taste. Otherwise, "Life" magazine concluded, "this silliness can be classed with his more innocent bad guesses".
Shaw's fiction-writing was largely confined to the five unsuccessful novels written in the period 1879–1885. "Immaturity" (1879) is a semi-autobiographical portrayal of mid-Victorian England, Shaw's "own "David Copperfield"" according to Weintraub. "The Irrational Knot" (1880) is a critique of conventional marriage, in which Weintraub finds the characterisations lifeless, "hardly more than animated theories". Shaw was pleased with his third novel, "Love Among the Artists" (1881), feeling that it marked a turning point in his development as a thinker, although he had no more success with it than with its predecessors. "Cashel Byron's Profession" (1882) is, says Weintraub, an indictment of society which anticipates Shaw's first full-length play, "Mrs Warren's Profession". Shaw later explained that he had intended "An Unsocial Socialist" as the first section of a monumental depiction of the downfall of capitalism. Gareth Griffith, in a study of Shaw's political thought, sees the novel as an interesting record of conditions, both in society at large and in the nascent socialist movement of the 1880s.
Shaw's only subsequent fiction of any substance was his 1932 novella "The Adventures of the Black Girl in Her Search for God", written during a visit to South Africa in 1932. The eponymous girl, intelligent, inquisitive, and converted to Christianity by insubstantial missionary teaching, sets out to find God, on a journey that after many adventures and encounters, leads her to a secular conclusion. The story, on publication, offended some Christians and was banned in Ireland by the Board of Censors.
Shaw was a prolific correspondent throughout his life. His letters, edited by Dan H. Laurence, were published between 1965 and 1988. Shaw once estimated his letters would occupy twenty volumes; Laurence commented that, unedited, they would fill many more. Shaw wrote more than a quarter of a million letters, of which about ten per cent have survived; 2,653 letters are printed in Laurence's four volumes. Among Shaw's many regular correspondents were his childhood friend Edward McNulty; his theatrical colleagues (and "amitiés amoureuses") Mrs Patrick Campbell and Ellen Terry; writers including Lord Alfred Douglas, H. G. Wells and G. K. Chesterton; the boxer Gene Tunney; the nun Laurentia McLachlan; and the art expert Sydney Cockerell. In 2007 a 316-page volume consisting entirely of Shaw's letters to "The Times" was published.
Shaw's diaries for 1885–1897, edited by Weintraub, were published in two volumes, with a total of 1,241 pages, in 1986. Reviewing them, the Shaw scholar Fred Crawford wrote: "Although the primary interest for Shavians is the material that supplements what we already know about Shaw's life and work, the diaries are also valuable as a historical and sociological document of English life at the end of the Victorian age." After 1897, pressure of other writing led Shaw to give up keeping a diary.
Through his journalism, pamphlets and occasional longer works, Shaw wrote on many subjects. His range of interest and enquiry included vivisection, vegetarianism, religion, language, cinema and photography, on all of which he wrote and spoke copiously. Collections of his writings on these and other subjects were published, mainly after his death, together with volumes of "wit and wisdom" and general journalism.
Despite the many books written about him (Holroyd counts 80 by 1939) Shaw's autobiographical output, apart from his diaries, was relatively slight. He gave interviews to newspapers—"GBS Confesses", to "The Daily Mail" in 1904 is an example—and provided sketches to would-be biographers whose work was rejected by Shaw and never published. In 1939 Shaw drew on these materials to produce "Shaw Gives Himself Away", a miscellany which, a year before his death, he revised and republished as "Sixteen Self Sketches" (there were seventeen). He made it clear to his publishers that this slim book was in no sense a full autobiography.
Throughout his lifetime Shaw professed many beliefs, often contradictory. This inconsistency was partly an intentional provocation—the Spanish scholar-statesman Salvador de Madariaga describes Shaw as "a pole of negative electricity set in a people of positive electricity". In one area at least Shaw was constant: in his lifelong refusal to follow normal English forms of spelling and punctuation. He favoured archaic spellings such as "shew" for "show"; he dropped the "u" in words like "honour" and "favour"; and wherever possible he rejected the apostrophe in contractions such as "won't" or "that's". In his will, Shaw ordered that, after some specified legacies, his remaining assets were to form a trust to pay for fundamental reform of the English alphabet into a phonetic version of forty letters. Though Shaw's intentions were clear, his drafting was flawed, and the courts initially ruled the intended trust void. A later out-of-court agreement provided a sum of £8,300 for spelling reform; the bulk of his fortune went to the residuary legatees—the British Museum, the Royal Academy of Dramatic Art and the National Gallery of Ireland. Most of the £8,300 went on a special phonetic edition of "Androcles and the Lion" in the Shavian alphabet, published in 1962 to a largely indifferent reception.
Shaw's views on religion and Christianity were less consistent. Having in his youth proclaimed himself an atheist, in middle age he explained this as a reaction against the Old Testament image of a vengeful Jehovah. By the early twentieth century, he termed himself a "mystic", although Gary Sloan, in an essay on Shaw's beliefs, disputes his credentials as such. In 1913 Shaw declared that he was not religious "in the sectarian sense", aligning himself with Jesus as "a person of no religion". In the preface (1915) to "Androcles and the Lion", Shaw asks "Why not give Christianity a chance?" contending that Britain's social order resulted from the continuing choice of Barabbas over Christ. In a broadcast just before the Second World War, Shaw invoked the Sermon on the Mount, "a very moving exhortation, and it gives you one first-rate tip, which is to do good to those who despitefully use you and persecute you". In his will, Shaw stated that his "religious convictions and scientific views cannot at present be more specifically defined than as those of a believer in creative evolution". He requested that no one should imply that he accepted the beliefs of any specific religious organisation, and that no memorial to him should "take the form of a cross or any other instrument of torture or symbol of blood sacrifice".
Shaw espoused racial equality, and inter-marriage between people of different races. Despite his expressed wish to be fair to Hitler, he called anti-Semitism "the hatred of the lazy, ignorant fat-headed Gentile for the pertinacious Jew who, schooled by adversity to use his brains to the utmost, outdoes him in business". In "The Jewish Chronicle" he wrote in 1932, "In every country you can find rabid people who have a phobia against Jews, Jesuits, Armenians, Negroes, Freemasons, Irishmen, or simply foreigners as such. Political parties are not above exploiting these fears and jealousies."
In 1903 Shaw joined in a controversy about vaccination against smallpox. He called vaccination "a peculiarly filthy piece of witchcraft"; in his view immunisation campaigns were a cheap and inadequate substitute for a decent programme of housing for the poor, which would, he declared, be the means of eradicating smallpox and other infectious diseases. Less contentiously, Shaw was keenly interested in transport; Laurence observed in 1992 a need for a published study of Shaw's interest in "bicycling, motorbikes, automobiles, and planes, climaxing in his joining the Interplanetary Society in his nineties". Shaw published articles on travel, took photographs of his journeys, and submitted notes to the Royal Automobile Club.
Shaw strove throughout his adult life to be referred to as "Bernard Shaw" rather than "George Bernard Shaw", but confused matters by continuing to use his full initials—G.B.S.—as a by-line, and often signed himself "G.Bernard Shaw". He left instructions in his will that his executor (the Public Trustee) was to license publication of his works only under the name Bernard Shaw. Shaw scholars including Ervine, Judith Evans, Holroyd, Laurence and Weintraub, and many publishers have respected Shaw's preference, although the Cambridge University Press was among the exceptions with its 1988 "Cambridge Companion to George Bernard Shaw".
Shaw did not found a school of dramatists as such, but Crawford asserts that today "we recognise [him] as second only to Shakespeare in the British theatrical tradition ... the proponent of the theater of ideas" who struck a death-blow to 19th-century melodrama. According to Laurence, Shaw pioneered "intelligent" theatre, in which the audience was required to think, thereby paving the way for the new breeds of twentieth-century playwrights from Galsworthy to Pinter.
Crawford lists numerous playwrights whose work owes something to that of Shaw. Among those active in Shaw's lifetime he includes Noël Coward, who based his early comedy "The Young Idea" on "You Never Can Tell" and continued to draw on the older man's works in later plays. T. S. Eliot, by no means an admirer of Shaw, admitted that the epilogue of "Murder in the Cathedral", in which Becket's slayers explain their actions to the audience, might have been influenced by "Saint Joan". The critic Eric Bentley comments that Eliot's later play "The Confidential Clerk" "had all the earmarks of Shavianism ... without the merits of the real Bernard Shaw". Among more recent British dramatists, Crawford marks Tom Stoppard as "the most Shavian of contemporary playwrights"; Shaw's "serious farce" is continued in the works of Stoppard's contemporaries Alan Ayckbourn, Henry Livings and Peter Nichols.
Shaw's influence crossed the Atlantic at an early stage. Bernard Dukore notes that he was successful as a dramatist in America ten years before achieving comparable success in Britain. Among many American writers professing a direct debt to Shaw, Eugene O'Neill became an admirer at the age of seventeen, after reading "The Quintessence of Ibsenism". Other Shaw-influenced American playwrights mentioned by Dukore are Elmer Rice, for whom Shaw "opened doors, turned on lights, and expanded horizons"; William Saroyan, who empathised with Shaw as "the embattled individualist against the philistines"; and S. N. Behrman, who was inspired to write for the theatre after attending a performance of "Caesar and Cleopatra": "I thought it would be agreeable to write plays like that".
Assessing Shaw's reputation in a 1976 critical study, T. F. Evans described Shaw as unchallenged in his lifetime and since as the leading English-language dramatist of the twentieth century, and as a master of prose style. The following year, in a contrary assessment, the playwright John Osborne castigated "The Guardian"s theatre critic Michael Billington for referring to Shaw as "the greatest British dramatist since Shakespeare", retorting that Shaw "is the most fraudulent, inept writer of Victorian melodramas ever to gull a timid critic or fool a dull public". Despite this hostility, Crawford sees the influence of Shaw in some of Osborne's plays, and concludes that though the latter's work is neither imitative nor derivative, these affinities are sufficient to classify Osborne as an inheritor of Shaw.
In a 1983 study, R. J. Kaufmann suggests that Shaw was a key forerunner—"godfather, if not actually finicky paterfamilias"—of the Theatre of the Absurd. Two further aspects of Shaw's theatrical legacy are noted by Crawford: his opposition to stage censorship, which was finally ended in 1968, and his efforts which extended over many years to establish a National Theatre. Shaw's short 1910 play "The Dark Lady of the Sonnets", in which Shakespeare pleads with Queen Elizabeth I for the endowment of a state theatre, was part of this campaign.
Writing in "The New Statesman" in 2012, Daniel Janes commented that Shaw's reputation had declined by the time of his 150th anniversary in 2006 but had since recovered considerably. In Janes's view, the many current revivals of Shaw's major works showed the playwright's "almost unlimited relevance to our times". In the same year, Mark Lawson wrote in "The Guardian" that Shaw's moral concerns engaged present-day audiences, and made him—like his model, Ibsen—one of the most popular playwrights in contemporary British theatre.
The Shaw Festival in Niagara-on-the-Lake, Ontario, Canada is the second largest repertory theatre company in North America. It produces plays by Shaw, plays written during his lifetime, and some contemporary works. The Gingold Theatrical Group, founded in 2006, presents works by Shaw and others in New York City that feature the humanitarian ideals his work promoted. It became the first theatre group to present all of Shaw's stage work through its monthly concert series "Project Shaw".
In the 1940s the author Harold Nicolson advised the National Trust not to accept the bequest of Shaw's Corner, predicting that Shaw would be totally forgotten within fifty years. In the event, Shaw's broad cultural legacy, embodied in the widely used term "Shavian", has endured and is nurtured by Shaw Societies in various parts of the world. The original society was founded in London in 1941 and survives; it organises meetings and events, and publishes a regular bulletin "The Shavian". The Shaw Society of America began in June 1950; it foundered in the 1970s but its journal, adopted by Penn State University Press, continued to be published as "Shaw: The Annual of Bernard Shaw Studies" until 2004. A second American organisation, founded in 1951 as "The Bernard Shaw Society", remains active. More recent societies have been established in Japan and India.
Besides his collected music criticism, Shaw has left a varied musical legacy, not all of it of his choosing. Despite his dislike of having his work adapted for the musical theatre ("my plays set themselves to a verbal music of their own") two of his plays were turned into musical comedies: "Arms and the Man" was the basis of "The Chocolate Soldier" in 1908, with music by Oscar Straus, and "Pygmalion" was adapted in 1956 as "My Fair Lady" with book and lyrics by Alan Jay Lerner and music by Frederick Loewe. Although he had a high regard for Elgar, Shaw turned down the composer's request for an opera libretto, but played a major part in persuading the BBC to commission Elgar's Third Symphony, and was the dedicatee of "The Severn Suite" (1930).
The substance of Shaw's political legacy is uncertain. In 1921 Shaw's erstwhile collaborator William Archer, in a letter to the playwright, wrote: "I doubt if there is any case of a man so widely read, heard, seen, and known as yourself, who has produced so little effect on his generation." Margaret Cole, who considered Shaw the greatest writer of his age, professed never to have understood him. She thought he worked "immensely hard" at politics, but essentially, she surmises, it was for fun—"the fun of a brilliant artist". After Shaw's death, Pearson wrote: "No one since the time of Tom Paine has had so definite an influence on the social and political life of his time and country as Bernard Shaw."
In its obituary tribute to Shaw, "The Times Literary Supplement" concluded:
Galvanization
Galvanization or galvanizing (also spelled galvanisation or galvanising) is the process of applying a protective zinc coating to steel or iron to prevent rusting. The most common method is hot-dip galvanizing, in which the parts are submerged in a bath of molten zinc.
Galvanizing protects the underlying iron or steel in two main ways: the zinc coating acts as a physical barrier between the steel and the environment, and, because zinc is more electrochemically active than iron, it also acts as a sacrificial anode, corroding preferentially even where the coating is scratched.
The earliest known example of galvanized iron was encountered by Europeans on 17th-century Indian armour in the Royal Armouries Museum collection.
The etymology of galvanisation is via French from the name of the Italian scientist Luigi Galvani. However, this is an obscure back-formation: Galvani had no involvement in zinc coating.
The earliest use of the term was in late 18th-century scientific research and medical practice by Galvani and meant the stimulation of a muscle by the application of an electric current. Although Galvani was the first to study this, it was Alessandro Volta who then developed a better understanding of its cause and effect. Galvani's explanation of 'animal electricity' as a cause was replaced by Volta's invention of the electric battery and its use to stimulate animal tissue. Despite the superseding of his experimental results, it was Galvani's name rather than Volta's which became associated with the field.
The term "galvanized" continues to be used metaphorically of any stimulus which results in activity by a person or group of people, such as to "galvanize into action" meaning stimulating a complacent person or group to take action.
In modern usage, the term "galvanizing" has largely come to be associated with zinc coatings, to the exclusion of other metals. Galvanic paint, a precursor to hot-dip galvanizing, was patented by Stanislas Sorel, of Paris, in December 1837, as an adoption of a term from a highly fashionable field of contemporary science, despite having no evident relation to it.
Hot-dip galvanizing deposits a thick, robust layer of zinc-iron alloys on the surface of a steel item. In the case of automobile bodies, where additional decorative coatings of paint will be applied, a thinner form of galvanizing is applied by electrogalvanizing. The hot-dip process generally does not reduce strength on a measurable scale, with the exception of high-strength steels (>1100 MPa), where hydrogen embrittlement can become a problem. This deficiency is a consideration affecting the manufacture of wire rope and other highly stressed products.
The protection provided by hot-dip galvanizing is insufficient for products that will be constantly exposed to corrosive materials such as acids, including acid rain in outdoor uses. For these applications, more expensive stainless steel is preferred. Some nails made today are galvanized. Nonetheless, electroplating is used on its own for many outdoor applications because it is cheaper than hot-dip zinc coating and looks good when new. Another reason not to use hot-dip zinc coating is that for bolts and nuts of size M10 (US 3/8") or smaller, the thick hot-dipped coating fills in too much of the threads, which reduces strength (because the dimension of the steel prior to coating must be reduced for the fasteners to fit together). This means that for cars, bicycles, and many other light mechanical products, the practical alternative to electroplating bolts and nuts is not hot-dip zinc coating, but making the fasteners from stainless steel or titanium.
The size of crystallites (grains) in galvanized coatings is a visible and aesthetic feature, known as "spangle". By varying the number of particles added for heterogeneous nucleation and the rate of cooling in a hot-dip process, the spangle can be adjusted from an apparently uniform surface (crystallites too small to see with the naked eye) to grains several centimetres wide. Visible crystallites are rare in other engineering materials, even though they are usually present.
Thermal diffusion galvanizing, or Sherardizing, provides a zinc diffusion coating on iron- or copper-based materials. Parts and zinc powder are tumbled in a sealed rotating drum; at elevated temperatures, zinc diffuses into the substrate to form a zinc alloy. Surface preparation of the goods can be carried out in advance by shot blasting. The process is also known as "dry galvanizing", because no liquids are involved; this can avoid possible problems caused by hydrogen embrittlement. The dull-grey crystal structure of the zinc diffusion coating has good adhesion to paint, powder coatings, or rubber. It is a preferred method for coating small, complex-shaped metals, and for smoothing rough surfaces on items formed with sintered metal.
Although galvanizing will inhibit attack of the underlying steel, rusting will be inevitable after some decades' exposure to weather, especially if exposed to acidic conditions. For example, corrugated iron sheet roofing will start to degrade within a few years despite the protective action of the zinc coating. Marine and salty environments also lower the lifetime of galvanized iron because the high electrical conductivity of sea water increases the rate of corrosion, primarily through converting the solid zinc to soluble zinc chloride which simply washes away. Galvanized car frames exemplify this; they corrode much faster in cold environments due to road salt, though they will last longer than unprotected steel.
Galvanized steel can last for many decades if other supplementary measures are maintained, such as paint coatings and additional sacrificial anodes. The rate of corrosion in non-salty environments is driven mainly by the level of sulfur dioxide in the air. In the most benign natural environments, such as inland low-population areas, galvanized steel can last without rust for over 100 years.
This is the most common use for galvanized metal, and hundreds of thousands of tons of steel products are galvanized annually worldwide. In developed countries most larger cities have several galvanizing factories, and many items of steel manufacture are galvanized for protection. Typically these include: street furniture, building frameworks, balconies, verandahs, staircases, ladders, walkways, and more. Hot dip galvanized steel is also used for making steel frames as a basic construction material for steel frame buildings.
In the early 20th century, galvanized piping replaced previously-used cast iron and lead in cold-water plumbing. Typically, galvanized piping rusts from the inside out, building up layers of plaque on the inside of the piping, causing both water pressure problems and eventual pipe failure. These plaques can flake off, leading to visible impurities in water and a slight metallic taste. The life expectancy of galvanized piping is about 70 years, but it may vary by region due to impurities in the water supply and the proximity of electrical grids for which interior piping acts as a pathway (the flow of electricity can accelerate chemical corrosion). Pipe longevity also depends on the thickness of zinc in the original galvanizing, which ranges on a scale from G40 to G210, and whether the pipe was galvanized on both the inside and outside, or just the outside.
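The G40-to-G210 scale above can be made concrete with a small back-of-the-envelope conversion. This sketch assumes the common reading of such designations (as in ASTM A653), where the G number gives the total coating weight on both sides in hundredths of an ounce per square foot (so G90 means 0.90 oz/ft²); it then converts that weight to an approximate zinc thickness per side using the density of zinc:

```python
# Illustrative sketch: approximate zinc coating thickness from a "G" designation.
# Assumption: G number = total coating weight on BOTH sides, in hundredths
# of an ounce per square foot (e.g. G90 -> 0.90 oz/ft^2), as in ASTM A653.

OZ_PER_FT2_TO_G_PER_CM2 = 28.35 / 929.03  # 1 oz/ft^2 expressed in g/cm^2
ZINC_DENSITY_G_PER_CM3 = 7.14             # density of zinc

def coating_thickness_um(g_designation: int) -> float:
    """Approximate zinc thickness per side, in micrometres."""
    total_oz_ft2 = g_designation / 100.0          # e.g. 90 -> 0.90 oz/ft^2
    per_side_g_cm2 = (total_oz_ft2 / 2) * OZ_PER_FT2_TO_G_PER_CM2
    return per_side_g_cm2 / ZINC_DENSITY_G_PER_CM3 * 1e4  # cm -> micrometres

for g in (40, 90, 210):
    print(f"G{g}: ~{coating_thickness_um(g):.0f} micrometres of zinc per side")
```

On these assumptions, G40 works out to roughly 9 µm of zinc per side and G210 to roughly 45 µm, which illustrates why a thicker original coating extends pipe life.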
Since World War II, copper and plastic piping have replaced galvanized piping for interior drinking water service, but galvanized steel pipes are still used in outdoor applications requiring steel's superior mechanical strength.
The use of galvanized pipes lends some truth to the urban myth that water purity in outdoor water faucets is lower, but the actual impurities (iron, zinc, calcium) are harmless.
The presence of galvanized piping detracts from the appraised value of housing stock because piping can fail, increasing the risk of water damage. Galvanized piping will eventually need to be replaced if housing stock is to outlast a 50-to-70-year life expectancy, and some jurisdictions require galvanized piping to be replaced before sale. One option to extend the life expectancy of existing galvanized piping is to line it with an epoxy resin.
Golden Rule
The Golden Rule is the principle of treating others as you want to be treated. It is a maxim that is found in many religions and cultures. It can be considered an ethic of reciprocity in some religions, although different religions treat it differently.
The maxim may appear as a positive or negative injunction governing conduct: in its positive form, "treat others as you would like to be treated"; in its negative (prohibitive) form, "do not treat others in ways you would not like to be treated".
The idea dates at least to early Confucian times (551–479 BC), according to Rushworth Kidder, who notes that the concept appears prominently in Buddhism, Christianity, Hinduism, Judaism, Taoism, Zoroastrianism, and "the rest of the world's major religions". 143 leaders of the world's major faiths endorsed the Golden Rule as part of the 1993 "Declaration Toward a Global Ethic". According to Greg M. Epstein, it is "a concept that essentially no religion misses entirely", but belief in God is not necessary to endorse it. Simon Blackburn also states that the Golden Rule can be "found in some form in almost every ethical tradition".
The term "Golden Rule", or "Golden law", began to be used widely in the early 17th century in Britain by Anglican theologians and preachers; the earliest known usage is that of Anglicans Charles Gibbon and Thomas Jackson in 1604.
Possibly the earliest affirmation of the maxim of reciprocity, reflecting the ancient Egyptian goddess Ma'at, appears in the story of The Eloquent Peasant, which dates to the Middle Kingdom (c. 2040–1650 BC): "Now this is the command: Do to the doer to make him do." This proverb embodies the "do ut des" principle. A Late Period (c. 664–323 BC) papyrus contains an early negative affirmation of the Golden Rule: "That which you hate to be done to you, do not do to another."
In the "Mahābhārata", the ancient epic of India, there is a discourse in which the sage Brihaspati tells the king Yudhishthira the following:
The Mahābhārata is usually dated to the period between 400 BC and 400 AD.
In Chapter 32 in the Part on Virtue of the Tirukkuṛaḷ (c. 1st century BC), Valluvar says: "Do not do to others what you know has hurt yourself" (kural 316); "Why does one hurt others knowing what it is to be hurt?" (kural 318). He furthermore opined that it is the determination of the spotless (virtuous) not to do evil, even in return, to those who have cherished enmity and done them evil (kural 312). The (proper) punishment to those who have done evil (to you), is to put them to shame by showing them kindness, in return and to forget both the evil and the good done on both sides (kural 314).
The Golden Rule in its prohibitive (negative) form was a common principle in ancient Greek philosophy. Examples of the general concept include:
The Pahlavi Texts of Zoroastrianism (c. 300 BC–1000 AD) were an early source for the Golden Rule: "That nature alone is good which refrains from doing to another whatsoever is not good for itself." Dadisten-I-dinik, 94,5, and "Whatever is disagreeable to yourself do not do unto others." Shayast-na-Shayast 13:29
Seneca the Younger (c. 4 BC–65 AD), a practitioner of Stoicism (c. 300 BC–200 AD), expressed the Golden Rule in his essay regarding the treatment of slaves: "Treat your inferior as you would wish your superior to treat you."
A rule of altruistic reciprocity was stated positively in a well-known Torah verse:
Hillel the Elder (c. 110 BC – 10 AD) used this verse as a most important message of the Torah for his teachings. Once, he was challenged by a gentile who asked to be converted under the condition that the Torah be explained to him while he stood on one foot. Hillel accepted him as a candidate for conversion to Judaism but, drawing on that verse, briefed the man:
Hillel recognized brotherly love as the fundamental principle of Jewish ethics. Rabbi Akiva agreed and suggested that the principle of love must have its foundation in Genesis chapter 1, which teaches that all men are the offspring of Adam, who was made in the image of God (Sifra, Ḳedoshim, iv.; Yer. Ned. ix. 41c; Genesis Rabba 24). According to Jewish rabbinic literature, the first man Adam represents the "unity of mankind". This is echoed in the modern preamble of the Universal Declaration of Human Rights. It is also taught that Adam is last in order according to the evolutionary character of God's creation:

Why was only a single specimen of man created first? To teach us that he who destroys a single soul destroys a whole world and that he who saves a single soul saves a whole world; furthermore, so no race or class may claim a nobler ancestry, saying, 'Our father was born first'; and, finally, to give testimony to the greatness of the Lord, who caused the wonderful diversity of mankind to emanate from one type. And why was Adam created last of all beings? To teach him humility; for if he be overbearing, let him remember that the little fly preceded him in the order of creation.
The Jewish Publication Society's edition of Leviticus states: "Thou shalt not hate thy brother in thy heart; thou shalt surely rebuke thy neighbour, and not bear sin because of him. Thou shalt not take vengeance, nor bear any grudge against the children of thy people, but thou shalt love thy neighbour as thyself: I am the LORD."

This Torah verse represents one of several versions of the Golden Rule, which itself appears in various forms, positive and negative. It is the earliest written version of that concept in a positive form.
At the turn of the eras, the Jewish rabbis were discussing the scope of the meaning of Leviticus 19:18 and 19:34 extensively:
Commentators included foreigners (= Samaritans), proselytes (= 'strangers who reside with you') (Rabbi Akiva, bQuid 75b), or Jews (Rabbi Gamaliel, yKet 3, 1; 27a) within the scope of the meaning.
On the verse, "Love your fellow as yourself", the classic commentator Rashi quotes from Torat Kohanim, an early Midrashic text regarding the famous dictum of Rabbi Akiva: "Love your fellow as yourself – Rabbi Akiva says this is a great principle of the Torah."
Israel's postal service quoted from the previous Leviticus verse when it commemorated the Universal Declaration of Human Rights on a 1958 postage stamp.
The "Golden Rule" of Leviticus 19:18 was quoted by Jesus of Nazareth and described by him as the second great commandment. The common English phrasing is "Do unto others as you would have them do unto you". A similar form of the phrase appeared in a Catholic catechism around 1567 (certainly in the reprint of 1583).
The Golden Rule is stated positively in the Old Testament in Leviticus 19:18 ("Thou shalt not avenge, nor bear any grudge against the children of thy people, but thou shalt love thy neighbour as thyself: I am the LORD."; see also Great Commandment) and Leviticus 19:34 ("But treat them just as you treat your own citizens. Love foreigners as you love yourselves, because you were foreigners one time in Egypt. I am the Lord your God.").
The Old Testament Deuterocanonical books of Tobit and Sirach, accepted as part of the Scriptural canon by Catholic Church, Eastern Orthodoxy, and the Non-Chalcedonian Churches, express a negative form of the golden rule:
Two passages in the New Testament quote Jesus of Nazareth espousing the positive form of the Golden Rule: "Therefore all things whatsoever ye would that men should do to you, do ye even so to them: for this is the law and the prophets" (Matthew 7:12), and "And as ye would that men should do to you, do ye also to them likewise" (Luke 6:31).
A similar passage, a parallel to the Great Commandment, is
The passage in the book of Luke then continues with Jesus answering the question, "Who is my neighbor?", by telling the parable of the Good Samaritan, indicating that "your neighbor" is anyone in need. This extends to all, including those who are generally considered hostile.
Jesus' teaching goes beyond the negative formulation of not doing what one would not like done to themselves, to the positive formulation of actively doing good to another that, if the situations were reversed, one would desire that the other would do for them. This formulation, as indicated in the parable of the Good Samaritan, emphasizes the needs for positive action that brings benefit to another, not simply restraining oneself from negative activities that hurt another.
In one passage of the New Testament, Paul the Apostle refers to the golden rule:
St. Paul also comments on the golden rule in the book of Romans:
“The commandments, ‘You shall not commit adultery,’ ‘You shall not murder,’ ‘You shall not steal,’ ‘You shall not covet,’ and whatever other command there may be, are summed up in this one command: ‘Love your neighbor as yourself.’” Romans 13:8-9 (NIV).
The Arabian peninsula was known to not practice the golden rule prior to the advent of Islam. According to Th. Emil Homerin: "Pre-Islamic Arabs regarded the survival of the tribe as most essential and to be ensured by the ancient rite of blood vengeance." Homerin goes on to say:
From the hadith, the collected oral and written accounts of Muhammad and his teachings during his lifetime:
Ali ibn Abi Talib (4th Caliph in Sunni Islam, and first Imam in Shia Islam) says:
The writings of the Bahá'í Faith encourage everyone to treat others as they would treat themselves, and even to prefer others over oneself:
Also,
Buddha (Siddhartha Gautama, c. 623–543 BC) made this principle one of the cornerstones of his ethics in the 6th century BC. It occurs in many places and in many forms throughout the Tripitaka.
The Golden Rule is paramount in Jain philosophy and can be seen in the doctrines of Ahimsa and Karma. As part of the prohibition of causing any living beings to suffer, Jainism forbids inflicting upon others what is harmful to oneself.
The following quotation from the Acaranga Sutra sums up the philosophy of Jainism:
Saman Suttam of Jinendra Varni gives further insight into this precept:
The same idea is also presented in V.12 and VI.30 of the "Analects" (c. 500 BC), which can be found in the online Chinese Text Project. The phraseology differs from the Christian version of the Golden Rule. It does not presume to do anything unto others, but merely to avoid doing what would be harmful. It does not preclude doing good deeds and taking moral positions.
Mozi regarded the golden rule as a corollary to the cardinal virtue of impartiality, and encouraged egalitarianism and selflessness in relationships.
Do not do unto others whatever is injurious to yourself. – Shayast-na-Shayast 13.29
"The Way to Happiness" expresses the Golden Rule both in its negative/prohibitive form and in its positive form. The negative/prohibitive form is expressed in Precept 19 as:
The positive form is expressed in Precept 20 as:
The "Declaration Toward a Global Ethic" from the Parliament of the World’s Religions (1993) proclaimed the Golden Rule ("We must treat others as we wish others to treat us") as the common principle for many religions. The Initial Declaration was signed by 143 leaders from all of the world's major faiths, including Baha'i Faith, Brahmanism, Brahma Kumaris, Buddhism, Christianity, Hinduism, Indigenous, Interfaith, Islam, Jainism, Judaism, Native American, Neo-Pagan, Sikhism, Taoism, Theosophist, Unitarian Universalist and Zoroastrian. In the folklore of several cultures the Golden Rule is depicted by the allegory of the long spoons.
In the view of Greg M. Epstein, a Humanist chaplain at Harvard University, "'do unto others' ... is a concept that essentially no religion misses entirely. But not a single one of these versions of the golden rule requires a God". Various sources identify the Golden Rule as a humanist principle:
According to Marc H. Bornstein and William E. Paden, the Golden Rule is arguably the most essential basis for the modern concept of human rights, in which each individual has a right to just treatment, and a reciprocal responsibility to ensure justice for others.
However, Leo Damrosch argued that the notion that the Golden Rule pertains to "rights" per se is a contemporary interpretation and has nothing to do with its origin. The development of human "rights" is a modern political ideal that began as a philosophical concept promulgated through the philosophy of Jean-Jacques Rousseau in 18th-century France, among others. His writings influenced Thomas Jefferson, who then incorporated Rousseau's reference to "inalienable rights" into the United States Declaration of Independence in 1776. Damrosch argued that to confuse the Golden Rule with human rights is to apply contemporary thinking to ancient concepts.
There has been research published arguing that some 'sense' of fair play and the Golden Rule may be stated and rooted in terms of neuroscientific and neuroethical principles.
The Golden Rule can also be explained from the perspectives of psychology, philosophy, sociology, human evolution, and economics. Psychologically, it involves a person empathizing with others. Philosophically, it involves a person perceiving their neighbor also as "I" or "self". Sociologically, "love your neighbor as yourself" is applicable between individuals, between groups, and also between individuals and groups. In evolution, "reciprocal altruism" is seen as a distinctive advance in the capacity of human groups to survive and reproduce, as their exceptional brains demanded exceptionally long childhoods and ongoing provision and protection even beyond that of the immediate family. In economics, Richard Swift, referring to ideas from David Graeber, suggests that "without some kind of reciprocity society would no longer be able to exist."
Philosophers, such as Immanuel Kant and Friedrich Nietzsche, have objected to the rule on a variety of grounds. The most serious among these is its application. How does one know how others want to be treated? The obvious way is to ask them, but this cannot be done if one assumes they have not reached a particular and relevant understanding.
George Bernard Shaw wrote, "Do not do unto others as you would that they should do unto you. Their tastes may not be the same." This suggests that if your values are not shared with others, the way you want to be treated will not be the way they want to be treated. Hence, the Golden Rule of "do unto others" is "dangerous in the wrong hands", according to philosopher Iain King, because "some fanatics have no aversion to death: the Golden Rule might inspire them to kill others in suicide missions."
Immanuel Kant famously criticized the golden rule for not being sensitive to differences of situation, noting that a prisoner duly convicted of a crime could appeal to the golden rule while asking the judge to release him, pointing out that the judge would not want anyone else to send him to prison, so he should not do so to others. Kant's "Categorical Imperative", introduced in "Groundwork of the Metaphysic of Morals", is often confused with the Golden Rule.
Walter Terence Stace, in "The Concept of Morals" (1937), wrote:
Marcus George Singer observed that there are two importantly different ways of looking at the golden rule: as requiring (1) that you perform specific actions that you want others to do to you or (2) that you guide your behavior in the same general ways that you want others to. Counter-examples to the golden rule typically are more forceful against the first than the second.
In his book on the golden rule, Jeffrey Wattles makes the similar observation that such objections typically arise while applying the golden rule in certain general ways (namely, ignoring differences in taste, in situation, and so forth). But if we apply the golden rule to our own method of using it, asking in effect if we would want other people to apply the golden rule in such ways, the answer would typically be no, since it is quite predictable that others' ignoring of such factors will lead to behavior which we object to. It follows that we should not do so ourselves—according to the golden rule. In this way, the golden rule may be self-correcting. An article by Jouni Reinikainen develops this suggestion in greater detail.
It is possible, then, that the golden rule can itself guide us in identifying which differences of situation are morally relevant. We would often want other people to ignore any prejudice against our race or nationality when deciding how to act towards us, but would also want them to not ignore our differing preferences in food, desire for aggressiveness, and so on. This principle of "doing unto others, wherever possible, as "they" would be done by..." has sometimes been termed the platinum rule.
Charles Kingsley's "The Water Babies" (1863) includes a character named Mrs Do-As-You-Would-Be-Done-By (and another, Mrs Be-Done-By-As-You-Did).
Glasnevin
Glasnevin (, also known as "Glas Naedhe", meaning "stream of O'Naeidhe" after a local stream and an ancient chieftain) is a neighbourhood of Dublin, Ireland, situated on the River Tolka. While primarily residential, Glasnevin is also home to the National Botanic Gardens, national meteorological office and a range of other State bodies, and Dublin City University has its main campus and other facilities in and near the area.
Glasnevin is also a civil parish in the ancient barony of Coolock.
A mainly residential neighbourhood, Glasnevin is located on the Northside of the city of Dublin (about 3 km north of Dublin city centre). It was established on the northern bank of the River Tolka where the stream for which it may be named joins, and now extends north and south of the river. Three watercourses flow into the Tolka in the area. Two streams can be seen near the Catholic "pyramid church", the Claremont Stream or Nevin Stream, flowing south from Poppintree and Jamestown Industrial Estate branches, and what is sometimes called the "Cemetery Drain" coming north from the southern edge of Glasnevin Cemetery. In addition, a major diversion from the Wad River comes from the Ballymun area, joining near the Claremont Stream.
The boundaries of Glasnevin stretch from the Royal Canal to Glasnevin Avenue and from the Finglas Road to the edges of Drumcondra. It is bordered to the northwest by Finglas, northeast by Ballymun and Santry, Whitehall to the east, Phibsboro and Drumcondra to the south and Cabra to the southwest.
Glasnevin was reputedly founded by Saint Mobhi (sometimes known as St Berchan) in the sixth (or perhaps fifth) century as a monastery. His monastery continued to be used for many years afterwards - St. Colman is recorded as having paid homage to its founder when he returned from abroad to visit Ireland a century after St Mobhi's death in 544. St. Columba of Iona is thought to have studied under St. Mobhi, but left Glasnevin following an outbreak of plague and journeyed north to open the House at Derry; there is a long street (Iona Road) in Glasnevin named in his honour and the church on Iona Road is called Saint Columba's.
A settlement grew up around the monastery, which survived until the Viking invasions in the eighth century. After raids on monasteries at Glendalough and Clondalkin, the monasteries at Glasnevin and Finglas were attacked and destroyed.
By 822 Glasnevin, along with Grangegorman and Clonken or Clonkene (now known as Deansgrange), had become parts of the grange (farm) of Christ Church Cathedral and it seems to have maintained this connection up to the time of the Reformation.
The Battle of Clontarf was fought on the banks of the River Tolka in 1014 (a field called the "bloody acre" is supposed to be part of the site). The Irish defeated the Danes in the battle, in which 7,000 Danes and 4,000 Irish died.
The 12th century saw the Normans (who had conquered England and Wales in the eleventh century) invade Ireland. As local rulers continued fighting amongst themselves the Norman King of England Henry II was invited to intervene. He arrived in 1171, took control of much land, and then parcelled it out amongst his supporters. Glasnevin ended up under the jurisdiction of Finglas Abbey. Later, Laurence O'Toole, Archbishop of Dublin, took responsibility for Glasnevin and it became the property of the Priory of the Most Holy Trinity (Christ Church Cathedral).
In 1240 a church and tower was reconstructed on the site of the Church of St. Mobhi in the monastery. The returns of the church for 1326 stated that 28 tenants resided in Glasnevin. The church was enlarged in 1346, along with a small hall known as the Manor Hall.
When King Henry VIII broke from Rome an era of religious repression began. During the Dissolution of the Monasteries, Catholic Church property and land was appropriated to the new Church of England, and monasteries (including the one at Glasnevin) were forcibly closed and fell into ruin. Glasnevin had at this stage developed as a village, with its principal landmark and focal point being its "bull-ring" noted in 1542.
By 1667 Glasnevin had expanded - but not by very much; it is recorded as containing 24 houses. The development of the village was given a fresh impetus when Sir John Rogerson built his country residence - "The Glen" or "Glasnevin House" - outside the village.
The plantations of Ireland saw the settlement of Protestant English families on land previously held by Catholics. Lands at Glasnevin were leased to such families and a Protestant church was erected there in 1707. It was built on the site of the old Catholic church and was named after St. Mobhi. The church was largely rebuilt in the mid-18th century. The attached churchyard became a graveyard for both Protestants and Catholics. It is said that Robert Emmet is buried there; the claim arose after somebody working in the graveyard once dug up a headless body.
By now Glasnevin was an area for families of distinction - in spite of a comment attributed to the Protestant Archbishop King of Dublin that "when any couple had a mind to be wicked, they would retire to Glasnevin". In a letter dated 1725 he described Glasnevin as "the receptacle for thieves and rogues [..] The first search when anything was stolen, was there, and when any couple had a mind to retire to be wicked there was their harbour. But since the church was built, and service regularly settled, all these evils are banished. Good houses are built in it, and the place civilised."
Glasnevin National School was also built during this period.
In the 1830s, the civil parish population was recorded as 1,001, of whom 559 resided in the village. Glasnevin was described as a parish in the barony of Coolock, pleasantly situated and the residence of many families of distinction.
On 1 June 1832, Charles Lindsay, Bishop of Kildare and Leighlin and the William John released their holdings of Sir John Rogerson's lands at Glasnevin, (including Glasnevin House) to George Hayward Lindsay. This transfer included the sum of 1,500 Pounds Sterling. Although this does not specifically cite the marriage of George Hayward Lindsay to Lady Mary Catherine Gore, George Lindsay almost certainly came into the lands at Glasnevin as a result of his marriage.
When Drumcondra began to rapidly expand in the 1870s, the residents of Glasnevin sought to protect their district and opposed being merged with the neighbouring suburb. One of the objectors was the property-owner, Dr Gogarty, the father of the Irish poet, Oliver St. John Gogarty.
Glasnevin became a township in 1878 and became part of the City of Dublin in 1900 under the Dublin Boundaries Act, which received the Royal Assent on 6 August 1900.
George Hayward Lindsay's eldest son, Lieutenant Colonel Henry Gore Lindsay, was in possession of his father's lands at Glasnevin when the area began to be developed at the beginning of the twentieth century. The development of his lands after 1903/04 marked the start of the gradual development of the area.
Glasnevin remained relatively undeveloped until the opening up of the Carroll Estate in 1914, which saw the creation of the redbrick residential roads running down towards Drumcondra. The process was accelerated by Dublin Corporation in the 1920s and the present shape of the suburb was firmly in place by 1930. Nevertheless, until comparatively recent years, a short stroll up the Old Finglas Road brought you rapidly into open countryside.
The start of the 20th century also saw the opening of a short-lived railway station on the Drumcondra and North Dublin Link Railway line from Glasnevin Junction to Connolly Station (then Amiens Street). It opened in 1906 and closed at the end of 1907. Glasnevin railway station opened on 1 April 1901 and closed on 1 December 1910.
The village has changed a lot over the years, and is now part of Dublin city. It is now populated by a mix of young families, senior citizens and students attending Dublin City University.
As well as the amenities of the National Botanic Gardens (Ireland) and local parks, the national meteorological office Met Éireann, the Fisheries Board, the National Standards Authority of Ireland, Sustainable Energy Ireland, the National Metrology Laboratory (NML), the Department of Defence and the national enterprise and trade board Enterprise Ireland are all located in the area.
The house and lands of the poet Thomas Tickell were sold in 1790 to the Irish Parliament and given to the Royal Dublin Society for them to establish Ireland's first Botanic Gardens. The gardens were the first location in Ireland where the infection responsible for the 1845–1847 potato famine was identified. Throughout the famine research to stop the infection was undertaken at the gardens.
The gardens, which border the River Tolka, also adjoin Prospect Cemetery. In 2002 the Botanic Gardens gained a new two-storey complex which included a new cafe and a large lecture theatre. The Irish National Herbarium is also located at the botanic gardens.
Prospect Cemetery, located in Glasnevin but better known as Glasnevin Cemetery, is the most historically notable burial place in the country and the last resting place of a host of historical figures, among them Michael Collins, Éamon de Valera, Charles Stewart Parnell and Arthur Griffith. The graveyard led to Glasnevin being known as "the dead centre of Dublin". It opened in 1832 and is the final resting place of thousands of ordinary citizens, as well as many Irish patriots.
Approaching Glasnevin via Phibsboro is what is known as "Hart's Corner" but which about 200 years ago was called Glasmanogue, and was then a well-known stage on the way to Finglas. At an earlier date the name possessed a wider signification and was applied to a considerable portion of the adjoining district.
At the start of the 18th century a large house, called Delville - known at first as The Glen - was built on the site of the present Bon Secours Hospital, Dublin. Its name was an amalgamation of the surnames of two tenants, Dr. Helsam and Dr. Patrick Delany (as Heldeville), both Fellows of Trinity College.
When Delany married his first wife he acquired sole ownership, but it became better known as the home of Delany and his second wife, Mary Pendarves. She was a widow whom Delany married in 1743, and was an accomplished letter writer.
The couple were friends of Dean Jonathan Swift and, through him, of Alexander Pope. Pope encouraged the Delanys to develop a garden in a style then becoming popular in England, moving away from the very formal, geometric layout that was common. Delany redesigned the house in the style of a villa and had the gardens laid out in the latest Dutch fashion, creating what was almost certainly Ireland's first naturalistic garden.
The house was, under Mrs Delany, a centre of Dublin's intellectual life. Swift is said to have composed a number of his campaigning pamphlets while staying there. He and his life-long companion Stella were both in the habit of visiting, and Swift satirised the grounds which he considered too small for the size of the house. Through her correspondence with her sister, Mrs Dewes, Mary wrote of Swift in 1733: "he calls himself my master and corrects me when I speak bad English or do not pronounce my words distinctly".
Patrick Delany died in 1768 at the age of 82, prompting his widow to sell Delville and return to her native England until her death twenty years later.
Glasnevin is also a parish in the Fingal South West deanery of the Roman Catholic Archdiocese of Dublin. It is served by the Church of Our Lady of Dolours.
The church underwent some refurbishment work inside and in its grounds and car park during the first half of 2011. A timber church, which originally stood on Berkeley Road, was moved to a riverside site on Botanic Avenue early in the twentieth century. The altar in this church was from Newgate prison in Dublin. It served as the parish church until it was replaced, in 1972, by a structure resembling a pyramid when viewed from Botanic Avenue. The previous church was known locally as "The Woodener" or "The Wooden" and the new building is still known to older residents as "The new Woodener" or "The Wigwam".
In 1975 the new headquarters of Met Éireann, the Irish Meteorological Office, opened just off Glasnevin Hill, on the former site of Marlborough House. The Met Éireann building was also built in a somewhat pyramidal shape and is recognised as one of the most significant smaller commercial buildings erected in Dublin in the 1970s.
Griffith Avenue runs through Glasnevin, Drumcondra and Marino, and spans three electoral constituencies. It was named after Arthur Griffith, the founder and third leader of Sinn Féin, who also served as President of Dáil Éireann and is buried in Glasnevin Cemetery.
The Gaelic games of Gaelic football, hurling, camogie and Gaelic handball are all organised locally by Na Fianna CLG, while soccer is played by local clubs Tolka Rovers, Glasnevin FC and Glasnaion FC. Basketball is organised by Tolka Rovers. Tennis is played at Charleville Lawn Tennis Club, which was founded in 1894 and took its name from its original location at the corner of the Charleville and Cabra Roads; the move to its present location on Whitworth Road took place in 1904. The club has some 400 senior and junior members and has won many Dublin Lawn Tennis Council titles. Hockey is played at Botanic Hockey Club on the Old Finglas Road. Glasnevin Boxing Club and a football (soccer) club have a clubhouse on Mobhi Road.
Scouting is represented in Glasnevin by the 1st Dublin (L.H.O) Scout Troop located on the corner of Griffith Avenue and Ballygall Road East.
There are several primary schools in Glasnevin, including Lindsay Road National School, Glasnevin National School, Glasnevin Educate Together National School, North Dublin National School Project, Scoil Mobhi, St. Brigid's GNS, St. Columba's NS and St. Vincent's CBS.
There are several Roman Catholic secondary schools in the area: St Vincent's (Christian Brothers) School, Scoil Chaitríona and St Mary's Secondary School.
Billy Whelan, one of the eight Manchester United players who lost their lives in the Munich air disaster of 6 February 1958, was born locally on 1 April 1935. He is buried in Glasnevin Cemetery.
Glasnevin is part of the Dáil Éireann constituencies of Dublin Central and Dublin North-West. | https://en.wikipedia.org/wiki?curid=12862 |
George Abbot (author)
George Abbot or Abbott (1604 – 2 February 1649) was an English lay writer, known as "The Puritan", and a politician who sat in the House of Commons in two periods between 1640 and 1649. He is known also for his part in defending Caldecote House against royalist forces in the early days of the English Civil War.
Abbott was the son of George Abbott of York (died 1607) and his wife Joan Penkeston. While "Alumni Cantabrigienses" states that he matriculated at King's College, Cambridge in 1622, the "Oxford Dictionary of National Biography" discounts the identification, for lack of evidence. He owned property in Baddesley Clinton, Warwickshire, and was a good friend of Richard Vines, minister at Caldecote some way to the east. In April 1640, he was elected Member of Parliament for Tamworth in the Short Parliament.
In the English Civil War, Abbot worked closely in Warwickshire with his stepfather William Purefoy, and made a notable defence, with his mother Joan, of the Purefoy house at Caldecote, Warwickshire, gaining the family coverage in the London press. On 15 August 1642, with eight men, his mother and maids, he held out for a time against Prince Rupert of the Rhine, with about 18 troops of horse and dragoons. In the aftermath of the Battle of Edgehill, in October of the same year, Richard Baxter moved to Coventry, and Abbot was one of those hearing him preach there. Baxter in writing on the Sabbath referred to "my dear friend Mr. George Abbot". In his memoirs "Reliquiæ Baxterianæ", Baxter placed Abbot's defence of Caldecote House, where barns were burnt, in local context: royalists under Spencer Compton, 2nd Earl of Northampton were attacking Warwick Castle, defended by John Bridges, and Coventry, defended by John Barker.
Abbot was re-elected MP for Tamworth in 1645 for the Long Parliament and held the seat until his death in 1649. He died unmarried in his 44th year, and was buried in Caldecote church where his monument describes his defence of Caldecote.
By his will, Abbot endowed a free school at Caldecote. It was supported by land left to it at Baddesley Ensor.
Abbot was a lay theologian and scholar. His "Whole Booke of Job Paraphrased, or made easy for any to understand" (1640), was written in a terse style, and his "Vindiciae Sabbathi" (1641) influenced the Sabbatarian controversy. His "The Whole Book of Psalms Paraphrased" (1650) was published posthumously by Richard Vines, and dedicated to Joan Purefoy, his mother.
Abbot has been confused with others of the same name and has been described as a clergyman, which he never was. His writings have been incorrectly attributed in the bibliographical authorities to a relation of George Abbot the archbishop of Canterbury. One of the sons of Sir Morris Abbot called George was also an MP in the Long Parliament but for the constituency of Guildford. | https://en.wikipedia.org/wiki?curid=12863 |
Globular cluster
A globular cluster is a spherical collection of stars that orbits a galactic core. Globular clusters are very tightly bound by gravity, which gives them their spherical shapes, and relatively high stellar densities toward their centers. The name of this category of star cluster is derived from the Latin "globulus", a small sphere. Occasionally, a globular cluster is known simply as a "globular".
Globular clusters are found in the halo of a galaxy and contain considerably more stars, and are much older than the less dense open clusters, which are found in the disk of a galaxy. Globular clusters are fairly common; there are about 150 to 158 currently known globular clusters in the Milky Way, with perhaps 10 to 20 more still undiscovered. Larger galaxies can have more: The Andromeda Galaxy, for instance, may have as many as 500. Some giant elliptical galaxies (particularly those at the centers of galaxy clusters), such as M87, have as many as 13,000 globular clusters.
Every galaxy of sufficient mass in the Local Group has an associated group of globular clusters, and almost every large galaxy surveyed has been found to possess a system of globular clusters. The Sagittarius Dwarf galaxy, and the disputed Canis Major Dwarf galaxy appear to be in the process of donating their associated globular clusters (such as Palomar 12) to the Milky Way. This demonstrates how many of this galaxy's globular clusters might have been acquired in the past.
Although it appears that globular clusters contain some of the first stars to be produced in the galaxy, their origins and their role in galactic evolution are still unclear. It does appear clear that globular clusters are significantly different from dwarf elliptical galaxies and were formed as part of the star formation of the parent galaxy, rather than as a separate galaxy.
The first known globular cluster, now called M 22, was discovered in 1665 by Abraham Ihle, a German amateur astronomer. | https://en.wikipedia.org/wiki?curid=12866 |
George Vancouver
Captain George Vancouver (22 June 1757 – 10 May 1798) was a British officer of the Royal Navy best known for his 1791–95 expedition, which explored and charted North America's northwestern Pacific Coast regions, including the coasts of what are now the American states of Alaska, Washington, and Oregon, as well as the Canadian province of British Columbia. He also explored the Hawaiian Islands and the southwest coast of Australia.
Vancouver Island and the city of Vancouver, both in British Columbia, are named for him, as is Vancouver, Washington, in the United States. Mount Vancouver, on the Canadian–American border between Yukon and Alaska, and New Zealand's sixth-highest mountain, also Mount Vancouver, are also named for him.
George Vancouver was born in the seaport town of King's Lynn (Norfolk, England) on 22 June 1757 as the sixth, and youngest, child of John Jasper Vancouver, a Dutch-born deputy collector of customs, and Bridget Berners.
In 1771, at the age of 13, Vancouver entered the Royal Navy as a "young gentleman", a future candidate for midshipman. He was selected to serve as a midshipman aboard HMS "Resolution" on James Cook's second voyage (1772–1775) searching for "Terra Australis". He also accompanied Cook's third voyage (1776–1780), this time aboard "Resolution"s companion ship, HMS "Discovery", and was present during the first European sighting and exploration of the Hawaiian Islands. Upon his return to Britain in October 1780, Vancouver was commissioned as a lieutenant and posted aboard a sloop, initially on escort and patrol duty in the English Channel and North Sea. He accompanied the ship when it left Plymouth on 11 February 1782 for the West Indies. On 7 May 1782 he was appointed fourth lieutenant of a 74-gun ship of the line, which was at the time part of the British West Indies Fleet and assigned to patrolling the French-held Leeward Islands, and subsequently saw action at the Battle of the Saintes, wherein he distinguished himself. Vancouver returned to England in June 1783.
In the late 1780s the Spanish Empire commissioned an expedition to the Pacific Northwest. In 1789 the Nootka Crisis developed, and Spain and Britain came close to war over ownership of Nootka Sound on contemporary Vancouver Island and, of greater importance, the right to colonise and settle the Pacific Northwest coast. Henry Roberts had recently taken command of the survey ship HMS "Discovery" (a new vessel named in honour of the ship on Cook's voyage), which was to be used on another round-the-world voyage, and Roberts selected Vancouver as his first lieutenant, but both were then diverted to other warships due to the crisis. Vancouver went with Joseph Whidbey to a 74-gun ship of the line. When the first Nootka Convention ended the crisis in 1790, Vancouver was given command of "Discovery" to take possession of Nootka Sound and to survey the coasts.
Departing England with two ships, HMS "Discovery" and HMS "Chatham", on 1 April 1791, Vancouver commanded an expedition charged with exploring the Pacific region. In its first year the expedition travelled to Cape Town, Australia, New Zealand, Tahiti, and Hawaii, collecting botanical samples and surveying coastlines along the way. He formally claimed the area at Possession Point, King George Sound (now the town of Albany, Western Australia) for the British. Proceeding to North America, Vancouver followed the coasts of present-day Oregon and Washington northward. In April 1792 he encountered American Captain Robert Gray off the coast of Oregon just prior to Gray's sailing up the Columbia River.
Vancouver entered the Strait of Juan de Fuca, between Vancouver Island and the Washington state mainland, on 29 April 1792. His orders included a survey of every inlet and outlet on the west coast of the mainland, all the way north to Alaska. Most of this work was in small craft propelled by both sail and oar; manoeuvring larger sail-powered vessels in uncharted waters was generally impractical and dangerous.
Vancouver named many features for his officers, friends, associates, and his ship "Discovery".
Vancouver was the second European to enter Burrard Inlet on 13 June 1792, naming it for his friend Sir Harry Burrard. It is the present day main harbour area of the City of Vancouver beyond Stanley Park. He surveyed Howe Sound and Jervis Inlet over the next nine days. Then, on his 35th birthday on 22 June 1792, he returned to Point Grey, the present-day location of the University of British Columbia. Here he unexpectedly met a Spanish expedition led by Dionisio Alcalá Galiano and Cayetano Valdés y Flores. Vancouver was "mortified" (his word) to learn they already had a crude chart of the Strait of Georgia based on the 1791 exploratory voyage of José María Narváez the year before, under command of Francisco de Eliza. For three weeks they cooperatively explored the Georgia Strait and the Discovery Islands area before sailing separately towards Nootka Sound.
After the summer surveying season ended, in August 1792, Vancouver went to Nootka, then the region's most important harbour, on contemporary Vancouver Island. Here he was to receive any British buildings and lands returned by the Spanish from claims by Francisco de Eliza for the Spanish crown. The Spanish commander, Juan Francisco Bodega y Quadra, was very cordial and he and Vancouver exchanged the maps they had made, but no agreement was reached; they decided to await further instructions. At this time, they decided to name the large island on which Nootka was now proven to be located as "Quadra and Vancouver Island". Years later, as Spanish influence declined, the name was shortened to simply Vancouver Island.
While at Nootka Sound Vancouver acquired Robert Gray's chart of the lower Columbia River. Gray had entered the river during the summer before sailing to Nootka Sound for repairs. Vancouver realised the importance of verifying Gray's information and conducting a more thorough survey. In October 1792, he sent Lieutenant William Robert Broughton with several boats up the Columbia River. Broughton got as far as the Columbia River Gorge, sighting and naming Mount Hood.
Vancouver sailed south along the coast of Spanish Alta California, visiting Chumash villages at Point Conception and near Mission San Buenaventura. Vancouver spent the winter in continuing exploration of the Sandwich Islands, the contemporary islands of Hawaii.
The next year, 1793, he returned to British Columbia and proceeded further north, unknowingly missing the overland explorer Alexander Mackenzie by only 48 days. He got to 56°30'N, having explored north from Point Menzies in Burke Channel to the northwest coast of Prince of Wales Island. He sailed around the latter island, as well as circumnavigating Revillagigedo Island and charting parts of the coasts of Mitkof, Zarembo, Etolin, Wrangell, Kuiu and Kupreanof Islands. With worsening weather, he sailed south to Alta California, hoping to find Bodega y Quadra and fulfil his territorial mission, but the Spaniard was not there. He again spent the winter in the Sandwich Islands.
In 1794, he first went to Cook Inlet, the northernmost point of his exploration, and from there followed the coast south. Boat parties charted the east coasts of Chichagof and Baranof Islands, circumnavigated Admiralty Island, explored to the head of Lynn Canal, and charted the rest of Kuiu Island and nearly all of Kupreanof Island. He then set sail for Great Britain by way of Cape Horn, returning in September 1795, thus completing a circumnavigation of South America.
Impressed by the view from Richmond Hill, Vancouver retired to Petersham, London.
Vancouver faced difficulties when he returned home to England. The accomplished and politically well-connected naturalist Archibald Menzies complained that his servant had been pressed into service during a shipboard emergency; sailing master Joseph Whidbey had a competing claim for pay as expedition astronomer; and Thomas Pitt, 2nd Baron Camelford, whom Vancouver had disciplined for numerous infractions and eventually sent home in disgrace, proceeded to harass him publicly and privately.
Pitt's allies, including his cousin, Prime Minister William Pitt the Younger, attacked Vancouver in the press. Thomas Pitt took a more direct approach; on 29 August 1796 he sent Vancouver a letter heaping many insults on the head of his former captain, and challenging him to a duel. Vancouver gravely replied that he was unable "in a private capacity to answer for his public conduct in his official duty," and offered instead to submit to formal examination by flag officers. Pitt chose instead to stalk Vancouver, ultimately assaulting him on a London street corner. The terms of their subsequent legal dispute required both parties to keep the peace, but nothing stopped Vancouver's civilian brother Charles from interposing and giving Pitt blow after blow until onlookers restrained the attacker. Charges and counter-charges flew in the press, with the wealthy Camelford faction having the greater firepower until Vancouver, ailing from his long naval service, died.
Vancouver, at one time amongst Britain's greatest explorers and navigators, died in obscurity on 10 May 1798 at the age of 40, less than three years after completing his voyages and expeditions. No official cause of death was stated, as the medical records pertaining to Vancouver were destroyed; one doctor named John Naish claimed Vancouver died from kidney failure, while others believed it was a hyperthyroid condition. His grave is in the churchyard of St Peter's Church, Petersham, in the London Borough of Richmond upon Thames, England. The Hudson's Bay Company placed a memorial plaque in the church in 1841. His grave in Portland stone, renovated in the 1960s, is now Grade II listed in view of its historical associations.
Vancouver determined that the Northwest Passage did not exist at the latitudes that had long been suggested. His charts of the North American northwest coast were so extremely accurate that they served as the key reference for coastal navigation for generations. Robin Fisher, the academic Vice-President of Mount Royal University in Calgary and author of two books on Vancouver, states:
However, Vancouver failed to discover two of the largest and most important rivers on the Pacific coast, the Fraser River and the Columbia River. He also missed the Skeena River near Prince Rupert in northern British Columbia. Vancouver did eventually learn of the Columbia before he finished his survey, from Robert Gray, captain of the American merchant ship that conducted the first Euroamerican sailing of the Columbia River on 11 May 1792, after first sighting it on an earlier voyage in 1788. However, neither it nor the Fraser River made it onto Vancouver's charts.
Stephen R. Bown, noted in "Mercator's World" magazine (November/December 1999) that:
While it is difficult to comprehend how Vancouver missed the Fraser River, much of this river's delta was subject to flooding and summer freshet which prevented the captain from spotting any of its great channels as he sailed the entire shoreline from Point Roberts, Washington, to Point Grey in 1792. The Spanish expeditions to the Pacific Northwest, with the 1791 Francisco de Eliza expedition preceding Vancouver by a year, had also missed the Fraser River although they knew from its muddy plume that there was a major river located nearby.
Vancouver generally established a good rapport with both Indigenous peoples and European trappers. Historical records show Vancouver enjoyed good relations with native leaders both in Hawaii – where King Kamehameha I ceded Hawaii to Vancouver in 1794 – as well as the Pacific Northwest and California. Vancouver's journals exhibit a high degree of sensitivity to natives. He wrote of meeting the Chumash people, and of his exploration of a small island on the Californian coast on which an important burial site was marked by a sepulchre of "peculiar character" lined with boards and fragments of military instruments lying near a square box covered with mats. Vancouver states:
Vancouver also displayed contempt in his journals towards unscrupulous western traders who provided guns to natives by writing:
Robin Fisher notes that Vancouver's "relationships with aboriginal groups were generally peaceful; indeed, his detailed survey would not have been possible if they had been hostile." While there were hostile incidents at the end of Vancouver's last season – the most serious of which involved a clash with Tlingits at Behm Canal in southeast Alaska in 1794 – these were the exceptions to Vancouver's exploration of the US and Canadian Northwest coast.
Despite a long history of warfare between Britain and Spain, Vancouver maintained excellent relations with his Spanish counterparts and even fêted a Spanish sea captain aboard his ship during his 1792 trip to the Vancouver region.
Many places around the world have been named after George Vancouver, including:
Many collections were made on the voyage: one was donated by Archibald Menzies to the British Museum 1796; another made by surgeon George Goodman Hewett (1765–1834) was donated by Augustus Wollaston Franks to the British Museum in 1891. An account of these has been published.
Canada Post issued a $1.55 postage stamp to commemorate the 250th anniversary of Vancouver's birth, on 22 June 2007. The stamp has an embossed image of Vancouver seen from behind as he gazes forward towards a mountainous coastline. This may be the first Canadian stamp not to show the subject's face.
The City of Vancouver in Canada organised a celebration to commemorate the 250th anniversary of Vancouver's birth, in June 2007 at the Vancouver Maritime Museum. The one-hour festivities included the presentation of a massive 63 by 114 centimetre carrot cake, the firing of a gun salute by the Royal Canadian Artillery's 15th Field Regiment and a performance by the Vancouver Firefighter's Band.
Vancouver's then-mayor, Sam Sullivan, officially declared 22 June 2007 to be "George Day".
The Musqueam (xʷməθkʷəy̓əm) Elder sɁəyeɬəq (Larry Grant) attended the festivities and acknowledged that some of his people might disapprove of his presence, but also noted:
There has been some debate about the origins of the Vancouver name. It is now commonly accepted that the name Vancouver derives from the expression van Coevorden, meaning "(originating) from Coevorden", a city in the northeast of the Netherlands. This city is apparently named after the "Coeverden" family of the 13th–15th century.
In the 16th century, a number of businessmen from the Coevorden area (and the rest of the Netherlands) moved to England. Some of them were known as "Van Coeverden". Others adopted the surname Oxford, as in oxen fording (a river), which is approximately the English translation of "Coevorden". However, it is not the exact name of the noble family mentioned in the history books that claim Vancouver's noble lineage: that name was Coeverden not Coevorden.
In the 1970s, Adrien Mansvelt, a former consul general of the Netherlands based in Vancouver, published a collation of information in both historical and genealogical journals and in the "Vancouver Sun" newspaper. Mansvelt's theory was later presented by the city during the Expo 86 World's Fair, as historical fact. The information was then used by historian W. Kaye Lamb in his book "A Voyage of Discovery to the North Pacific Ocean and Round the World, 1791–1795" (1984).
W. Kaye Lamb, in summarising Mansvelt's 1973 research, observes evidence of close family ties between the Vancouver family of Britain and the Van Coeverden family of the Netherlands as well as George Vancouver's own words from his diaries in referring to his Dutch ancestry:
In 2006 John Robson, a librarian at the University of Waikato, conducted his own research into George Vancouver's ancestry, which he published in an article in the "British Columbia History" journal. Robson theorises that Vancouver's forebears may have been Flemish rather than Dutch; he believes that Vancouver is descended from the Vangover family of Ipswich and Colchester in Essex. Those towns had a significant Flemish population in the 16th and 17th centuries.
George Vancouver named the south point of what is now Couverden Island, Alaska, "Point Couverden" during his exploration of the North American Pacific coast, in honour of his family's hometown of Coevorden. It is located at the western point of entry to Lynn Canal in southeastern Alaska.
The Admiralty instructed Vancouver to publish a narrative of his voyage which he started to write in early 1796 in Petersham. At the time of his death the manuscript covered the period up to mid-1795. The work, "A Voyage of Discovery to the North Pacific Ocean, and Round the World", was completed by his brother John and published in three volumes in the autumn of 1798. A second edition was published in 1801 in six volumes.
A modern annotated edition (1984) by W. Kaye Lamb was renamed "The Voyage of George Vancouver 1791–1795", and published in four volumes by the Hakluyt Society of London, England. | https://en.wikipedia.org/wiki?curid=12867 |
Great Vowel Shift
The Great Vowel Shift was a series of changes in the pronunciation of the English language that took place primarily between 1400 and 1700, beginning in southern England and today having influenced effectively all dialects of English. Through this vowel shift, the pronunciation of all Middle English long vowels was changed. Some consonant sounds changed as well, particularly those that became silent; the term "Great Vowel Shift" is sometimes used to include these consonant changes.
English spelling began to become standardised in the 15th and 16th centuries, and the Great Vowel Shift is the major reason English spellings now often deviate considerably from how they represent pronunciations. The Great Vowel Shift was first studied by Otto Jespersen (1860–1943), a Danish linguist and Anglicist, who coined the term.
The causes of the Great Vowel Shift have been a source of intense scholarly debate, and, as yet, there is no firm consensus. The greatest changes occurred during the 15th and 16th centuries.
The main difference between the pronunciation of Middle English in the year 1400 and Modern English (Received Pronunciation) is in the value of the long vowels.
Long vowels in Middle English had "continental" values, much like those in Italian and Standard German; in standard Modern English, they have entirely different pronunciations. The differing pronunciations of English vowel letters do not stem from the Great Shift as such but because English spelling did not adapt to the changes.
German had undergone vowel changes quite similar to the Great Shift in a slightly earlier period, but the spelling was changed accordingly (e.g. Middle High German "bīzen" → modern German "beißen" "to bite").
This timeline shows the main vowel changes that occurred between late Middle English in the year 1400 and Received Pronunciation in the mid-20th century by using representative words. The Great Vowel Shift occurred in the lower half of the table, between 1400 and 1600–1700.
The changes that happened after 1700 are not considered part of the Great Vowel Shift. Pronunciation is given in the International Phonetic Alphabet.
Before the Great Vowel Shift, Middle English in Southern England had seven long vowels, /iː/, /eː/, /ɛː/, /aː/, /ɔː/, /oː/, and /uː/. The vowels occurred in, for example, the words "bite", "meet", "meat", "mate", "boat", "boot", and "out", respectively.
The words had very different pronunciations in Middle English from their pronunciations in Modern English.
After around 1300, the long vowels of Middle English began changing in pronunciation. The changes occurred over several centuries and can be divided into two phases. The first phase affected the close vowels /iː, uː/ and the close-mid vowels /eː, oː/: /eː, oː/ were raised to /iː, uː/, and /iː, uː/ became the diphthongs /əi/ and /əu/ (or /ei/ and /ou/). The second phase affected the open vowel /aː/ and the open-mid vowels /ɛː, ɔː/: /aː/, /ɛː/, and /ɔː/ were raised, in most cases ultimately changing to /eɪ/, /iː/, and /oʊ/.
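The two raising phases amount to a small chain of substitutions, so they can be sketched as a lookup table. The snippet below is a deliberately simplified illustration, not a complete account of the shift: the IPA values and the single-step treatment of each phase are conventional simplifications.

```python
# Conventional IPA values for the Middle English long vowels,
# keyed by a representative word (illustrative, not exhaustive).
middle_english = {
    "bite": "iː", "meet": "eː", "meat": "ɛː",
    "mate": "aː", "boat": "ɔː", "boot": "oː", "out": "uː",
}

# Phase 1 (complete by ~1500): close-mid vowels raise to close;
# close vowels break into diphthongs.
phase1 = {"eː": "iː", "oː": "uː", "iː": "əi", "uː": "əu"}

# Phase 2 (16th-17th centuries): open and open-mid vowels raise one step.
phase2 = {"aː": "ɛː", "ɛː": "eː", "ɔː": "oː"}

def shift(vowel: str) -> str:
    """Apply the two Great Vowel Shift phases to one long vowel."""
    vowel = phase1.get(vowel, vowel)   # phase 1 applies first
    return phase2.get(vowel, vowel)   # then phase 2, if relevant

shifted = {word: shift(v) for word, v in middle_english.items()}
```

Applying `shift` to each word's vowel reproduces the broad outcomes: "meet" ends at /iː/, while "bite" stops at the intermediate diphthong /əi/ that later became Modern English /aɪ/.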
The Great Vowel Shift changed vowels without merger, so Middle English before the vowel shift had the same number of vowel phonemes as Early Modern English after the vowel shift.
After the Great Vowel Shift, some vowel phonemes began merging. Immediately after the Great Vowel Shift, the vowels of "meet" and "meat" were different, but they have merged in Modern English, and both words are pronounced as /miːt/.
However, during the 16th and the 17th centuries, there were many different mergers, and some mergers can be seen in individual Modern English words like "great", which is pronounced with the vowel /eɪ/ as in "mate" rather than the vowel /iː/ as in "meat".
This is a simplified picture of the changes that happened between late Middle English (late ME), Early Modern English (EModE), and today's English (ModE). Pronunciations in 1400, 1500, 1600, and 1900 are shown.
Before labial consonants and also after /j/, /uː/ did not shift, and /uː/ remains as in "soup" and "room" (its Middle English spelling was "roum").
The first phase of the Great Vowel Shift affected the Middle English close-mid vowels /eː, oː/, as in "beet" and "boot", and the close vowels /iː, uː/, as in "bite" and "out". The close-mid vowels /eː, oː/ became close /iː, uː/, and the close vowels /iː, uː/ became diphthongs. The first phase was complete by 1500, meaning that by that time, words like "beet" and "boot" had lost their Middle English pronunciation, and were pronounced with the same vowels as in Modern English. The words "bite" and "out" were pronounced with diphthongs, but not the same diphthongs as in Modern English.
Scholars agree that the Middle English close vowels /iː, uː/ became diphthongs around the year 1500, but disagree about what diphthongs they changed to. According to Lass, the words "bite" and "out" after diphthongisation were pronounced as /beit/ and /out/, similar to American English "bait" and "oat". Later, the diphthongs shifted to /ɛi, ɔu/, then /əi, əu/, and finally to Modern English /aɪ, aʊ/. This sequence of events is supported by the testimony of orthoepists before Hodges in 1644.
However, many scholars argue for theoretical reasons that, contrary to what 16th-century witnesses report, the vowels /iː, uː/ were actually immediately centralised and lowered to /əi, əu/.
Evidence from northern English and Scots (see below) suggests that the close-mid vowels were the first to shift. As the Middle English vowels /eː, oː/ were raised towards /iː, uː/, they forced the original Middle English /iː, uː/ out of place and caused them to become diphthongs /əi, əu/. This type of sound change, in which one vowel's pronunciation shifts so that it is pronounced like a second vowel, and the second vowel is forced to change its pronunciation, is called a push chain.
However, according to professor Jürgen Handke, for some time there was a phonetic split between words with the vowel /iː/ and words with the diphthong /əi/ where the Middle English /iː/ had shifted to the Modern English /aɪ/. For example, "high" was pronounced with the vowel /iː/, while "like" and "my" were pronounced with the diphthong /əi/. Therefore, for logical reasons, the close vowels could have diphthongised before the close-mid vowels were raised. Otherwise, "high" would probably rhyme with "thee" rather than "my". This type of chain is called a drag chain.
The second phase of the Great Vowel Shift affected the Middle English open vowel /aː/, as in "mate", and the Middle English open-mid vowels /ɛː, ɔː/, as in "meat" and "boat". Around 1550, Middle English /aː/ was raised to /æː/. Then, after 1600, the new /æː/ was raised to /ɛː/, with the Middle English open-mid vowels /ɛː, ɔː/ raised to close-mid /eː, oː/.
During the first and the second phases of the Great Vowel Shift, long vowels were shifted without merging with other vowels, but after the second phase, several vowels merged. The later changes also involved the Middle English diphthong /ai/, as in "day", which had monophthongised and then merged with Middle English /aː/, as in "mate", or /ɛː/, as in "meat".
During the 16th and 17th centuries, several different pronunciation variants existed among different parts of the population for words like "meet", "meat", "mate", and "day". In each pronunciation variant, different pairs or trios of words were merged in pronunciation. Four different pronunciation variants are shown in the table below. The fourth pronunciation variant gave rise to Modern English pronunciation. In Modern English, "meet" and "meat" are merged in pronunciation and both have the vowel /iː/, and "mate" and "day" are merged with the diphthong /eɪ/, which developed from the 16th-century long vowel /eː/.
Modern English typically has the "meet"–"meat" merger: both "meet" and "meat" are pronounced with the vowel /iː/. Words like "great" and "steak", however, have merged with "mate" and are pronounced with the vowel /eɪ/, which developed from the 16th-century /eː/.
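The merger pattern can likewise be sketched as a simple lookup. The word list and IPA values below are an illustrative selection following the developments described above, not an exhaustive inventory:

```python
# Modern English outcomes of the post-shift mergers
# (illustrative IPA values for a handful of words).
modern_vowel = {
    "meet": "iː", "meat": "iː",     # the meet-meat merger
    "great": "eɪ", "steak": "eɪ",   # exceptions that joined "mate"
    "mate": "eɪ", "day": "eɪ",      # the mate-day merger
}

def merged(word_a: str, word_b: str) -> bool:
    """True if the two words ended up with the same stressed vowel."""
    return modern_vowel[word_a] == modern_vowel[word_b]
```

With this table, `merged("meet", "meat")` holds, while "great" patterns with "mate" rather than with "meat".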
The Great Vowel Shift affected other dialects as well as the standard English of southern England but in different ways. In Northern England, the shift did not operate on the long back vowels because they had undergone an earlier shift. Similarly, the Scots language in Scotland had a different vowel system before the Great Vowel Shift, as /oː/ had already shifted to /øː/ in Early Scots. In the Scots equivalent of the Great Vowel Shift, the long vowels /iː/, /eː/ and /aː/ shifted to /ei/, /iː/ and /eː/ by the Middle Scots period, and /uː/ remained unaffected.
The first step in the Great Vowel Shift in Northern and Southern English is shown in the table below. The Northern English developments of Middle English /oː/ and /uː/ were different from Southern English. In particular, the Northern English vowels /iː/ in "bite", /eː/ in "feet", and /øː/ in "boot" shifted, while the vowel /uː/ in "house" did not. The developments below fall under the label "older", referring to Scots and a more conservative and increasingly rural Northern sound, while "younger" refers to a more mainstream Northern sound that has largely emerged only since the twentieth century.
The vowel systems of Northern and Southern Middle English immediately before the Great Vowel Shift differed in one way. In Northern Middle English, the back close-mid vowel /oː/ in "boot" had already shifted to front /øː/ (a sound change known as fronting), like the long "ö" in German "hören" ("to hear"). Thus, Southern English had a back close-mid vowel /oː/, but Northern English did not.
In both Northern and Southern English, the first step of the Great Vowel Shift raised the close-mid vowels to become close. Northern Middle English had two close-mid vowels – /eː/ in "feet" and /øː/ in "boot" – which were raised to /iː/ and /yː/. Later on, Northern English /yː/ changed to /iː/ in many dialects (though not in all), so that "boot" has the same vowel as "feet". Southern Middle English had two close-mid vowels – /eː/ in "feet" and /oː/ in "boot" – which were raised to /iː/ and /uː/.
In Southern English, the close vowels /iː/ in "bite" and /uː/ in "house" shifted to become diphthongs, but in Northern English, /iː/ in "bite" shifted while /uː/ in "house" did not.
If the difference between the Northern and Southern vowel shifts was caused by the vowel systems at the time of the Great Vowel Shift, then /uː/ did not shift in the North because there was no back close-mid vowel /oː/ in Northern English. In Southern English, the shifting of /oː/ towards /uː/ could have caused the diphthongisation of original /uː/, but because Northern English had no back close-mid vowel to shift, the back close vowel /uː/ did not diphthongise.
The printing press was introduced to England in the 1470s by William Caxton and later Richard Pynson. The adoption and use of the printing press accelerated the process of standardisation of English spelling, which continued into the 16th century. | https://en.wikipedia.org/wiki?curid=12872 |
Gilbert Arthur à Beckett
Gilbert Arthur à Beckett (7 April 1837 – 15 October 1891) was an English writer.
Beckett was born at Portland House, Hammersmith, on 7 April 1837, the eldest son of the civil servant and humorist Gilbert Abbott à Beckett and the composer Mary Anne à Beckett, daughter of Joseph Glossop, clerk of the cheque to the hon. corps of gentlemen-at-arms.
His brother was Arthur William à Beckett.
He graduated from Christ Church, Oxford, as a Westminster scholar in 1860.
He was entered at Lincoln's Inn on 15 October 1857, but gave his attention chiefly to drama, producing "Diamonds and Hearts" at the Haymarket Theatre in 1867; this was followed by other light comedies.
His adaptation of a French operetta by Émile Jonas called "The Two Harlequins" opened the new Gaiety Theatre, London in 1868, together with his distant cousin, W. S. Gilbert's, "Robert the Devil" and another piece.
Beckett's pieces include numerous burlesques and pantomimes, the libretti of "Savonarola" (Hamburg, 1884) and "The Canterbury Pilgrims" (Drury Lane, 1884) for the music of Dr. C. V. Stanford.
With the composer Alfred Cellier, Beckett wrote the operetta "Two Foster Brothers" (St. George's Hall, 1877).
In 1879, he was asked by Tom Taylor, the editor of "Punch", to follow the example of his younger brother Arthur and become a regular member of the staff of "Punch".
Three years later he was 'appointed to the Table.'
The "Punch" dinners 'were his greatest pleasure, and he attended them with regularity, although the paralysis of the legs, the result of falling down the stairway of Gower Street station, rendered his locomotion, and especially the mounting of Mr. Punch's staircase, a matter of painful exertion'.
To "Punch" he contributed both prose and verse; he wrote, in greater part, the admirable parody of a boy's sensational shocker (March 1882), and he developed Jerrold's idea of humorous bogus advertisements under the heading 'How we advertise now.'
The idea of one of Sir John Tenniel's best cartoons for "Punch," entitled 'Dropping the Pilot,' illustrative of Bismarck's resignation in 1889, was due to him.
Apart from his work on 'Punch,' he wrote songs and music for the German Reeds' entertainment, while in 1873 and 1874 he was collaborator in two dramatic productions which evoked a considerable amount of public attention.
On 3 March 1873, "The Happy Land" was given at the Court Theatre, 1873, a daring political satire and burlesque of W. S. Gilbert's "The Wicked World".
In this amusing piece of banter three statesmen (Gladstone, Lowe, and Ayrton) were represented as visiting Fairyland in order to impart to the inhabitants the secrets of popular government. The actors representing 'Mr. G.,' 'Mr. L.,' and 'Mr. A.' were dressed so as to resemble the ministers satirised, and the representation elicited a question in the House of Commons and an official visit of the Lord Chamberlain to the theatre, with the result that the actors had to change their 'make-up.'
In the following year, he furnished the 'legend' to Herman Merivale's tragedy 'The White Pilgrim,' first given at the Court in February 1874.
At the close of his life he furnished the 'lyrics' and most of the book for the operetta "La Cigale", which at the time of his death was nearing its four hundredth performance at the Lyric Theatre.
In 1889, he suffered a great shock from the death by drowning of his only son, and he died in London on 15 October 1891, and was buried in Mortlake cemetery.
"Punch" devoted some appreciative stanzas to his memory, bearing the epigraph 'Wearing the white flower of a blameless life' (24 Oct. 1891). His portrait appeared in the well-known drawing of 'The Mahogany Tree' ("Punch", Jubilee Number, 18 July 1887), and likenesses were also given in the 'Illustrated London News' and in Spielmann's 'History of Punch' (1895).
He married Emily, eldest daughter of William Hunt, J.P., of Bath, and his only daughter Minna married in 1896 Mr. Hugh Clifford, C.M.G., governor of Labuan and British North Borneo. | https://en.wikipedia.org/wiki?curid=12874 |
George Hamilton-Gordon, 4th Earl of Aberdeen
George Hamilton-Gordon, 4th Earl of Aberdeen (28 January 1784 – 14 December 1860), styled Lord Haddo from 1791 to 1801, was a British statesman, diplomat and Scottish landowner, successively a Tory, Conservative and Peelite politician and specialist in foreign affairs. He served as Prime Minister from 1852 until 1855 in a coalition between the Whigs and Peelites, with Radical and Irish support. The Aberdeen ministry was filled with powerful and talented politicians, whom Aberdeen was largely unable to control and direct. Despite his efforts to avoid it, the ministry took Britain into the Crimean War, and it fell when its conduct of the war became unpopular, after which Aberdeen retired from politics.
Born into a wealthy family with large estates in Scotland, he lost both parents by the time he was eleven, and his first wife after only seven years of a happy marriage. His daughters died young, and his relations with his sons were difficult. He travelled extensively in Europe, including Greece, and he had a serious interest in the classical civilisations and their archaeology. His Scottish estates having been neglected by his father, he devoted himself, when he came of age, to modernising them according to the latest standards.
After 1812 he became a diplomat, and in 1813, at age 29, was given the critically important embassy to Vienna, where he organized and financed the sixth and final coalition that defeated Napoleon. His rise in politics was equally rapid and lucky, and "two accidents — Canning's death and Wellington's impulsive acceptance of the Canningite resignations" led to his becoming Foreign Secretary for Prime Minister Wellington in 1828 despite "an almost ludicrous lack of official experience"; he had been a minister for less than six months. After holding the position for two years, followed by another cabinet role, by 1841 his experience led to his appointment as Foreign Secretary again under Robert Peel for a longer term. His diplomatic successes include organizing the coalition against Napoleon in 1812–14, normalizing relations with post-Napoleonic France, settling the old border dispute between Canada and the United States, and ending the First Opium War with China in 1842, whereby Hong Kong was obtained. Aberdeen was a poor speaker, but this scarcely mattered in the House of Lords. He exhibited a "dour, awkward, occasionally sarcastic exterior". His friend William Ewart Gladstone said of him that he was "the man in public life of all others whom I have loved. I say emphatically loved. I have loved others, but never like him".
Born in Edinburgh on 28 January 1784, he was the eldest son of George Gordon, Lord Haddo, son of George Gordon, 3rd Earl of Aberdeen. His mother was Charlotte, youngest daughter of William Baird of Newbyth. He lost his father on 18 October 1791 and his mother in 1795, and he was brought up by Henry Dundas, 1st Viscount Melville and William Pitt the Younger. He was educated at Harrow, and St John's College, Cambridge, where he graduated with a Master of Arts in 1804. Before this, however, he had become Earl of Aberdeen on his grandfather's death in 1801, and had travelled all over Europe. On his return to Britain, he founded the Athenian Society. In 1805, he married Lady Catherine Elizabeth, daughter of John Hamilton, 1st Marquess of Abercorn.
In December 1805 Lord Aberdeen took his seat as a Tory Scottish representative peer in the House of Lords. In 1808, he was created a Knight of the Thistle. Following the death of his wife from tuberculosis in 1812 he joined the Foreign Service. He was appointed Ambassador Extraordinary and Minister Plenipotentiary to Austria, and signed the Treaty of Töplitz between Britain and Austria in Vienna in October 1813. In the company of the Austrian Emperor, Francis II, he was an observer at the decisive Coalition victory of the Battle of Leipzig in October 1813; he had met Napoleon in his earlier travels. He became one of the central diplomatic figures in European diplomacy at this time, and he was one of the British representatives at the Congress of Châtillon in February 1814, and at the negotiations which led to the Treaty of Paris in May of that year.
Aberdeen was greatly affected by the aftermath of war which he witnessed at first hand. He wrote home:
The near approach of war and its effects are horrible beyond what you can conceive. The whole road from Prague to [Teplitz] was covered with waggons full of wounded, dead, and dying. The shock and disgust and pity produced by such scenes are beyond what I could have supposed possible...the scenes of distress and misery have sunk deeper in my mind. I have been quite haunted by them.
Returning home he was created a peer of the United Kingdom as Viscount Gordon, of Aberdeen in the County of Aberdeen (1814), and made a member of the Privy Council. In July 1815 he married his former sister-in-law Harriet, daughter of John Douglas, and widow of James Hamilton, Viscount Hamilton; the marriage was much less happy than his first. During the ensuing thirteen years Aberdeen took a less prominent part in public affairs.
Lord Aberdeen served as Chancellor of the Duchy of Lancaster between January and June 1828 and subsequently as Foreign Secretary until 1830 under the Duke of Wellington. He resigned with Wellington over the Reform Bill of 1832.
He was Secretary of State for War and the Colonies between 1834 and 1835, and again Foreign Secretary between 1841 and 1846 under Sir Robert Peel. It was during his second stint as Foreign Secretary that he had the harbour settlement of 'Little Hong Kong', on the south side of Hong Kong Island, named after him. It was probably the most productive period of his career; he settled two disagreements with the US: the northeast boundary dispute by the Webster-Ashburton Treaty (1842), and the Oregon dispute by the Oregon Treaty of 1846. He enjoyed the trust of Queen Victoria, which was still important for a Foreign Secretary. He worked closely with Henry Bulwer, his ambassador to Madrid, to help arrange marriages for Queen Isabella and her younger sister the Infanta Luisa Fernanda. They helped stabilize Spain's internal and external relations. He sought better relations with France, relying on his friendship with Guizot. However Britain was annoyed with France on a series of issues, especially French colonial policies, the right to search slave ships, the French desire to control Belgium, disputes in the Pacific, and French intervention in Morocco.
Aberdeen again followed his leader and resigned with Peel over the issue of the Corn Laws. After Peel's death in 1850 he became the recognised leader of the Peelites. In July 1852, a general election of Parliament was held which resulted in the election of 325 Tory/Conservative party members to Parliament. This represented 42.7% of the seats in Parliament. The main opposition to the Tory/Conservative Party was the Whig Party, which elected 292 members of the party to the Parliament in July 1852. Although occupying fewer seats than the Tory/Conservatives, the Whigs had a chance to draw support from the minor parties and independents who were also elected in July 1852. Lord Aberdeen was one of 38 Peelites elected to Parliament independently of the Tory/Conservative Party.
While the Peelites agreed with the Whigs on issues dealing with international trade, there were other issues on which the Peelites disagreed with the Whigs. Indeed, Lord Aberdeen's own dislike of the Ecclesiastical Titles Assumption Bill, the rejection of which he failed to secure in 1851, prevented him from joining the Whig government of Lord John Russell in 1851. Additionally, 113 of the members of Parliament elected in July 1852 were Free Traders. These members agreed with the Peelites on the repeal of the "Corn Laws," but they felt that the tariffs on "all" consumer products should be removed.
Furthermore, 63 members of Parliament elected in 1852 were members of the "Irish Brigade," who voted with the Peelites and the Whigs for the repeal of the Corn Laws because they sought an end to the Great Irish Famine by means of cheaper wheat and bread prices for the poor and middle classes in Ireland. At the time, however, the Free Traders and the Irish Brigade had disagreements with the Whigs that prevented them from joining with the Whigs to form a government. Accordingly, the Tory/Conservative Party leader, the Earl of Derby, was asked to form a "minority government". Derby appointed Benjamin Disraeli as the Chancellor of the Exchequer for the minority government.
When in December 1852 Disraeli submitted his budget to Parliament on behalf of the minority government, the Peelites, the Free Traders, and the Irish Brigade were all alienated by the proposed budget. Accordingly, those groups suddenly forgot their differences with the Whig Party and voted with the Whigs against the proposed budget. The vote was 286 in favour of the budget and 305 votes against the budget. Because the leadership of the minority government had made the budget vote a "vote of confidence", the defeat of the Disraeli budget was a "vote of no confidence" in the minority government and meant its downfall. Accordingly, Lord Aberdeen was asked to form a new government.
Following the downfall of the Tory/Conservative minority government under Lord Derby in December 1852, Lord Aberdeen formed a new government from the coalition of Free Traders, Peelites, and Whigs that had voted no confidence in the minority government. Lord Aberdeen was able to put together a coalition that held 53.8% of the seats of Parliament. Thus Lord Aberdeen, a Peelite, became Prime Minister and headed a coalition ministry of Whigs and Peelites.
Although united on international trade issues and on questions of domestic reform, his cabinet also contained Lord Palmerston and Lord John Russell, who were certain to differ on questions of foreign policy. Charles Greville wrote in his "Memoirs", "In the present cabinet are five or six first-rate men of equal, or nearly equal, pretensions, none of them likely to acknowledge the superiority or defer to the opinions of any other, and every one of these five or six considering himself abler and more important than their premier"; and Sir James Graham wrote, "It is a powerful team, but it will require good driving", which Aberdeen was unable to provide. During the administration, much trouble was caused by the rivalry between Palmerston and Russell, and over the course of it Palmerston managed to out-manoeuvre Russell to emerge as the Whig heir apparent. The cabinet also included a single Radical, Sir William Molesworth, but much later, when justifying to the Queen his own new appointments, Gladstone told her: "For instance, even in Ld Aberdeen's Govt, in 52, Sir William Molesworth had been selected, at that time, a very advanced Radical, but who was perfectly harmless, & took little, or no part... He said these people generally became very moderate, when they were in office", which she admitted had been the case.
One of the foreign policy issues on which Palmerston and Russell disagreed was the type of relationship that Britain should have with France and especially France's ruler, Louis-Napoléon Bonaparte. Bonaparte was the nephew of the famous Napoleon Bonaparte, who had become dictator and then Emperor of France from 1804 until 1814. The younger Bonaparte had been elected President of the Second Republic of France on 20 December 1848. The Constitution of the Second Republic limited the President to a single four-year term, so Bonaparte would be unable to succeed himself once his term expired. Consequently, on 2 December 1851, before the end of his term in office, Bonaparte staged a coup against the Second Republic, disbanded the elected National Assembly, arrested some of the Republican leaders, and assumed dictatorial powers; a year later he declared himself Emperor Napoleon III of France. This coup upset many democrats in England as well as in France. Some British government officials felt that Louis Bonaparte was seeking foreign adventure in the spirit of his uncle, Napoleon I. Consequently, these officials felt that any close association with Bonaparte would eventually lead Britain into another series of wars, like the wars with France and Napoleon dating from 1793 until 1815. British relations with France had scarcely improved since 1815. As prime minister, the Earl of Aberdeen was one of the officials who feared France and Bonaparte.
However, other British government officials were beginning to worry more about the rising political dominance of the Russian Empire in eastern Europe and the corresponding decline of the Ottoman Empire. Lord Palmerston at the time of Louis Bonaparte's 2 December 1851 coup was serving as the Secretary of State for Foreign Affairs in the Whig government of Prime Minister Lord John Russell. Without informing the rest of the cabinet or Queen Victoria, Palmerston had sent a private note to the French ambassador endorsing Louis Bonaparte's coup and congratulating Louis Bonaparte himself on the coup. Queen Victoria and members of the Russell government demanded that Palmerston be dismissed as Foreign Minister. Russell requested Palmerston's resignation and Palmerston reluctantly provided it.
In February 1852, Palmerston took revenge on Russell by voting with the Conservatives in a "no confidence" vote against the Russell government. This brought an end to the Russell Whig government and set the stage for a general election in July 1852, which eventually brought the Conservatives to power in a minority government under the Earl of Derby. Later in the year, another problem facing the Earl of Aberdeen in the formation of his own new government in December 1852 was Lord John Russell himself. Russell was the leader of the Whig Party, the largest group in the coalition government. Consequently, Lord Aberdeen was required to appoint Russell as the Secretary of State for Foreign Affairs, which he did on 29 December 1852. However, Russell sometimes liked to use this position to speak for the whole government, as if he were the prime minister. In 1832, Russell had been nicknamed "Finality John" because of his statement that the 1832 Reform Act, which had just been approved by both the House of Commons and the House of Lords, would be the "final" expansion of the vote in Britain: there would be no further extension of the ballot to the common people. However, as political pressure in favour of further reform rose over the twenty years after 1832, Russell changed his mind. In January 1852, Russell said that he intended to introduce a new reform bill into the House of Commons which would equalise the populations of the districts from which members of Parliament were elected. Probably as a result of their continuing feud, Palmerston declared himself against this Reform Bill of 1852. As a result, support for the bill dwindled and Russell was forced to change his mind again and not introduce any Reform Bill in 1852.
In order to form the coalition government, the Earl of Aberdeen had been required to appoint both Palmerston and Russell to his cabinet. Because of the controversy surrounding Palmerston's removal as Secretary of State for Foreign Affairs, Palmerston could not now be appointed Foreign Minister again so soon after his removal from that position. Accordingly, on 28 December 1852, Aberdeen appointed Palmerston as Home Secretary and appointed Russell as Foreign Minister.
Given the differences of opinion within the Aberdeen cabinet over the direction of foreign policy with regard to relations between Britain and France, it is not surprising that debate raged within the government over how to respond to Louis Bonaparte, who had now assumed the title of Emperor Napoleon III. As Prime Minister of the Peelite/Whig coalition government, Aberdeen eventually led Britain into war on the side of the French and Ottomans against the Russian Empire. This war would eventually be called the Crimean War, but throughout the foreign policy negotiations surrounding the dismemberment of the Ottoman Empire, which would continue through the middle and end of the nineteenth century, the problem would be referred to as the "Eastern Question".
The cabinet was bitterly divided. Palmerston stirred up anti-reform feeling in Parliament and pro-war public opinion to out-manoeuvre Russell. The result was that the weak Aberdeen government went to war with Russia as the result of internal British political rivalries. Aberdeen accepted Russian arguments at face value because he sympathised with Russian interests against French pressure, and was not in favour of the Crimean War. However, he was unable to resist the pressure exerted on him by Palmerston's faction. In the end, the Crimean War proved to be the downfall of his government.
The Eastern Question flared up after 2 December 1852, when Louis Bonaparte proclaimed himself Emperor Napoleon III. As Napoleon III was forming his new imperial government, he sent an ambassador to the Ottoman Empire with instructions to assert France's right to protect Christian sites in Jerusalem and the Holy Land. The Ottoman Empire agreed to this condition to avoid conflict or even war with France. Aberdeen, as Foreign Secretary in 1845, had himself tacitly authorised the construction of the first Anglican church in Jerusalem, following his predecessor's commission in 1838 of the first European Consul in Jerusalem on Britain's behalf, which led to a series of successive appointments by other nations. Both resulted from Lord Shaftesbury's canvassing, which enjoyed substantial public support.
Nevertheless, Britain became increasingly worried about the situation in Turkey, and Prime Minister Aberdeen sent Lord Stratford de Redcliffe, a diplomat with vast experience in Turkey, as a special envoy to the Ottoman Empire to guard British interests. Russia protested the Turkish agreement with the French as a violation of the Treaty of Küçük Kaynarca of 1774, which ended the Russo-Turkish War (1768–1774). Under the treaty, the Russians had been granted the exclusive right to protect the Christian sites in the Holy Land. Accordingly, on 7 May 1853, the Russians sent Prince Alexander Sergeyevich Menshikov, one of their premier statesmen, to negotiate a settlement of the issue. Prince Menshikov called the attention of the Turks to the fact that during the Russo-Turkish War, the Russians had occupied the Turkish-controlled provinces of Wallachia and Moldavia on the north bank of the Danube River, and he reminded them that pursuant to the Treaty of Küçük Kaynarca, the Russians had returned these "Danubian provinces" to Ottoman control in exchange for the right to protect the Christian sites in the Holy Land. Accordingly, the Turks reversed themselves and agreed with the Russians.
The French sent one of their premier ships-of-the-line, the "Charlemagne", to the Black Sea as a show of force. In light of the French show of force, the Turks, again, reversed themselves and recognised the French right to protect the Christian sites. Lord Stratford de Redcliffe was advising the Ottomans during this time, and later it was alleged that he had been instrumental in persuading the Turks to reject the Russian arguments.
As war became inevitable, Aberdeen wrote to Russell:
The abstract justice of the cause, although indisputable, is but a poor consolation for the inevitable calamities of all war, or for a decision which I am not without fear may prove to have been impolitic and unwise. My conscience upbraids me the more, because seeing, as I did from the first, all that was to be apprehended, it is possible that by a little more energy and vigour, not on the Danube, but in Downing Street, it might have been prevented.
In response to this latest change of mind by the Ottomans, the Russians on 2 July 1853 occupied the Turkish satellite states of Wallachia and Moldavia, as they had during the Russo-Turkish War of 1768–1774. Almost immediately, the Russian troops deployed along the northern banks of the Danube River, implying that they might cross the river. Aberdeen ordered the British Fleet to Constantinople and later into the Black Sea. On 23 October 1853, the Ottoman Empire declared war on Russia. A Russian naval raid on Sinope, on 30 November 1853, resulted in the destruction of the Turkish fleet in the battle of Sinope. When Russia ignored an Anglo-French ultimatum to abandon the Danubian provinces, Britain and France declared war on Russia on 28 March 1854. In September 1854, British and French troops landed on the Crimean peninsula at Eupatoria, north of Sevastopol. The Allied troops then forced a crossing of the Alma River on 20 September 1854 at the Battle of the Alma and laid siege to the fortress of Sevastopol.
A Russian attack on the allied supply base at Balaclava on 25 October 1854 was rebuffed. The Battle of Balaclava is noted for its famous (or rather infamous) Charge of the Light Brigade. On 5 November 1854, Russian forces tried to relieve the siege at Sevastopol and defeat the Allied armies in the field in the Battle of Inkerman. However, this attempt failed. Dissatisfaction as to the course of the war grew in England. As reports returned detailing the mismanagement of the conflict, Parliament began to investigate. On 29 January 1855, John Arthur Roebuck introduced a motion for the appointment of a select committee to enquire into the conduct of the war. This motion was carried by the large majority of 305 in favour and 148 against.
Treating this as a vote of no confidence in his government, Aberdeen resigned, and retired from active politics, speaking for the last time in the House of Lords in 1858. In visiting Windsor Castle to resign, he told the Queen: "Nothing could have been better, he said than the feeling of the members towards each other. Had it not been for the incessant attempts of Ld John Russell to keep up party differences, it must be acknowledged that the experiment of a coalition had succeeded admirably. We discussed future possibilities & agreed that nothing remained to be done, but to offer the Govt to Ld Derby...". The Queen continued to criticise Lord John Russell for his behaviour for the rest of his life; on his death in 1878 her journal records that he was "A man of much talent, who leaves a name behind him, kind, & good, with a great knowledge of the constitution, who behaved very well, on many trying occasions; but he was impulsive, very selfish (as shown on many occasions, especially during Ld Aberdeen's administration) vain, & often reckless & imprudent".
British-American relations had been troublesome under Palmerston, but Aberdeen proved much more conciliatory, and worked well with Daniel Webster, the American Secretary of State, who was himself an Anglophile. In 1842, Aberdeen sent Lord Ashburton to Washington to settle all disputes, especially the border between Canada and Maine, the boundary along the Great Lakes, the Oregon boundary, the African slave trade, the Caroline affair of 1837, and the Creole case of 1841 involving a slave revolt on the high seas. The Webster–Ashburton Treaty of 1842 settled most of the problems amicably. Thus Maine got most of the disputed land, but Canada obtained a vital strategic strip of land connecting it to a warm water port. Aberdeen helped solve the Oregon dispute amicably in 1846. However, as prime minister, Aberdeen had trouble with the United States. In 1854 an American naval vessel bombarded the Mosquito Coast port of Greytown, Nicaragua, in retaliation for an insult; Britain protested. Later in 1854, the United States announced its intention of annexing Hawaii, and Britain not only complained but sent a naval force to make the point. Negotiations for a reciprocal trade agreement between the United States and Canada dragged on for eight years until a reciprocity treaty was reached in 1854.
Aberdeen was generally successful as a hard-working diplomat, but his reputation has suffered greatly because of the lack of military success in the Crimean War and from the ridicule of enemies such as Disraeli, who regarded him as weak, inefficient, and cold. Before the Crimean debacle that ended his career he scored numerous diplomatic triumphs, starting in 1813–14 when as ambassador to the Austrian Empire he negotiated the alliances and financing that led to the defeat of Napoleon. In Paris, he normalized relations with the newly restored Bourbon government and convinced London it could be trusted. He worked well with top European diplomats such as his friends Klemens von Metternich in Vienna and François Guizot in Paris. He brought Britain into the center of Continental diplomacy on critical issues, such as the local wars in Greece, Portugal, and Belgium. Simmering troubles on numerous issues with the United States were ended by friendly compromises. He played a central role in winning the Opium Wars against China, gaining control of Hong Kong in the process.
Lord Aberdeen married Lady Catherine Elizabeth Hamilton (10 January 1784 – 29 February 1812; daughter of Lord Abercorn) on 28 July 1805. They had four children.
He remarried Harriet Douglas (paternal granddaughter of James Douglas, 14th Earl of Morton and maternal granddaughter of Edward Lascelles, 1st Earl of Harewood) on 8 July 1815. They had five children:
The Countess of Aberdeen died in August 1833. Lord Aberdeen died at Argyll House, St. James's, London, on 14 December 1860, and was buried in the family vault at Stanmore church. In 1994 the novelist, columnist, and politician Ferdinand Mount used George Gordon's life as the basis for a historical novel, "Umbrella".
Apart from his political career, Aberdeen was also a scholar of the classical civilisations, who published "An Inquiry into the Principles of Beauty in Grecian Architecture" (London, 1822) and was referred to by his cousin Lord Byron in his "English Bards and Scotch Reviewers" (1809) as "the travell'd thane, Athenian Aberdeen." He was appointed Chancellor of the University of Aberdeen in 1827 and was President of the Society of Antiquaries of London.
Aberdeen's biographer Muriel Chamberlain summarises, "Religion never came easy to him". In his Scots landowning capacity "North of the border, he considered himself "ex officio" a Presbyterian". In England "he privately considered himself an Anglican"; as early as 1840 he told Gladstone he preferred what Aberdeen called "the sister church [of England]" and when in London worshipped at St James's Piccadilly. He was ultimately buried in the Anglican parish church at Stanmore, Middlesex.
He was a member of the General Assembly of the Church of Scotland from 1818 to 1828 and exercised his existing rights to present ministers to parishes on his Scottish estates. During this period, the right of congregations to veto the appointment or 'call' of a minister became so contentious that it led in 1843 to the schism known as "the Disruption", when a third of ministers broke away to form the Free Church of Scotland. In the House of Lords, in 1840 and 1843, he raised two Compromise Bills to allow presbyteries, but not congregations, the right of veto. The first failed to pass (and was voted against by the General Assembly), but the latter, raised post-schism, became law for Scotland and remained in force until patronage of Scots livings was abolished in 1874.
It was under his prime ministership that the revival of the Convocations of Canterbury and York began, though they did not obtain their potential power till 1859.
He is said in the last few months of his life, after the Crimean War, to have declined to contribute to building a church on his Scotland estates because of a sense of guilt in having "shed much blood", citing biblically King David's being forbidden to build the Temple in Jerusalem.