Pai gow poker
Pai gow poker (also called double-hand poker) is a version of pai gow that is played with playing cards, instead of traditional pai gow's Chinese dominoes. The game of pai gow poker was created in 1985 in the United States by Sam Torosian, owner of the Bell Card Club.
The game is played with a standard 52-card deck, plus a single joker. It is played on a table set for six players, plus the dealer. Each player attempts to defeat the banker (who may be the casino dealer, one of the other players at the table, or a player acting in tandem with the dealer as co-bankers).
The object of pai gow poker is to make, from seven dealt cards, a five-card poker hand and a two-card poker hand that each beat the banker's corresponding hand. The five-card hand's rank must exceed that of the two-card hand, which is why the two-card hand is often called the hand "in front", "on top", "hair", or the "small", "minor", or "low" hand. The five-card hand is called the hand "behind", or the "bottom", "high", or "big" hand, since that is how the two hands are placed in front of the player once they are set.
The cards are shuffled, and then dealt to the table in seven face-down piles of seven cards per pile. Four cards are unused regardless of the number of people playing.
Betting positions are assigned a number from 1 to 7, starting with whichever player is acting as banker that hand, and counting counter-clockwise around the table. A number from 1 to 7 is randomly chosen (either electronically or manually with dice), then the deal begins with the corresponding position and proceeds counter-clockwise. One common way of using dice to determine the dealer starting number is to roll three six-sided dice, and then count betting spots clockwise from the first position until the number on the dice is reached.
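The dice-to-seat mapping described above is simple modular arithmetic. A minimal sketch (the function names and the 7-spot table size are illustrative assumptions, not casino terminology):

```python
import random

SPOTS = 7  # betting positions at a pai gow poker table

def starting_spot(dice_total: int) -> int:
    """Map a three-dice total (3-18) to a betting spot, counting from
    position 1 and wrapping around the table after spot 7."""
    return (dice_total - 1) % SPOTS + 1

def roll_for_deal() -> int:
    """Roll three six-sided dice and return the spot that is dealt first."""
    return starting_spot(sum(random.randint(1, 6) for _ in range(3)))
```

A total of 7 lands on spot 7, while a total of 8 wraps back around to spot 1.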
If a player is not sitting on a particular spot, the hand is still assigned, but then placed on the discard pile with the four unused cards. In some casinos, such as the Golden Nugget and Palms in Las Vegas, Nevada, an extra "dragon hand" is dealt if a seat is vacant. After all players have set their original hand they are asked in turn if they would like to place another bet to play the dragon hand. Generally the bet on the dragon hand can be the table minimum up to the amount the player bet on their original hand. The first player to accept the dragon hand receives it; this player is effectively playing two separate hands. Rules vary from casino to casino, but generally the dealer turns over the dragon hand and sets it using the house way. This is because the player has already seen 7 cards (their original hand) which could affect the way they would set the dragon hand.
The only two-card hands are one pair and high cards.
Five-card hands use standard poker hand rankings with one exception: in most casinos, the "wheel" (the hand A-2-3-4-5) is the second-highest straight. At most casinos in California and Michigan this rule doesn't apply, and A-2-3-4-5 is the lowest possible straight.
The joker plays as a bug, that is, in the five-card hand it can be used to complete a straight or flush if possible; otherwise it is an ace. In the two-card hand it always plays as an ace, except in several southern Californian casinos where the joker is wild.
If each of the player's hands beats each of the banker's corresponding hands, then he wins the bet. If only one of his hands beats the banker then he pushes (ties) in which case neither he nor the banker wins the bet. If both of his hands lose to the banker then he loses.
On each hand, ties go to the banker (for example, if a player's five-card hand loses to the banker and his two-card hand ties the banker then the player loses); this gives the banker a small advantage. If the player fouls his hand, meaning that his two-card hand outranks his five-card hand, or that there are an incorrect number of cards in each hand, there will usually be a penalty: either re-arrangement of the hand according to house rules or forfeiture of the hand.
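The settlement rule above (a player must beat both hands to win, a split decision pushes, and ties go to the banker) can be sketched as a small function; the boolean inputs are assumed to be precomputed hand comparisons, with names chosen for illustration:

```python
def settle(player_wins_high: bool, player_wins_low: bool) -> str:
    """Settle a pai gow poker bet.

    Each argument is True only when the player's hand strictly outranks
    the banker's corresponding hand -- a tied hand counts for the banker.
    """
    if player_wins_high and player_wins_low:
        return "win"    # player beats both of the banker's hands
    if player_wins_high or player_wins_low:
        return "push"   # split decision: neither side wins the bet
    return "lose"       # banker wins (or ties) both hands
```

The "ties go to the banker" edge is exactly where the banker's small advantage comes from: the player must strictly win both comparisons, while the banker profits from any tie.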
In casino-banked games, the banker is generally required to set his hand in a pre-specified manner, called the "house way", so that the dealer does not have to implement any strategy in order to beat the players. When a player is banking, he is free to set the hand however he chooses; however, players have the option of "co-banking" with the house, and if this option is chosen then the player's hand must also be set in the house way.
California casinos typically charge a flat fee per hand (such as 5 cents or one dollar) to play, win or lose. Other casinos take a 5% commission out of the winnings, which is usually known as the "rake".
There are a number of variations of pai gow poker popular in casinos today, mainly devised between 2004 and 2009. Pai Gow Mania, the first variation created, allows two side bets instead of the traditional one side bet per hand. Fortune Pai Gow, one of the most popular variations, allows players to make a side bet on a poker hand ranking of trips or better. Similar to Fortune Pai Gow is Emperors Challenge, which also allows a side bet on a 7-card pai gow (no hand). Shuffle Master introduced a variation of the game in 2006, named Progressive Fortune Pai Gow, adding a progressive jackpot side bet. Part or all of the jackpot may be won by placing a side bet and landing one of the hands specified on the payout table. The hand that wins 100% of the jackpot is a combined seven-card straight flush.
Advantage play refers to legal methods used to gain an advantage while gambling. In pai gow poker, a player may be able to gain an advantage in certain circumstances by banking as often as possible, taking advantage of unskilled players while banking, and dealer errors when not banking.
Sam Torosian, owner of the Bell Card Club in Los Angeles, invented the game of Pai Gow Poker in 1985. The idea for the game came to Torosian after being told about the game Pusoy by an elderly Filipino customer. He figured that the 13-card game with players arranging 3 hands would be too slow, but a simplified 2-hand version with only 7 cards would be faster and easier for players to learn. The game quickly became popular and by the late 1980s was being played on the Las Vegas strip, and eventually worldwide. Torosian famously failed to patent the game he invented after being given bad advice by an attorney he consulted and by noted poker author Mike Caro, both of whom told him that the game was not patentable.
Protoscience
In philosophy of science, there are several definitions of protoscience.
Its simplest meaning (most closely reflecting its roots of "proto-" + "science") involves the earliest eras of the history of science, when the scientific method was still nascent.
Another meaning extends this idea into the present, with protoscience being an emerging field of study which is still not completely scientific, but later becomes a proper science. An example is the general theory of relativity, which began as a protoscience (a theoretical framework that had not yet been tested) but was later experimentally verified and became fully scientific. Protoscience in this sense is distinguished from pseudoscience by a genuine willingness to be changed through new evidence, as opposed to having a theory that can always find a way to rationalize a predetermined belief.
Philosopher of chemistry Jaap Brakel defines protoscience as "the study of "normative" criteria for the use of experimental technology in science." Thomas Kuhn said that protosciences "generate testable conclusions but ... nevertheless resemble philosophy and the arts rather than the established sciences in their developmental patterns. I think, for example, of fields like chemistry and electricity before the mid-18th century, of the study of heredity and phylogeny before the mid-nineteenth, or of many of the social sciences today." While noting that they meet the demarcation criteria of falsifiability from Popper, he questions whether the discussion in protoscience fields "result[s] in clear-cut progress". Kuhn concluded that protosciences, "like the arts and philosophy, lack some element which, in the mature sciences, permits the more obvious forms of progress. It is not, however, anything that a methodological prescription can provide. ... I claim no therapy to assist the transformation of a proto-science to a science, nor do I suppose anything of this sort is to be had".
The term "prescientific" means at root "relating to an era before science existed". For example, traditional medicine existed for thousands of years before medical science did, and thus many aspects of it can be described as prescientific. In a related sense, protoscientific topics (such as the alchemy of Newton's day) can be called prescientific, in which case the "proto-" and "pre-" labels can function more or less synonymously (the latter focusing more sharply on the idea that nothing but science is science).
Compare fringe science, which is considered highly speculative or even strongly refuted. Some protosciences go on to become an accepted part of mainstream science.
Pickelhaube
The Pickelhaube (plural "Pickelhauben"; from the German "Pickel", "point" or "pickaxe", and "Haube", "bonnet", a general word for "headgear"), also Pickelhelm, is a spiked helmet worn in the 19th and 20th centuries by Prussian and German military, firefighters and police. Although typically associated with the Prussian Army, which adopted it in 1842–43, the helmet was widely imitated by other armies during that period. It is still worn today as part of ceremonial wear in the militaries of certain countries, such as Sweden or Colombia.
The Pickelhaube was originally designed in 1842 by King Frederick William IV of Prussia, perhaps as a copy of similar helmets that were adopted at the same time by the Russian military. It is not clear whether this was a case of imitation, parallel invention, or if both were based on the earlier Napoleonic cuirassier. The early Russian type (known as "The Helmet of Yaroslav Mudry") was also used by cavalry, which had used the spike as a holder for a horsehair plume in full dress, a practice also followed with some Prussian models (see below).
Frederick William IV introduced the Pickelhaube for use by the majority of Prussian infantry on 23 October 1842 by a royal cabinet order. The use of the Pickelhaube spread rapidly to other German principalities. Oldenburg adopted it by 1849, Baden by 1870, and in 1887, the Kingdom of Bavaria was the last German state to adopt the Pickelhaube (since the Napoleonic Wars, Bavaria had used its own design, a Tarleton-style helmet). Amongst other European armies, that of Sweden adopted the Prussian version of the spiked helmet in 1845 and the Russian Army in 1846.
From the second half of the 19th century onwards, the armies of a number of nations besides Russia (including Argentina, Bolivia, Colombia, Chile, Ecuador, Mexico, Portugal, Norway, Sweden, and Venezuela) adopted the Pickelhaube or something very similar. The popularity of this headdress in Latin America arose from a period during the early 20th century when military missions from Imperial Germany were widely employed to train and organize national armies. Peru was the first to use the helmet for the Peruvian Army when some helmets were shipped to the country in the 1870s, but during the War of the Pacific the 6th Infantry Regiment "Chacabuco" of the Chilean Army became the first Chilean military unit to use them when its personnel used the helmets—which were seized from the Peruvians—in their red French-inspired uniforms. These sported the Imperial German eagles but in the 1900s the eagles were replaced by the national emblems of the countries that used them.
The Russian version initially had a horsehair plume fitted to the end of the spike, but this was later discarded in some units. The Russian spike was topped with a grenade motif. At the beginning of the Crimean War, such helmets were common among infantry and grenadiers, but soon fell out of place in favour of the forage cap. After 1862 the spiked helmet ceased to be generally worn by the Russian Army, although it was retained until 1914 by the Cuirassier regiments of the Imperial Guard and the Gendarmerie. The Russians prolonged the history of the pointed military headgear with their own cloth Budenovka in the early 20th century.
In 1847, the Household Cavalry, along with British dragoons and dragoon guards, adopted a helmet which was a hybrid between the Pickelhaube and the traditional dragoon helmet which it replaced. This "Albert Pattern" helmet was named after Albert, Prince Consort who took a keen interest in military uniforms, and featured a falling horsehair plume which could be removed when on campaign. It was adopted by other heavy cavalry regiments across the British Empire and remains in ceremonial use. The Pickelhaube also influenced the design of the British army Home Service helmet, as well as the custodian helmet still worn by police in England and Wales. The linkage between Pickelhaube and Home Service helmet was however not a direct one, since the British headdress was higher, had only a small spike and was made of stiffened cloth over a cork framework, instead of leather. Both the United States Army and Marine Corps wore helmets of the British pattern for full dress between 1881 and 1902.
The basic Pickelhaube was made of hardened (boiled) leather, given a glossy-black finish, and reinforced with metal trim (usually plated with gold or silver for officers) that included a metal spike at the crown. Early versions had a high crown, but the height gradually was reduced and the helmet became more fitted in form, in a continuing process of weight-reduction and cost-saving. In 1867 a further attempt at weight reduction by removing the metal binding of the front peak, and the metal reinforcing band on the rear of the crown (which also concealed the stitched rear seam of the leather crown), did not prove successful.
The version of the Pickelhaube worn by Prussian artillery units employed a ball-shaped finial rather than the pointed spike, a modification ordered in 1844 because of injuries to horses and damage to equipment caused by the latter. Prior to the outbreak of World War I in 1914 detachable black or white plumes were worn with the Pickelhaube in full dress by German generals, staff officers, dragoon regiments, infantry of the Prussian Guard and a number of line infantry regiments as a special distinction. This was achieved by unscrewing the spike (a feature of all Pickelhauben regardless of whether they bore a plume) and replacing it with a tall metal plume-holder known as a "trichter". For musicians of these units, and also for Bavarian Artillery and an entire cavalry regiment of the Saxon Guard, this plume was red.
Aside from the spike finial, perhaps the most recognizable feature of the Pickelhaube was the ornamental front plate, which denoted the regiment's province or state. The most common plate design consisted of a large, spread-winged eagle, the emblem used by Prussia. Different plate designs were used by Bavaria, Württemberg, Baden, and the other German states. The Russians used the traditional double-headed eagle.
German military Pickelhauben also mounted two round, colored cockades behind the chinstraps attached to the sides of the helmet. The right cockade, the national cockade, was red, black and white. The left cockade was used to denote the state of the soldier (Prussia: black and white; Bavaria: white and blue; etc.).
All-metal versions of the Pickelhaube were worn mainly by cuirassiers, and often appear in portraits of high-ranking military and political figures (such as Otto von Bismarck, pictured above). These helmets were sometimes referred to as lobster-tail helmets, due to their distinctive articulated neck guard. The design of these is based on cavalry helmets in common use since the 16th century, but with some features taken from the leather helmets. The version worn by the Prussian Gardes du Corps was of tombac (copper and zinc alloy) with silver mountings. That worn by the cuirassiers of the line since 1842 was of polished steel with brass mountings.
In 1892, a light brown cloth helmet cover, the M1892 Überzug, became standard issue for all Pickelhauben for manoeuvres and active service. The Überzug was intended to protect the helmet from dirt and reduce its combat visibility, as the brass and silver fittings on the Pickelhaube proved to be highly reflective. Regimental numbers were sewn or stenciled in red (green from August 1914) onto the front of the cover, other than in units of the Prussian Guards, which never carried regimental numbers or other adornments on the Überzug. With exposure to the sun, the Überzug faded into a tan shade. In October 1916 the colour was changed to feldgrau (field grey), although by that date the plain metal Stahlhelm was standard issue for most troops.
All helmets produced for the infantry before and during 1914 were made of leather. As the war progressed, Germany's leather stockpiles dwindled. After extensive imports from South America, particularly Argentina, the German government began producing ersatz Pickelhauben made of other materials. In 1915, some Pickelhauben began to be made from thin sheet steel. However, the German high command needed to produce an even greater number of helmets, leading to the usage of pressurized felt and even paper to construct Pickelhauben. The Pickelhaube was discontinued in 1916.
During the early months of World War I, it was soon discovered that the Pickelhaube did not measure up to the demanding conditions of trench warfare. The leather helmets offered little protection against shell fragments and shrapnel and the conspicuous spike made its wearer a target. These shortcomings, combined with material shortages, led to the introduction of the simplified model 1915 helmet described above, with a detachable spike. In September 1915 it was ordered that the new helmets were to be worn without spikes when in the front line.
Beginning in 1916, the Pickelhaube was slowly replaced by a new German steel helmet (the "Stahlhelm") intended to offer greater head protection from shell fragments. The German steel helmet decreased German head wound fatalities by 70%. After the adoption of the Stahlhelm, the Pickelhaube was reduced to limited ceremonial wear by senior officers away from the war zones; plus the "Leibgendarmerie S.M. des Kaisers" whose role as an Imperial/Royal escort led them to retain peacetime full dress throughout the war. With the collapse of the German Empire in 1918, the Pickelhaube ceased to be part of the military uniform, and even the police adopted shakos of a Jäger style. In modified forms the new Stahlhelm helmet would continue to be worn by German troops into World War II.
The Pickelhaube is still part of the parade/ceremonial uniform of the Life Guards of Sweden, the National Republican Guard (GNR) of Portugal, the military academies of Chile, Colombia, Venezuela and Ecuador, the Military College of Bolivia, the Army Central Band and Army School Bands of Chile, the Chilean Army's 1st Cavalry and 1st Artillery Regiments, and the Presidential Guard Battalion and National Police of Colombia. The Blues and Royals, the Life Guards of the United Kingdom and traffic police in the Hashemite Kingdom of Jordan also use different forms of the Pickelhaube. The modern Romanian Gendarmerie ("Jandarmeria Româna") maintain a mounted detachment who wear a white plumed Pickelhaube of a model dating from the late 19th century, as part of their ceremonial uniform.
As early as 1844, the poet Heinrich Heine mocked the Pickelhaube as a symbol of reaction and an unsuitable head-dress. He cautioned that the spike could easily "draw modern lightnings down on your romantic head". The poem was part of his political satire, entitled "Germany. A Winter's Tale", on the contemporary monarchy, national chauvinism, and militarism used aggressively against democratic movements.
In the lead-up to the 2006 FIFA World Cup in Germany, a molded plastic version of the Pickelhaube was available as a fanware article. The common model was colored in the black-red-gold of the German flag, with a variety of other colors also available.
The spiked helmet remained part of a clichéd mental picture of Imperial Germany as late as the inter-war period even after the headdress had ceased to be worn. This was possibly because of the extensive use of the pickelhaube in Allied propaganda before and during World War I, although the helmet had been a well known icon of Imperial Germany even prior to 1914. Pickelhauben were popular targets for Allied souvenir hunters during the early months of the war.
Pope Gregory XIII
Pope Gregory XIII (7 January 1502 – 10 April 1585), born Ugo Boncompagni, was head of the Catholic Church and ruler of the Papal States from 13 May 1572 to his death in 1585. He is best known for commissioning and being the namesake for the Gregorian calendar, which remains the internationally accepted civil calendar to this day.
Ugo Boncompagni was born the son of Cristoforo Boncompagni (10 July 1470 – 1546) and of his wife Angela Marescalchi in Bologna, where he studied law and graduated in 1530. He later taught jurisprudence for some years, and his students included notable figures such as Cardinals Alexander Farnese, Reginald Pole and Charles Borromeo. Before taking holy orders, he had an illegitimate son, Giacomo Boncompagni, from an affair with Maddalena Fulchini.
At the age of thirty-six he was summoned to Rome by Pope Paul III (1534–1549), under whom he held successive appointments as first judge of the capital, abbreviator, and vice-chancellor of the Campagna e Marittima.
Pope Paul IV (1555–1559) attached him as "datarius" to the suite of Cardinal Carlo Carafa; Pope Pius IV (1559–1565) made him Cardinal-Priest of "San Sisto Vecchio" and sent him to the Council of Trent.
He also served as a legate to Philip II of Spain (1556–1598), being sent by the Pope to investigate the Cardinal of Toledo. It was there that he formed a lasting and close relationship with the Spanish King, which was to become very important in his foreign policy as Pope.
Upon the death of Pope Pius V (1566–1572), the conclave chose Cardinal Boncompagni, who assumed the name of Gregory XIII in homage to the great reforming Pope, Gregory I (590–604), surnamed the Great. It was a very brief conclave, lasting less than 24 hours. Many historians have attributed this to the influence and backing of the Spanish King. Cardinal Borromeo and the cardinals wishing reform accepted Boncompagni's candidature and so supported him in the conclave while the Spanish faction also deemed him acceptable due to his success as a nuncio in Spain.
Gregory XIII's character seemed to be perfect for the needs of the church at the time. Unlike some of his predecessors, he was to lead a faultless personal life, becoming a model for his simplicity of life. Additionally, his legal brilliance and management abilities meant that he was able to respond and deal with major problems quickly and decisively, although not always successfully.
Once in the chair of Saint Peter, Gregory XIII's rather worldly concerns became secondary and he dedicated himself to reform of the Catholic Church. He committed himself to putting into practice the recommendations of the Council of Trent. He allowed no exceptions for cardinals to the rule that bishops must take up residence in their sees, and designated a committee to update the Index of Forbidden Books. He was the patron of a new and greatly improved edition of the "Corpus juris canonici". In a time of considerable centralisation of power, Gregory XIII abolished the Cardinals Consistories, replacing them with Colleges, and appointing specific tasks for these colleges to work on. He was renowned for having a fierce independence; some confidants noted that he neither welcomed interventions nor sought advice. The power of the papacy increased under him, whereas the influence and power of the cardinals substantially decreased.
Also noteworthy is his establishment of the Discalced Carmelites, an offshoot of the Carmelite Order, as a distinct unit or "province" within the former by the decree "Pia consideratione" dated 22 June 1580, ending a period of great difficulty between them and enabling the former to become a significant religious order in the Catholic Church.
A central part of the strategy of Gregory XIII's reform was to apply the recommendations of Trent. He was a liberal patron of the recently formed Society of Jesus throughout Europe, for which he founded many new colleges. The Roman College of the Jesuits grew substantially under his patronage, and became the most important centre of learning in Europe for a time. It is now named the Pontifical Gregorian University. Pope Gregory XIII also founded numerous seminaries for training priests, beginning with the German College at Rome, and put them in the charge of the Jesuits.
In 1575 he gave official status to the Congregation of the Oratory, a community of priests without vows, dedicated to prayer and preaching (founded by Saint Philip Neri). In 1580 he commissioned artists, including Ignazio Danti, to complete works to decorate the Vatican and commissioned The Gallery of Maps.
Also noteworthy during his pontificate as a further means of putting into practice the recommendations of the Council of Trent is the transformation in 1580 of the Dominican studium founded in the 13th century at Rome into the College of St. Thomas, the precursor of the Pontifical University of St. Thomas Aquinas, "Angelicum".
Pope Gregory XIII is best known for commissioning the reform of the calendar, initially authored by the physician and astronomer Aloysius Lilius, with final modifications by the Jesuit priest and astronomer Christopher Clavius. The reason for the reform was that the average length of the year in the Julian calendar was too long – it treated each year as 365 days, 6 hours in length, whereas calculations showed that the actual mean length of a year is slightly less (365 days, 5 hours and 49 minutes). As a result, the date of the vernal equinox had slowly (over the course of 13 centuries) slipped to 10 March, while the computus (calculation) of the date of Easter still followed the traditional date of 21 March.
That was verified by the observations of Clavius, and the new calendar was instituted when Gregory decreed, by the papal bull "Inter gravissimas" of 24 February 1582, that the day after Thursday, 4 October 1582 would be not Friday, 5 October, but Friday, 15 October 1582. The new calendar duly replaced the Julian calendar, in use since 45 BC, and has since come into nearly universal use. Because of Gregory's involvement, the reformed Julian calendar came to be known as the Gregorian calendar.
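The arithmetic behind the reform can be shown directly: the Gregorian rule drops 3 leap days every 400 years, bringing the mean calendar year from the Julian 365.25 days down to 365.2425 days, within about half a minute of the measured 365 days 5 hours 49 minutes. A minimal sketch:

```python
def is_gregorian_leap(year: int) -> bool:
    """Gregorian rule: every 4th year is a leap year, except century
    years not divisible by 400 (so 1700, 1800, 1900 are common years)."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Mean calendar year lengths, in days
julian_mean = 365 + 1 / 4         # 365.25
gregorian_mean = 365 + 97 / 400   # 365.2425 (97 leap years per 400)

# Julian drift against a year of 365 days 5 h 49 min, in days per year
measured_year = 365 + (5 * 60 + 49) / (24 * 60)
drift_per_year = julian_mean - measured_year  # ~0.0076 days (~11 minutes)
```

About eleven minutes per year compounds to roughly ten days over thirteen centuries, matching the ten-day correction decreed in "Inter gravissimas".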
The switchover was bitterly opposed by much of the populace, who feared it was an attempt by landlords to cheat them out of a week and a half's rent. However, the Catholic countries of Spain, Portugal, Poland, and Italy complied. France, some states of the Dutch Republic and various Catholic states in Germany and Switzerland (both countries were religiously split) followed suit within a year or two; Austria and Hungary followed in 1587.
However, more than a century passed before Protestant Europe accepted the new calendar. Denmark, the remaining states of the Dutch Republic, and the Protestant states of the Holy Roman Empire and Switzerland, adopted the Gregorian reform in 1700–01. By that time, the calendar trailed the seasons by 11 days. Great Britain and its American colonies adopted the reformed calendar in 1752, where Wednesday, 2 September 1752 was immediately followed by Thursday, 14 September 1752; they were joined by the last Protestant holdout, Sweden, on 1 March 1753.
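The growing gap between the two calendars explains why later adopters had to drop more days than the original ten. For dates from 1 March of a century year onward, the difference equals the number of Julian leap days the Gregorian rule skips, which a short formula captures (valid for years from 1582 on):

```python
def julian_gregorian_gap(year: int) -> int:
    """Days by which the Julian calendar trails the Gregorian one,
    for dates from 1 March of the given year's century onward."""
    century = year // 100
    return century - century // 4 - 2
```

So the gap was 10 days in 1582, had grown to 11 by Britain's 1752 switch (2 September jumping to 14 September), and has stood at 13 days since 1900.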
The Gregorian calendar was not accepted in eastern Christendom for several hundred years, and then only as the civil calendar.
Though he expressed the conventional fears of the danger from the Turks, Gregory XIII's attentions were more consistently directed to the dangers from the Protestants. He also encouraged the plans of Philip II to dethrone Elizabeth I of England (reigned from 1558–1603), thus helping to develop an atmosphere of subversion and imminent danger among English Protestants, who looked on any Catholic as a potential traitor.
In 1578, to further the plans of exiled English and Irish Catholics such as Nicholas Sanders, William Allen, and James Fitzmaurice FitzGerald, Gregory outfitted adventurer Thomas Stukeley with a ship and an army of 800 men to land in Ireland to aid the Catholics against the Protestant plantations. To his dismay, Stukeley joined his forces with those of King Sebastian of Portugal against Emperor Abdul Malik of Morocco instead.
Another papal expedition sailed to Ireland in 1579 with a mere 50 soldiers under the command of Fitzmaurice, accompanied by Sanders as papal legate. All of the soldiers and sailors on board, as well as the women and children who accompanied them, were beheaded or hanged on landing in Kerry, in the Smerwick Massacre. Gregory's greatest success came in his patronage of colleges and seminaries which he founded on the Continent for the Irish and English, among others.
In 1580, he was persuaded by English Jesuits to moderate or suspend the Bull "Regnans in Excelsis" (1570) which had excommunicated Queen Elizabeth I of England. Catholics were advised to obey the queen outwardly in all civil matters, until such time as a suitable opportunity presented itself for her overthrow.
Pope Gregory XIII had no connection with the plot of Henry, Duke of Guise, and his brother, Charles, Duke of Mayenne, to assassinate Elizabeth I in 1582.
After the St. Bartholomew's Day Massacres of Huguenots in France in 1572, Pope Gregory celebrated a "Te Deum" mass. However, some hold that he was ignorant of the nature of the plot at the time, having been told the Huguenots had tried to take over the government but failed. Three frescoes in the Sala Regia hall of the Vatican depicting the events were painted by Giorgio Vasari, and a commemorative medal was issued with Gregory's portrait and on the obverse a chastising angel, sword in hand and the legend UGONOTTORUM STRAGES ("Overthrow of the Huguenots").
In Rome Gregory XIII built the magnificent Gregorian chapel in the Basilica of St. Peter, and extended the Quirinal Palace in 1580. He also turned the Baths of Diocletian into a granary in 1575.
He appointed his illegitimate son Giacomo, born to his mistress at Bologna before his papacy, castellan of Sant'Angelo and Gonfalonier of the Church; Venice, anxious to please, enrolled him among its nobles. Philip II of Spain appointed him general in his army. Gregory also helped his son to become a powerful feudatary through the acquisition of the Duchy of Sora, on the border between the Papal States and the Kingdom of Naples.
In order to raise funds for these and similar objects, he confiscated a large proportion of the houses and properties throughout the states of the Church. This measure enriched his treasury for a time, but alienated a great body of the nobility and gentry, revived old factions, and created new ones.
The pope canonized four saints during his pontificate and in 1584 beatified his predecessor Pope Gregory VII.
During his pontificate, the pope created 34 cardinals in eight consistories; this included naming his nephew Filippo Boncompagni to the cardinalate in the pope's first consistory in 1572. Gregory XIII also named four of his successors as cardinals all in 1583: Giovanni Battista Castagna (Urban VII), Niccolò Sfondrati (Gregory XIV), Giovanni Antonio Facchinetti (Innocent IX), and Alessandro de' Medici (Leo XI).
The pope suffered from a fever on 5 April 1585 and, though still in ill health, said his usual private Mass on 7 April. He seemed to recover enough to conduct meetings throughout 8–9 April, although it was observed that he did not feel well. A sudden change on 10 April saw him confined to bed with a cold sweat and a weak pulse; he received Extreme Unction moments before he died.
Pankration
Pankration was a sporting event introduced into the Greek Olympic Games in 648 BC; it was an empty-hand submission sport with scarcely any rules. The athletes used boxing and wrestling techniques, but also others, such as kicking and holds, locks and chokes on the ground.
The term comes from the Greek παγκράτιον ("pankration"), literally meaning "all of power", from πᾶν ("pan"), "all", and κράτος ("kratos"), "strength, might, power".
It was known in ancient times for its ferocity and allowance of such tactics as knees to the head and eye gouging.
One ancient account tells of a situation in which the judges were trying to determine the winner of a match. The difficulty lay in the fact that both men had died in the arena from their injuries, making it hard to determine a victor. Eventually, the judges decided the winner was the one who didn't have his eyes gouged out. Over time, however, maneuvers like eye gouging were discouraged to prevent such unpleasant incidents.
In Greek mythology, it was said that the heroes Heracles and Theseus invented pankration as a result of using both wrestling and boxing in their confrontations with opponents. Theseus was said to have utilized his extraordinary pankration skills to defeat the dreaded Minotaur in the Labyrinth. Heracles was said to have subdued the Nemean lion using pankration, and was often depicted in ancient artwork doing that.
In this context, pankration was also referred to as "pammachon" or "pammachion" (πάμμαχον or παμμάχιον), meaning "total combat", from πᾶν-, "pān-", "all-" or "total", and μάχη, "machē", "battle". The term pammachon was older, and later came to be used less than the term pankration.
The mainstream academic view has been that pankration developed in the archaic Greek society of the 7th century BC, whereby, as the need for expression in violent sport increased, pankration filled a niche of "total contest" that neither boxing nor wrestling could. However, some evidence suggests that pankration, in both its sporting form and its combative form, may have been practiced in Greece already from the second millennium BC.
Pankration, as practiced in historical antiquity, was an athletic event that combined techniques of both boxing (pygmē/pygmachia – πυγμή/πυγμαχία) and wrestling (palē – πάλη), as well as additional elements, such as the use of strikes with the legs, to create a broad fighting sport similar to today's mixed martial arts competitions. There is evidence that, although knockouts were common, most pankration competitions were decided on the basis of submission (giving up). Pankratiasts were highly skilled grapplers and were extremely effective in applying a variety of takedowns, chokes and joint locks. In extreme cases a pankration competition could even result in the death of one of the opponents, which was considered a win.
However, pankration was more than just an event in the athletic competitions of the ancient Greek world; it was also part of the arsenal of Greek soldiers – including the famous Spartan hoplites and Alexander the Great's Macedonian phalanx. It is said that the Spartans at their immortal stand at Thermopylae fought with their bare hands and teeth once their swords and spears broke. Herodotus mentions that in the battle of Mycale between the Greeks and the Persians in 479 BC, those of the Greeks who fought best were the Athenians, and the Athenian who fought best was a distinguished pankratiast, Hermolycus, son of Euthynus. Polyaenus describes King Philip II, the father of Alexander the Great, practicing with another pankratiast while his soldiers watched.
The feats of the ancient pankratiasts became legendary in the annals of Greek athletics. Stories abound of past champions who were considered invincible beings. Arrhichion, Dioxippus, Polydamas of Skotoussa and Theogenes (often referred to as Theagenes of Thasos after the first century AD) are among the most highly recognized names. Their accomplishments defying the odds were some of the most inspiring of ancient Greek athletics, and they served as inspiration to the Hellenic world for centuries, as Pausanias, the ancient traveller and writer, indicates when he retells these stories in his narrative of his travels around Greece.
Dioxippus was an Athenian who had won the Olympic Games in 336 BC, and was serving in Alexander the Great's army in its expedition into Asia. As an admired champion, he naturally became part of the circle of Alexander the Great. In that context, he accepted a challenge from one of Alexander's most skilled soldiers named Coragus to fight in front of Alexander and the troops in armed combat. While Coragus fought with weapons and full armour, Dioxippus showed up armed only with a club and defeated Coragus without killing him, making use of his pankration skills. Later, however, Dioxippus was framed for theft, which led him to commit suicide.
In an odd turn of events, a pankration fighter named Arrhichion (Ἀρριχίων) of Phigalia won the pankration competition at the Olympic Games despite being dead. His opponent had locked him in a chokehold and Arrhichion, desperate to loosen it, broke his opponent's toe (some records say his ankle). The opponent nearly passed out from pain and submitted. As the referee raised Arrhichion's hand, it was discovered that he had died from the chokehold. His body was crowned with the olive wreath and returned to Phigalia as a hero.
By the Imperial Period, the Romans had adopted the Greek combat sport (spelled in Latin as "pancratium") into their Games. In 393 A.D., the pankration, along with gladiatorial combat and all pagan festivals, was abolished by edict by the Christian Byzantine Emperor Theodosius I. Pankration itself was an event in the Olympic Games for some 1,400 years.
Pausanias mentions the wrestler Leontiscus (Λεοντίσκος) from Messene. He wrote that his technique of wrestling was similar to the pankration of Sostratus the Sicyonian, because Leontiscus did not know how to throw his opponents, but won by bending their fingers.
There were neither weight divisions nor time limits in pankration competitions. However, there were two or three age groups in the competitions of antiquity. In the Olympic Games specifically there were only two such age groups: men (andres – ἄνδρες) and boys (paides – παῖδες). The pankration event for boys was established at the Olympic Games in 200 BC. In pankration competitions, referees were armed with stout rods or switches to enforce the rules. In fact, there were only two rules regarding combat: no eye gouging or biting. Sparta was the only place where eye gouging and biting were allowed. The contest itself usually continued uninterrupted until one of the combatants submitted, which was often signalled by the submitting contestant raising his index finger. The judges appear, however, to have had the right to stop a contest under certain conditions and award the victory to one of the two athletes; they could also declare the contest a tie.
Pankration competitions were held in tournaments, most being outside of the Olympics. Each tournament began with a ritual which would decide how the tournament would take place. The Grecophone satirist Lucian describes the process in detail:

"A sacred silver urn is brought, in which they have put bean-size lots. On two lots an alpha is inscribed, on two a beta, and on another two a gamma, and so on. If there are more athletes, two lots always have the same letter. Each athlete comes forth, prays to Zeus, puts his hand into the urn and draws out a lot. Following him, the other athletes do the same. Whip bearers are standing next to the athletes, holding their hands and not allowing them to read the letter they have drawn. When everyone has drawn a lot, the alytarch, or one of the Hellanodikai, walks around and looks at the lots of the athletes as they stand in a circle. He then joins the athlete holding the alpha to the other who has drawn the alpha for wrestling or pankration, the one who has the beta to the other with the beta, and the other matching inscribed lots in the same manner."

This process was apparently repeated every round until the finals.
If there was an odd number of competitors, there would be a bye (ἔφεδρος – ephedros "reserve") in every round until the last one. The same athlete could be an ephedros more than once, and this could of course be of great advantage to him as the ephedros would be spared the wear and tear of the rounds imposed on his opponent(s). To win a tournament without being an ephedros in any of the rounds (ἀνέφεδρος – anephedros "non-reserve") was thus an honorable distinction.
There is evidence that the major Games in Greek antiquity easily had four tournament rounds, that is, a field of sixteen athletes. Xanthos mentions the largest number—nine tournament rounds. If these tournament rounds were held in one competition, up to 512 contestants would participate in the tournament, which is difficult to believe for a single contest. Therefore, one can hypothesize that the nine rounds included those in which the athlete participated during regional qualification competitions that were held before the major games. Such preliminary contests were held prior to the major games to determine who would participate in the main event. This makes sense, as the 15–20 athletes competing in the major games could not have been the only available contestants. There is clear evidence of this in Plato, who refers to competitors in the Panhellenic Games, with opponents numbering in the thousands. Moreover, in the first century A.D., the Greco-Jewish philosopher Philo of Alexandria—who was himself probably a practitioner of pankration—makes a statement that could be an allusion to preliminary contests in which an athlete would participate and then collect his strength before coming forward fresh in the major competition.
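The lot-drawing and bracket arithmetic described above can be sketched as a short modern simulation (an illustration only; the function names `draw_pairs` and `max_field` are not historical terms):

```python
import random

def draw_pairs(athletes):
    """Pair athletes at random, as with Lucian's urn of lettered lots:
    two lots carry the same letter, and the two athletes who draw a
    matching letter fight each other. With an odd field, the one
    athlete left unmatched sits out the round as ephedros (reserve)."""
    lots = list(athletes)
    random.shuffle(lots)  # the random draw from the urn
    pairs = [(lots[i], lots[i + 1]) for i in range(0, len(lots) - 1, 2)]
    ephedros = lots[-1] if len(lots) % 2 == 1 else None
    return pairs, ephedros

def max_field(rounds):
    """A single-elimination bracket with n rounds holds at most 2**n
    athletes: four rounds imply a field of 16, nine rounds one of 512."""
    return 2 ** rounds
```

Repeating `draw_pairs` on each round's winners reproduces the tournament structure; because the draw is redone every round, the same athlete can end up as ephedros more than once, just as the ancient sources note.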
The athletes engaged in a pankration competition—i.e., the pankratiasts (παγκρατιαστές)—employed a variety of techniques in order to strike their opponent as well as take him to the ground in order to use a submission technique. When the pankratiasts fought standing, the combat was called "Anō Pankration" ("ἄνω παγκράτιον", "upper Pankration"); and when they took the fight to the ground, that stage of pankration competition was called "katō pankration" (κάτω παγκράτιον "lower pankration"). Some of the techniques that would be applied in anō pankration and katō pankration, respectively, are known to us through depictions on ancient pottery and sculptures, as well as in descriptions in ancient literature. There were also strategies documented in ancient literature that were meant to be used to obtain an advantage over the competitor. For illustration purposes, below are examples of striking and grappling techniques (including examples of counters), as well as strategies and tactics, that have been identified from the ancient sources (visual arts or literature).
The pankratiast faces his opponent with a nearly frontal stance—only slightly turned sideways. This is an intermediate directional positioning, between the wrestler's more frontal positioning and the boxer's more sideways stance and is consistent with the need to preserve both the option of using striking and protecting the center line of the body and the option of applying grappling techniques. Thus, the left side of the body is slightly forward of the right side of the body and the left hand is more forward than the right one. Both hands are held high so that the tips of the fingers are at the level of the hairline or just below the top of the head. The hands are partially open, the fingers are relaxed, and the palms are facing naturally forward, down, and slightly towards each other. The front arm is nearly fully extended but not entirely so; the rear arm is more cambered than the front arm, but more extended than a modern-day boxer's rear arm. The back of the athlete is somewhat rounded, but not as much as a wrestler's would be. The body is only slightly leaning forward.
The weight is virtually all on the back (right) foot with the front (left) foot touching the ground with the ball of the foot. It is a stance in which the athlete is ready at the same time to give a kick with the front leg as well as defend against the opponent's low level kicks by lifting the front knee and blocking. The back leg is bent for stability and power and is facing slightly to the side, to go with the slightly sideways body position. The head and torso are behind the protecting two upper limbs and front leg.
Pankration uses boxing punches and other ancient boxing hand strikes.
Strikes delivered with the legs were an integral part of pankration and one of its most characteristic features. Kicking well was a great advantage to the pankratiast. Epictetus makes a derogatory reference to a compliment one might pay another: "μεγάλα λακτίζεις" ("you kick great"). Moreover, in an accolade to the fighting prowess of the pankratiast Glykon from Pergamon, the athlete is described as "wide foot". The characterization actually comes before the reference to his "unbeatable hands", implying at least as crucial a role for strikes with the feet as with the hands in pankration. That proficiency in kicking could carry the pankratiast to victory is indicated in a sarcastic passage of Galen, where he awards the winning prize in pankration to a donkey because of its excellence in kicking.
The straight kick with the bottom of the foot to the stomach (γαστρίζειν/λάκτισμα εἰς γαστέραν – "gastrizein" or "laktisma eis gasteran", "kicking in the stomach") was apparently a common technique, given the number of depictions of such kicks on vases. This type of kick is mentioned by Lucian.
"Counter": The athlete sidesteps the oncoming kick to the inside of the opponent's leg. He catches and lifts the heel/foot of the planted leg with his rear hand and with the front arm goes under the knee of the kicking leg, hooks it with the nook of his elbow, and lifts while advancing to throw the opponent backward. The athlete executing the counter has to lean forward to avoid hand strikes by the opponent. This counter is shown on a Panathenaic amphora now in Leiden. In another counter, the athlete sidesteps, but now to the outside of the oncoming kick and grasps the inside of the kicking leg from behind the knee with his front hand (overhand grip) and pulls up, which tends to unbalance the opponent so that he falls backward as the athlete advances. The back hand can be used for striking the opponent while he is preoccupied maintaining his balance.
Arm locks can be performed in many different situations using many different techniques.
The athlete is behind the opponent and has him leaning down, with the right knee of the opponent on the ground. The athlete has the opponent's right arm straightened out and extended maximally backward at the shoulder joint. With the opponent's right arm across his own torso, the athlete uses his left hand to keep the pressure on the opponent's right arm by grabbing and pressing down on it just above the wrist. The right hand of the athlete is pressing down at the (side of) the head of the opponent, thus not permitting him to rotate to his right to relieve the pressure on his shoulder. As the opponent could escape by lowering himself closer to the ground and rolling, the athlete steps with his left leg over the left leg of the opponent and wraps his foot around the ankle of the opponent stepping on his instep, while pushing his body weight on the back of the opponent.
In this technique, the position of the bodies is very similar to the one described just above. The athlete executing the technique is standing over his opponent's back, while the latter is down on his right knee. The left leg of the athlete is straddling the left thigh of the opponent—the left knee of the opponent is not on the floor—and is trapping the left foot of the opponent by stepping on it. The athlete uses his left hand to push down on the side/back of the head of the opponent while with his right hand he pulls the opponent's right arm back, against his midsection. This creates an arm bar on the right arm with the pressure now being mostly on the elbow. The fallen opponent cannot relieve it, because his head is being shoved the opposite way by the left hand of the athlete executing the technique.
In this technique, the athlete is again behind his opponent, has the left arm of his opponent trapped, and is pulling back on his right arm. The trapped left arm is bent, with the fingers and palm trapped inside the armpit of the athlete. To trap the left arm, the athlete has pushed (from outside) his own left arm underneath the left elbow of the opponent. The athlete's left hand ends up pressing down on the scapula region of his opponent's back. This position does not permit the opponent to pull out his hand from the athlete's armpit and puts pressure on the left shoulder. The right arm of the athlete is pulling back at the opponent's right wrist (or forearm). In this way, the athlete keeps the right arm of his opponent straightened and tightly pulled against his right hip/lower abdomen area, which results in an arm bar putting pressure on the right elbow. The athlete is in full contact on top of the opponent, with his right leg in front of the right leg of the opponent to block him from escaping by rolling forward.
In executing this choking technique (ἄγχειν – anchein), the athlete grabs the tracheal area (windpipe and "Adam's apple") between his thumb and his four fingers and squeezes. This type of choke can be applied with the athlete being in front or behind his opponent. Regarding the hand grip to be used with this choke, the web area between the thumb and the index finger is to be quite high up the neck and the thumb is bent inward and downward, "reaching" behind the Adam's apple of the opponent. It is unclear if such a grip would have been considered gouging and thus illegal in the Panhellenic Games.
The athlete grabs the throat of the opponent with the four fingers on the outside of the throat and the tip of the thumb pressing in and down the hollow of the throat, putting pressure on the trachea.
The rear naked choke (RNC) is a chokehold in martial arts applied from an opponent's back. Depending on the context, the term may refer to one of two variations of the technique; in both cases, either arm can be used to apply the choke. The term rear naked choke likely originated from the technique in Jujutsu and Judo known as the "Hadaka Jime", or "Naked Strangle". The word "naked" in this context suggests that, unlike other strangulation techniques found in Jujutsu/Judo, this hold does not require the use of a keikogi ("gi") or training uniform.
The choke has two variations: in one version, the attacker's arm encircles the opponent's neck and then grabs his own biceps on the other arm (see below for details); in the second version, the attacker clasps his hands together instead after encircling the opponent's neck. Both variations are potentially deadly.
Counter:
A counter to the choke from behind involves the twisting of one of the fingers of the choking arm. This counter is mentioned by Philostratus. In case the choke was set together with a grapevine body lock, another counter was the one applied against that lock; by causing enough pain to the ankle of the opponent, the latter could give up his choke.
From a reverse waist lock set from the front, and staying with hips close to the opponent, the athlete lifts and rotates his opponent using the strength of his hips and legs (ἀναβαστάσαι εἰς ὕψος – "anabastasai eis hypsos", "high lifting"). Depending on the torque the athlete imparts, the opponent becomes more or less vertically inverted, facing the body of the athlete. If however the reverse waist lock is set from the back of the opponent, then the latter would face away from the athlete in the inverted position.
To finish the attack, the athlete has the option of either dropping his opponent head-first to the ground, or driving him into the ground while retaining the hold. To execute the latter option, the athlete bends one of his legs and goes down on that knee while the other leg remains only partially bent; this is presumably to allow for greater mobility in case the "pile driver" does not work. Another approach emphasizes less putting the opponent in an inverted vertical position and more the throw; it is shown in a sculpture in the metōpē (μετώπη) of the Hephaisteion in Athens, where Theseus is depicted heaving Kerkyōn.
The opponents are facing in opposite directions with the athlete at a higher level, over the back of his opponent. The athlete can get in this position after making a shallow sprawl to counter a tackle attempt. From here the athlete sets a waist lock by encircling, from the back, the torso of the opponent with his arms and securing a "handshake" grip close to the abdomen of the opponent. He then heaves the opponent back and up, using the muscles of his legs and his back, so that the opponent's feet rise in the air and he ends up inverted, perpendicular to the ground, and facing away from the athlete. The throw finishes with a "pile driver" or, alternatively, with a simple release of the opponent so that he falls to the ground.
The athlete passes to the back of his opponent, secures a regular waist lock, lifts and throws/ drops the opponent backwards and sideways. As a result of these moves, the opponent would tend to land on his side or face down. The athlete can follow the opponent to the ground and place himself on his back, where he could strike him or choke him from behind while holding him in the "grapevine" body lock (see above), stretching him face down on the ground. This technique is described by the Roman poet Statius in his account of a match between the hero Tydeus of Thebes and an opponent in the Thebaid. Tydeus is described to have followed this takedown with a choke while applying the "grapevine" body lock on the prone opponent.
As the pankration competitions were held outside and in the afternoon, appropriately positioning one's face "vis-a-vis" the low sun was a major tactical objective. The pankratiast, as well as the boxer, did not want to have to face the sun, as this would partly blind him to the blows of the opponent and make accurate delivery of strikes to specific targets difficult. Theocritus, in his narration of the (boxing) match between Polydeukēs and Amykos, noted that the two opponents struggled a lot, vying to see who would get the sun's rays on his back. In the end, with skill and cunning, Polydeukēs managed so that Amykos' face was struck with sunlight while his own was in the shade.
While this positioning was of paramount importance in boxing, which involved only upright striking (with the eyes facing straight), it was also important in pankration, especially in the beginning of the competition and as long as the athletes remained standing.
The decision to remain standing or go to the ground obviously depended on the relative strengths of the athlete, and differed between "anō" and "katō" pankration. However, there are indications that staying on one's feet was generally considered a positive thing, while touching the knee(s) to the ground or being put to the ground was overall considered disadvantageous. In fact, in antiquity as today, falling to one's knee(s) was a metaphor for coming to a disadvantage and putting oneself at risk of losing the fight, as argued persuasively by Michael B. Poliakoff.
Regarding the choice of attacking into the attack of the opponent versus defending and retreating, there are indications, e.g. from boxing, that it was preferable to attack. Dio Chrysostom notes that retreat under fear tends to result in even greater injuries, while attacking before the opponent strikes is less injurious and could very well end in victory.
As indicated by Plato in his "Laws", an important element of strategy was to understand if the opponent had a weak or untrained side and to force him to operate on that side and generally take advantage of that weakness. For example, if the athlete recognizes that the opponent is strictly right-handed, he could circle away from the right hand of the opponent and towards the left side of the opponent. Moreover, if the opponent is weak in his left-side throws, the athlete could aim to position himself accordingly. Training in ambidexterity was instrumental in both applying this strategy and not falling victim to it.
The basic instruction of pankration techniques was conducted by the paedotribae (παιδοτρίβαι, "physical trainers"), who were in charge of boys' physical education. High level athletes were also trained by special trainers who were called gymnastae (γυμνασταί), some of whom had been successful pankration competitors themselves. There are indications that the methods and techniques used by different athletes varied, i.e., there were different styles. While specific styles taught by different teachers, in the mode of Asian martial arts, cannot be excluded, it is very clear (including in Aristotle's "Nicomachean Ethics") that the objective of a teacher of combat sports was to help each of his athletes to develop his personal style that would fit his strengths and weaknesses.
The preparation of pankratiasts included a very wide variety of methods, most of which would be immediately recognizable by the trainers of modern high level athletes, including competitors in modern mixed martial arts competitions. These methods included among others the periodization of training; a wealth of regimens for the development of strength, speed-strength, speed, stamina, and endurance; specialized training for the different stages of competition (i.e., for anō pankration and katō pankration), and methods for learning and engraining techniques. Among the multitude of the latter were also training tools that appear to be very similar to Asian martial arts Forms or "kata", and were known as "cheironomia" (χειρονομία) and "anapale" (ἀναπάλη). Punching bags ("kōrykos" κώρυκος "leather sack") of different sizes and dummies were used for striking practice as well as for the hardening of the body and limbs. Nutrition, massage, and other recovery techniques were used very actively by pankratiasts.
At the time of the revival of the Olympic Games (1896), pankration was not reinstated as an Olympic event.
"Neo-pankration" (Modern Pankration) was first introduced to the martial arts community by Greek-American combat athlete Jim Arvanitis in 1969 and later exposed worldwide in 1973 when he was featured on the cover of "Black Belt". Arvanitis continually refined his reconstruction with reference to original sources. His efforts are also considered pioneering in what became mixed martial arts (MMA).
The International Olympic Committee (IOC) does not list pankration among Olympic sports. However, through the efforts of Savvidis E. A. Lazaros, founder of modern Pankration Athlima, who established its technical examination programme, its uniform (endyma), the shape of the palaestra and the terminology of Pankration Athlima, the sport was accepted in 2010 by FILA, known today as United World Wrestling, which governs the Olympic wrestling codes, as an associated discipline and a "form of modern Mixed Martial Art". Pankration was first contested at the World Combat Games in 2010. Under UWW the pankration competitions have two styles:
There are also pro tournaments and federations like Modern Fighting Pankration (MFC). These competitions are similar to professional mixed martial arts. Many UFC stars have pankration backgrounds, such as American former UFC champion Demetrious Johnson and the Russians Ali Bagautinov and UFC Lightweight Champion Khabib Nurmagomedov. Johnson's coach, Matt Hume, is the founder and head trainer at AMC Pankration in Kirkland, Washington.
Pancrase, a Japanese MMA organization, is named in reference to pankration.
A few who currently hold rank in America in Pankration: Dave Sixel-9th Dan Red Belt, Sheldon Marr-8th Dan Red/Black Belt, Steve Crawford-8th Dan Red/Black Belt, Craig Pumphrey-7th Dan Red/Black Belt, Ivan Dale-6th Dan Black Belt, Michael Craycraft-3rd Dan Black Belt, Troy McDaniel-3rd Dan Black Belt, Josh Lee-2nd Dan Black Belt, Jason Hatfield-2nd Dan Black Belt, Eric Gregory-1st Dan Black Belt, Kyle Hall-1st Dan Black Belt
Province of Canada
The Province of Canada (or the United Province of Canada or the United Canadas) was a British colony in North America from 1841 to 1867. Its formation reflected recommendations made by John Lambton, 1st Earl of Durham in the Report on the Affairs of British North America following the Rebellions of 1837–1838.
The Act of Union 1840, passed on 23 July 1840 by the British Parliament and proclaimed by the Crown on 10 February 1841, merged the Colonies of Upper Canada and Lower Canada by abolishing their separate parliaments and replacing them with a single parliament with two houses, a Legislative Council as the upper chamber and the Legislative Assembly as the lower chamber. In the aftermath of the Rebellions of 1837–1838, unification of the two Canadas was driven by two factors. Firstly, Upper Canada was near bankruptcy because it lacked stable tax revenues, and needed the resources of the more populous Lower Canada to fund its internal transportation improvements. Secondly, unification was an attempt to swamp the French vote by giving each of the former provinces the same number of parliamentary seats, despite the larger population of Lower Canada.
Although Durham's report had called for the Union of the Canadas and for responsible government (a government accountable to an independent local legislature), only the first of the two recommendations was implemented in 1841. For the first seven years, the government was led by an appointed governor general accountable only to the British Crown and the Queen's Ministers. Responsible government was not to be achieved until the second LaFontaine–Baldwin ministry in 1849, when Governor General James Bruce, 8th Earl of Elgin agreed to request a cabinet be formed on the basis of party, effectively making the elected premier the head of the government and reducing the Governor General to a more symbolic role.
The Province of Canada ceased to exist at Canadian Confederation on 1 July 1867, when it was divided into the Canadian provinces of Ontario and Quebec. Ontario included the area occupied by the pre-1841 British colony of Upper Canada, while Quebec included the area occupied by the pre-1841 British colony of Lower Canada (which had included Labrador until 1809, when Labrador was transferred to the British colony of Newfoundland). Upper Canada was primarily English-speaking, whereas Lower Canada was primarily French-speaking.
The Province of Canada was divided into two parts: Canada East and Canada West.
Canada East was what became of the former colony of Lower Canada after being united into the Province of Canada. It became the province of Quebec after Confederation.
Canada West was what became of the former colony of Upper Canada after being united into the Province of Canada. It became the province of Ontario after Confederation.
The location of the capital city of the Province of Canada changed six times in its 26-year history. The first capital was in Kingston (1841–1844). The capital moved to Montreal (1844–1849) until rioters, spurred by a series of incendiary articles published in "The Gazette", protested against the Rebellion Losses Bill and burned down Montreal's parliament buildings. It then moved to Toronto (1849–1852). It moved to Quebec City from 1852 to 1856, then Toronto for one year (1858) before returning to Quebec City from 1859 to 1866. In 1857, Queen Victoria chose Ottawa as the permanent capital of the Province of Canada, initiating construction of Canada's first parliament buildings, on Parliament Hill. The first stage of this construction was completed in 1865, just in time to host the final session of the last parliament of the Province of Canada before Confederation.
The Governor General remained the head of the civil administration of the colony, appointed by the British government, and responsible to it, not to the local legislature. He was aided by the Executive Council and the Legislative Council. The Executive Council aided in administration, and the Legislative Council reviewed legislation produced by the elected Legislative Assembly.
Sydenham came from a wealthy family of timber merchants and was an expert in finance, having served on the English Board of Trade, which regulated banking (including colonial banking). He was promised a barony if he could successfully implement the union of the Canadas and introduce a new form of municipal government, the District Council. The aim of both exercises in state-building was to strengthen the power of the Governor General, to minimise the impact of the numerically superior French vote, and to build a "middle party" that answered to him rather than to the Family Compact or the Reformers. Sydenham was a Whig who believed in rational government, not "responsible government". To implement his plan, he used widespread electoral violence through the Orange Order. His efforts to prevent the election of Louis LaFontaine, the leader of the French reformers, were foiled by David Willson, the leader of the Children of Peace, who convinced the electors of the 4th Riding of York to transcend linguistic prejudice and elect LaFontaine in an English-speaking riding in Canada West.
Bagot was appointed after the unexpected death of Thomson, with the explicit instructions to resist calls for responsible government. He arrived in the capital, Kingston, to find that Thomson's "middle party" had become polarised and he therefore could not form an executive. Even the Tories informed Bagot he could not form a cabinet without including LaFontaine and the French Party. LaFontaine demanded four cabinet seats, including one for Robert Baldwin. Bagot became severely ill thereafter, and Baldwin and Lafontaine became the first real premiers of the Province of Canada. However, to take office as ministers, the two had to run for re-election. While LaFontaine was easily re-elected in 4th York, Baldwin lost his seat in Hastings as a result of Orange Order violence. It was now that the pact between the two men was completely solidified, as LaFontaine arranged for Baldwin to run in Rimouski, Canada East. This was the union of the Canadas they sought, where LaFontaine overcame linguistic prejudice to gain a seat in English Canada, and Baldwin obtained his seat in French Canada.
The Baldwin–LaFontaine ministry barely lasted six months before Governor Bagot also died in March 1843. He was replaced by Charles Metcalfe, whose instructions were to check the "radical" reform government. Metcalfe reverted to the Thomson system of strong central autocratic rule and began appointing his own supporters to patronage positions without the approval of the joint premiers, Baldwin and LaFontaine. They resigned in November 1843, beginning a constitutional crisis that would last a year. Metcalfe refused to recall the legislature, to demonstrate its irrelevance: he could rule without it. This year-long crisis, in which the legislature was prorogued, "was the final signpost on Upper Canada's conceptual road to democracy. Lacking the scale of the American Revolution, it nonetheless forced a comparable articulation and rethinking of the basics of political dialogue in the province." In the ensuing election, however, the Reformers did not win a majority and thus were not called to form another ministry. Responsible government would be delayed until after 1848.
Cathcart had been a staff officer under Wellington in the Napoleonic Wars and rose to become commander of British forces in North America from June 1845 to May 1847. He was also appointed Administrator and then Governor General for the same period, uniting for the first time the highest civil and military offices. The appointment of a military officer as Governor General was due to heightened tensions with the United States over the Oregon boundary dispute. Cathcart was deeply interested in the natural sciences but ignorant of constitutional practice, and hence an unusual choice for Governor General. He refused to become involved in the day-to-day government of the conservative ministry of William Draper, thereby indirectly emphasising the need for responsible government. His primary focus was on redrafting the Militia Act of 1846. The signing of the Oregon Boundary Treaty in 1846 made him dispensable.
Elgin's second wife, Lady Mary Louisa Lambton, was the daughter of Lord Durham and niece of Lord Grey, making him an ideal compromise figure to introduce responsible government. On his arrival, the Reform Party won a decisive victory at the polls. Elgin invited LaFontaine to form the new government, the first time a Governor General requested cabinet formation on the basis of party. The party character of the ministry meant that the elected premier – and no longer the governor – would be the head of the government. The Governor General would become a more symbolic figure. The elected Premier in the Legislative Assembly would now become responsible for local administration and legislation. It also deprived the Governor of patronage appointments to the civil service, which had been the basis of Metcalfe's policy. The test of responsible government came in 1849, when the Baldwin–Lafontaine government passed the Rebellion Losses Bill, compensating French Canadians for losses suffered during the Rebellions of 1837. Lord Elgin granted royal assent to the bill despite heated Tory opposition and his own personal misgivings, sparking riots in Montreal, during which Elgin himself was assaulted by an English-speaking Orange Order mob and the Parliament buildings were burned down.
The appointment of Edmund Walker Head (a cousin of Francis Bond Head, whose inept governance of Upper Canada led to the Rebellion of 1837) is ironic. Some have argued that the Colonial Office had meant to appoint Walker Head as Lieutenant Governor of Upper Canada in 1836. The difference would have meant little: both men were Assistant Poor Law Commissioners at the time, and Walker Head's appointment in Wales led to the Chartist Newport Rising there in 1839. It was under Head that true political party government was introduced, with the Liberal-Conservative Party of John A. Macdonald and George-Étienne Cartier in 1856. It was during their ministry that the first organised moves toward Canadian Confederation took place.
It was under Monck's governorship that the Great Coalition of all of the political parties of the two Canadas occurred in 1864. The Great Coalition was formed to end the political deadlock between predominantly French-speaking Canada East and predominantly English-speaking Canada West. The deadlock resulted from the requirement of a "double majority" to pass laws in the Legislative Assembly (i.e., a majority in both the Canada East and Canada West sections of the assembly). The removal of the deadlock resulted in three conferences that led to confederation.
Thomson reformed the Executive Councils of Upper and Lower Canada by introducing a "President of the Committees of Council" to act as a chief executive officer for the Council and chair of the various committees. The first was Robert Baldwin Sullivan. Thomson also systematically organised the civil service into departments, the heads of which sat on the Executive Council. A further innovation was to demand that every Head of Department seek election in the Legislative Assembly.
The Legislative Council of the Province of Canada was the upper house. The 24 legislative councillors were originally appointed. In 1856, a bill was passed to replace the appointed members with elected ones. Members were to be elected from 24 divisions in each of Canada East and Canada West, with twelve members elected every two years from 1856 to 1862.
Canada West, with its 450,000 inhabitants, was represented by 42 seats in the Legislative Assembly, the same number as the more populous Canada East, with 650,000 inhabitants.
The Legislature's effectiveness was further hampered by the requirement of a "double majority" where a majority of votes for the passage of a bill had to be obtained from the members of "both" Canada East and West.
Each administration was led by two men, one from each half of the province. Officially, one of them at any given time had the title of "Premier", while the other had the title of "Deputy".
Municipal government in Upper Canada was under the control of appointed magistrates who sat in Courts of Quarter Sessions to administer the law within a District. A few cities, such as Toronto, were incorporated by special acts of the legislature. Governor Thomson, 1st Baron Sydenham, spearheaded the passage of the District Councils Act, which transferred municipal government to District Councils. His bill allowed for two elected councillors from each township, but the warden, clerk and treasurer were to be appointed by the government, allowing for strong administrative control and continued government patronage appointments. Sydenham's bill reflected his larger concern to limit popular participation under the tutelage of a strong executive. The Councils were reformed by the Baldwin Act in 1849, which made municipal government truly democratic rather than an extension of the central control of the Crown. It delegated authority to municipal governments so they could raise taxes and enact by-laws. It also established a hierarchy of types of municipal government, starting with cities at the top and continuing down through towns, villages and finally townships. This system was to prevail for the next 150 years.
During the year-long constitutional crisis in 1843–44, when Metcalfe prorogued Parliament to demonstrate its irrelevance, Baldwin established a "Reform Association" in February 1844 to unite the Reform movement in Canada West and to explain their understanding of responsible government. Twenty-two branches were established. A grand meeting of all branches of the Reform Association was held in the Second Meeting House of the Children of Peace in Sharon. Over three thousand people attended this rally for Baldwin. The Association was not, however, a true political party, and individual members voted independently.
The Parti rouge (alternatively known as the Parti démocratique) was formed in Canada East around 1848 by radical French Canadians inspired by the ideas of Louis-Joseph Papineau, the Institut canadien de Montréal, and the reformist movement led by the Parti patriote of the 1830s. The reformist "rouges" did not believe that the 1840 Act of Union had truly granted a responsible government to the former Upper and Lower Canada. They advocated major democratic reforms, republicanism, and the separation of church and state. In 1858, the elected "rouges" allied with the Clear Grits. This resulted in the shortest-lived government in Canadian history, falling in less than a day.
The Clear Grits were the inheritors of William Lyon Mackenzie's Reform movement of the 1830s. Their support was concentrated among farmers in southwestern Canada West, who were frustrated and disillusioned by the lack of democratic enthusiasm of the 1849 Reform government of Robert Baldwin and Louis-Hippolyte Lafontaine. The Clear Grits advocated universal male suffrage, representation by population, democratic institutions, reductions in government expenditure, abolition of the Clergy reserves, voluntarism, and free trade with the United States. Their platform was similar to that of the British Chartists. The Clear Grits and the Parti rouge evolved into the Liberal Party of Canada.
The Parti bleu was a moderate political group in Canada East that emerged in 1854. It was based on the moderate reformist views of Louis-Hippolyte Lafontaine.
The Liberal-Conservative Party emerged from a coalition government in 1854 in which moderate Reformers and Conservatives from Canada West joined with "bleus" from Canada East under the dual prime-ministership of Allan MacNab and A.-N. Morin. The new ministry was committed to secularising the Clergy reserves in Canada West and to abolishing seigneurial tenure in Canada East. Over time, the Liberal-Conservatives evolved into the Conservative party.
No provision for responsible government was included in the Act of Union 1840. Early Governors of the province were closely involved in political affairs, maintaining a right to make Executive Council and other appointments without the input of the legislative assembly.
However, in 1848 the Earl of Elgin, the then Governor General, appointed a Cabinet nominated by the majority party of the Legislative Assembly, the Baldwin–Lafontaine coalition that had won the election in January. Lord Elgin upheld the principles of responsible government by assenting to the Rebellion Losses Bill, which was highly unpopular with some English-speaking Loyalists who favoured imperial over majority rule.
As Canada East and Canada West each held 42 seats in the Legislative Assembly, there was a legislative deadlock between English (mainly from Canada West) and French (mainly from Canada East). The French-speaking population, which initially formed the majority of the province, demanded "rep-by-pop" (representation by population), which the Anglophones opposed.
The granting of responsible government to the colony is typically attributed to reforms in 1848 (principally the effective transfer of control over patronage from the Governor to the elected ministry). These reforms resulted in the appointment of the second Baldwin–Lafontaine government that quickly removed many of the disabilities on French-Canadian political participation in the colony.
Once the English population, rapidly growing through immigration, exceeded the French, the English demanded representation-by-population. In the end, the legislative deadlock between English and French led to a movement for a federal union which resulted in the broader Canadian Confederation in 1867.
In "The Liberal Order Framework: A Prospectus for a Reconnaissance of Canadian History" McKay argues that "the category 'Canada' should henceforth denote a historically specific project of rule, rather than either an essence we must defend or an empty homogeneous space we must possess. Canada-as-project can be analyzed as the implantation and expansion over a heterogeneous terrain of a certain politico-economic logic—to wit, liberalism." The liberalism of which McKay writes is not that of a specific political party, but of certain practices of state building which prioritise property, first of all, and the individual.
The Baldwin Act, also known as the Municipal Corporations Act, replaced the local government system based on district councils in Canada West by government at the county level. It also granted more autonomy to townships, villages, towns and cities.
In 1849, King's College was renamed the University of Toronto and the school's ties with the Church of England were severed.
The Canadian–American Reciprocity Treaty of 1854, also known as the Elgin–Marcy Treaty, was a trade treaty between the United Province of Canada and the United States. It covered raw materials and was in effect from 1854 to 1865. It represented a move toward free trade.
Education in Canada West was regulated by the province through the General Board of Education from 1846 until 1850, when it was replaced by the Department of Public Instruction, which operated until 1876.
Among its accomplishments, the United Province of Canada built the Grand Trunk Railway, improved the educational system in Canada West under Egerton Ryerson, reinstated French as an official language of the legislature and the courts, codified the Civil Code of Lower Canada in 1866, and abolished the seigneurial system in Canada East.
Exploration of Western Canada and Rupert's Land with a view to annexation and settlement was a priority of Canada West politicians in the 1850s, leading to the Palliser Expedition and to the Red River Expedition of Henry Youle Hind, George Gladman and Simon James Dawson.
Polish Corridor
The Polish Corridor, also known as the Danzig Corridor, Corridor to the Sea or Gdańsk Corridor, was a territory located in the region of Pomerelia (Pomeranian Voivodeship, eastern Pomerania, formerly part of West Prussia), which provided the Second Republic of Poland (1920–1939) with access to the Baltic Sea, thus dividing the bulk of Germany (the Weimar Republic) from the province of East Prussia. The Free City of Danzig (now the Polish city of Gdańsk) was separate from both Poland and Germany. A similar territory, also occasionally referred to as a corridor, had been connected to the Polish Crown as part of Royal Prussia during the period 1466–1772.
According to the German historian Hartmut Boockmann, the term "Corridor" was first used by Polish politicians, while the Polish historian Grzegorz Lukomski writes that the word was coined by German nationalist propaganda of the 1920s. Internationally, the term appeared in English as early as March 1919, and whatever its origins, it became widespread in English usage.
The equivalent German term is "Polnischer Korridor". Polish names include "korytarz polski" ("Polish corridor") and "korytarz gdański" ("Gdańsk corridor"); however, reference to the region as a corridor came to be regarded as offensive by interwar Polish diplomats. Among the harshest critics of the term "corridor" was Polish Foreign Minister Józef Beck, who said in his May 5, 1939 speech in the Sejm (the Polish parliament): "I am insisting that the term "Pomeranian Voivodeship" should be used. The word "corridor" is an artificial idea, as this land has been Polish for centuries, with a small percentage of German settlers". Poles would commonly refer to the region as "Pomorze Gdańskie" ("Gdańsk Pomerania", Pomerelia) or simply "Pomorze" ("Pomerania"), or as "województwo pomorskie" ("Pomeranian Voivodeship"), which was the administrative name for the region.
In the 10th century, Pomerelia was settled by Slavic Pomeranians, ancestors of the Kashubians, who were subdued by Boleslaw I of Poland. In the 11th century, they created an independent duchy. In 1116/1121, Pomerelia was again conquered by Poland. In 1138, following the death of Duke Bolesław III, Poland was fragmented into several semi-independent principalities. The Samborides, "principes" in Pomerelia, gradually evolved into independent dukes, who ruled the duchy until 1294. Before Pomerelia regained independence in 1227, its dukes were vassals of Poland and Denmark. From 1308–1309, following succession wars between Poland and Brandenburg, Pomerelia was held by the monastic state of the Teutonic Knights in Prussia. In 1466, with the Second Peace of Thorn, Pomerelia became part of the Kingdom of Poland (later the Polish–Lithuanian Commonwealth) as part of autonomous Royal Prussia. After the First Partition of Poland in 1772 it was annexed by the Kingdom of Prussia, named West Prussia, and became a constituent part of the new German Empire in 1871. Thus the Polish Corridor was not an entirely new creation: the territory assigned to Poland had been an integral part of Poland prior to 1772, but with a large degree of autonomy.
Perhaps the earliest census data on ethnic or national structure of West Prussia (including areas which later became the Polish Corridor) is from 1819.
Karl Andree, "Polen: in geographischer, geschichtlicher und culturhistorischer Hinsicht" (Leipzig, 1831), gives the total population of West Prussia as 700,000 inhabitants: 50% Poles (350,000), 47% Germans (330,000) and 3% Jews (20,000).
Data from the 19th and early 20th centuries show ethnic changes in four "core" counties of the Corridor: Puck and Wejherowo, directly on the Baltic Sea coast, and Kartuzy and Kościerzyna, between the Province of Pomerania (Provinz Pommern) and the Free City of Danzig.
After the First World War, Poland was to be re-established as an independent state. Because a Polish state had not existed since the Congress of Vienna, the future republic's territory had to be defined.
Giving Poland access to the sea was one of the guarantees proposed by United States President Woodrow Wilson in his Fourteen Points of 1918. The thirteenth of Wilson's points read: "An independent Polish state should be erected which should include the territories inhabited by indisputably Polish populations, which should be assured a free and secure access to the sea, and whose political and economic independence and territorial integrity should be guaranteed by international covenant."
The following arguments were behind the creation of the corridor:
The ethnic situation was one of the reasons for returning the area to restored Poland. The majority of the population in the area was Polish. As the Polish commission report to the Allied Supreme Council noted on 12 March 1919: "Finally the fact must be recognised that 600,000 Poles in West Prussia would under any alternative plan remain under German rule". Also, as David Hunter Miller of President Woodrow Wilson's group of experts and academics (known as The Inquiry) noted in his diary from the Paris Peace Conference: "If Poland does not thus secure access to the sea, 600,000 Poles in West Prussia will remain under German rule and 20,000,000 Poles in Poland proper will probably have but a hampered and precarious commercial outlet". The Prussian census of 1910 showed that there were 528,000 Poles (including West Slavic Kashubians, who had supported the Polish national lists in German elections) in the region, compared with 385,000 Germans (including troops and officials stationed in the area). The province of West Prussia as a whole had between 36% and 43% ethnic Poles in 1910, depending on the source (the lower figure is based directly on the German 1910 census, while the higher is based on calculations according to which a large part of those counted as Catholic Germans in the official census in fact identified as Poles). The Poles did not want the Polish population to remain under the control of the German state, which had in the past treated the Polish population and other minorities as second-class citizens and pursued Germanization.
As Professor Lewis Bernstein Namier (1888–1960) born to Jewish parents in Lublin Governorate (Russian Empire, former Congress Poland) and later a British citizen, a former member of the British Intelligence Bureau throughout World War I and the British delegation at the Versailles conference, known for his anti-Polish and anti-German attitude, wrote in the "Manchester Guardian" on November 7, 1933: "The Poles are the Nation of the Vistula, and their settlements extend from the sources of the river to its estuary. … It is only fair that the claim of the river-basin should prevail against that of the seaboard."
The Poles held the view that without direct access to the Baltic Sea, Poland's economic independence would be illusory. Around 60.5% of Polish import trade and 55.1% of exports went through the area. The report of the Polish Commission presented to the Allied Supreme Council said:
The United Kingdom eventually accepted this argument: the suppression of the Polish Corridor would have abolished Poland's economic ability to resist dependence on Germany. Lewis Bernstein Namier, Professor of Modern History at the University of Manchester, known both for his Germanophobia and for his anti-Polish attitude directed against what he defined as the "aggressive, antisemitic and warmongering imperialist" part of Poland, made the same argument in a newspaper article in 1933.
By 1938, 77.7% of Polish exports left either through Gdańsk (31.6%) or the newly built port of Gdynia (46.1%).
David Hunter Miller noted in his diary from the Paris Peace Conference that the problem of Polish access to the sea was very difficult: leaving the whole of Pomerelia under German control meant cutting off millions of Poles from their commercial outlet and leaving several hundred thousand Poles under German rule, while granting such access meant cutting off East Prussia from the rest of Germany. The Inquiry recommended that both the Corridor and Danzig be ceded directly to Poland.
The report stated: "It is believed that the lesser of these evils is preferable, and that the Corridor and Danzig should [both] be ceded to Poland, as shown on map 6. East Prussia, though territorially cut off from the rest of Germany, could easily be assured railroad transit across the Polish corridor (a simple matter as compared with assuring port facilities to Poland), and has, in addition, excellent communication via Königsberg and the Baltic Sea. In either case a people is asked to entrust large interests to the League of Nations. In the case of Poland they are vital interests; in the case of Germany, aside from Prussian sentiment, they are quite secondary."
In the end, The Inquiry's recommendations were implemented only partially: most of West Prussia was given to Poland, but Danzig became a Free City.
During World War I, the Central Powers had forced the Imperial Russian troops out of Congress Poland and Galicia, as manifested in the Treaty of Brest-Litovsk on 3 March 1918. Following the military defeat of Austria-Hungary, an independent Polish republic was declared in Western Galicia on 3 November 1918, the same day Austria signed the armistice. The collapse of Imperial Germany's Western Front, and the subsequent withdrawal of her remaining occupation forces after the Armistice of Compiègne on 11 November, allowed the republic led by Roman Dmowski and Józef Piłsudski to seize control over the former Congress Polish areas. Also in November, the revolution in Germany forced the Kaiser's abdication and gave way to the establishment of the Weimar Republic. Starting in December, the Polish-Ukrainian War expanded the Polish republic's territory to include Volhynia and parts of Eastern Galicia, while at the same time the German Province of Posen (where even according to the German-made 1910 census 61.5% of the population was Polish) was severed by the Greater Poland uprising, which succeeded in attaching most of the province's territory to Poland by January 1919. This led Weimar's Otto Landsberg and Rudolf Breitscheid to call for an armed force to secure Germany's remaining eastern territories (some of which contained significant Polish minorities, primarily in the former Prussian partition territories). The call was answered by the minister of defense Gustav Noske, who decreed support for raising and deploying volunteer "Grenzschutz" forces to secure East Prussia, Silesia and the Netze District.
On 18 January 1919, the Paris peace conference opened, resulting in the draft Treaty of Versailles of 28 June 1919. Articles 27 and 28 of the treaty defined the territorial shape of the corridor, while articles 89 to 93 ruled on transit, citizenship and property issues. Per the terms of the treaty, which came into effect on 20 January 1920, the corridor was established as Poland's access to the Baltic Sea out of some 70% of the dissolved province of West Prussia, consisting of a small part of Pomerania with around 140 km of coastline including the Hel Peninsula (69 km without it).
The primarily German-speaking seaport of Danzig (Gdańsk), controlling the estuary of the main Polish waterway, the Vistula river, became the Free City of Danzig and was placed under the protection of the League of Nations without a plebiscite. After the dock workers of Danzig harbour went on strike during the Polish–Soviet War, refusing to unload ammunition, the Polish Government decided to build an ammunition depot at Westerplatte, and a seaport at Gdynia in the territory of the Corridor, connected to the Upper Silesian industrial centers by the newly constructed Polish Coal Trunk Line railways.
The German author Christian Raitz von Frentz writes that after First World War ended, the Polish government tried to reverse the systematic Germanization from former decades. Frederick the Great (King in/of Prussia from 1740 to 1786) settled around 300,000 colonists in the eastern provinces of Prussia and aimed at a removal of the Polish nobility, which he treated with contempt. Frederick also described Poles as "slovenly Polish trash" and compared them to the Iroquois. On the other hand, he encouraged administrators and teachers who could speak both German and Polish.
Prussia pursued a second colonization aimed at Germanisation after 1832, passing laws in the late 19th century aimed at the Germanisation of the provinces of Posen and West Prussia. The Prussian Settlement Commission established a further 154,000 colonists, including locals, in the provinces of Posen and West Prussia before World War I. Military personnel were included in the population census, and a number of German civil servants and merchants were introduced to the area, which influenced the population statistics.
According to Richard Blanke, an American historian of German descent, 421,029 Germans lived in the area in 1910, making up 42.5% of the population. Blanke has been criticised by Christian Raitz von Frentz, who classified his book as part of a series on the subject with an anti-Polish bias; the Polish professor A. Cienciala has likewise described Blanke's views as sympathetic to Germany. In addition to the military personnel included in the population census, a number of German civil servants and merchants were introduced to the area, which influenced the population mix, according to Andrzej Chwalba. By 1921 the proportion of Germans had dropped to 18.8% (175,771). Over the next decade, the German population decreased by another 70,000, to a share of 9.6%.
The German political scientist Stefan Wolff, Professor at the University of Birmingham, says that the actions of Polish state officials after the corridor's establishment followed "a course of assimilation and oppression". As a result, a large number of Germans left Poland after 1918. Estimates vary: according to Wolff, 800,000 Germans had left Poland by 1923; according to Gotthold Rhode, 575,000 left the former province of Posen and the corridor after the war; according to Hermann Rauschning, 800,000 Germans had left between 1918 and 1926; the contemporary author Alfons Krysinski estimated 800,000 plus 100,000 from East Upper Silesia; contemporary German statistics say 592,000 Germans had left by 1921; and other Polish scholars say that up to a million Germans left. The Polish author Władysław Kulski says that a number of them were civil servants with no roots in the province, putting the figure at around 378,000, which is to a lesser degree confirmed by some German sources such as Hermann Rauschning.
Lewis Bernstein Namier raised the question of whether many of the Germans who left were actually settlers without roots in the area, remarking in 1933: "a question must be raised how many of those Germans had originally been planted artificially in that country by the Prussian Government."
The above-mentioned Richard Blanke, in his book "Orphans of Versailles", gives several reasons for the exodus of the German population.
Blanke says that official encouragement by the Polish state played a secondary role in the German exodus. Christian Raitz von Frentz notes "that many of the repressive measures were taken by local and regional Polish authorities in defiance of Acts of Parliament and government decrees, which more often than not conformed with the minorities treaty, the Geneva Convention and their interpretation by the League council - though it is also true that some of the central authorities tacitly tolerated local initiatives against the German population." While there were demonstrations, protests and occasional violence against Germans, these occurred at a local level, and officials were quick to point out that they were a backlash against former discrimination against Poles. Further demonstrations occurred when Germans showed disloyalty during the Polish-Bolshevik war, as the Red Army announced a return to the pre-war borders of 1914. Despite popular pressure and occasional local actions, perhaps as many as 80% of Germans emigrated more or less voluntarily.
Helmut Lippelt writes that Germany used the existence of the German minority in Poland for political ends and as part of its revisionist demands, which resulted in Polish countermeasures. Polish Prime Minister Władysław Sikorski stated in 1923 that the de-Germanization of these territories had to be completed by the vigorous and quick liquidation of property and the eviction of German "Optanten" (Germans who refused to accept Polish citizenship and, under the Versailles Treaty, were to leave Poland), so that German nationalists would realise that their view of the Polish western border as temporary was wrong.
To Lippelt this was partly a reaction to the German claims and partly an expression of Polish nationalism, which pressed for the exclusion of the German element. In turn, anti-Polish prejudice fueled German policy.
In the period leading up to the East Prussian plebiscite in July 1920, the Polish authorities tried to prevent traffic through the Corridor, interrupting postal, telegraphic and telephone communication. On March 10, 1920, the British representative on the Marienwerder Plebiscite Commission, H.D. Beaumont, wrote of numerous continuing difficulties being made by Polish officials and added "as a result, the ill-will between Polish and German nationalities and the irritation due to Polish intolerance towards the German inhabitants in the Corridor (now under their rule), far worse than any former German intolerance of the Poles, are growing to such an extent that it is impossible to believe the present settlement (borders) can have any chance of being permanent... It can confidently be asserted that not even the most attractive economic advantages would induce any German to vote Polish. If the frontier is unsatisfactory now, it will be far more so when it has to be drawn on this side (of the river) with no natural line to follow, cutting off Germany from the river bank and within a mile or so of Marienwerder, which is certain to vote German. I know of no similar frontier created by any treaty."
The German Ministry for Transport established the "Seedienst Ostpreußen" ("Sea Service East Prussia") in 1922 to provide a ferry connection to East Prussia, now a German exclave, so that it would be less dependent on transit through Polish territory.
Connections by train were also possible by "sealing" the carriages ("Korridorzug"), i.e. passengers were not forced to apply for an official Polish visa in their passport; however, the rigorous inspections by the Polish authorities before and after the sealing were strongly feared by the passengers.
In May 1925 a train passing through the Corridor on its way to East Prussia crashed because the spikes had been removed from the tracks for a short distance and the fishplates unbolted. Twenty-five people, including 12 women and 2 children, were killed, and some 30 others were injured.
According to Polish historian Andrzej Chwalba, during the rule of the Kingdom of Prussia and the German Empire various means were used to increase the amount of land owned by Germans at the expense of the Polish population. In Prussia, the Polish nobility had its estates confiscated after the Partitions and handed over to German nobility. The same applied to Catholic monasteries. Later, the German Empire bought up land in an attempt to prevent the restoration of a Polish majority in Polish-inhabited areas in its eastern provinces.
Christian Raitz von Frentz notes that measures aimed at reversing past Germanization included the liquidation of farms settled by the German government during the war under the 1908 law.
In 1925 the Polish government enacted a land reform program with the aim of expropriating landowners. While only 39% of the agricultural land in the Corridor was owned by Germans, the first annual list of properties to be reformed included 10,800 hectares from 32 German landowners and 950 hectares from seven Poles. The voivode of Pomorze, Wiktor Lamot, stressed that "the part of Pomorze through which the so-called corridor runs must be cleansed of larger German holdings". The coastal region "must be settled with a nationally conscious Polish population... Estates belonging to Germans must be taxed more heavily to encourage them voluntarily to turn over land for settlement. Border counties... particularly a strip of land ten kilometers wide, must be settled with Poles. German estates that lie here must be reduced without concern for their economic value or the views of their owners".
Prominent politicians and members of the German minority were the first to be included on the land reform list and to have their property expropriated.
The creation of the corridor aroused great resentment in Germany. All post-war German Weimar governments refused to recognize the eastern borders agreed at Versailles and declined to complement Germany's acknowledgment of its western borders in the 1925 Treaty of Locarno with a similar declaration regarding its eastern borders.
Institutions in Weimar Germany supported and encouraged German minority organizations in Poland, in part radicalized by the Polish policy towards them, in filing close to 10,000 complaints about violations of minority rights to the League of Nations.
In 1931 Poland declared its commitment to peace, but pointed out that any attempt to revise its borders would mean war. Additionally, in conversation with U.S. President Herbert Hoover, Polish delegate Filipowicz noted that continued provocations by Germany could tempt the Polish side to invade in order to settle the issue once and for all.
The Nazi Party, led by Adolf Hitler, took power in Germany in 1933. Hitler at first ostentatiously pursued a policy of rapprochement with Poland, culminating in the ten-year Polish-German Non-Aggression Pact of 1934. In the years that followed, Germany placed an emphasis on rearmament, as did Poland and other European powers. Despite this, the Nazis were able to achieve their immediate goals without provoking armed conflict: first, in March 1938, Nazi Germany annexed Austria, and at the beginning of October, following the Munich Agreement, the Sudetenland; alongside Germany, Poland also moved against Czechoslovakia and annexed Zaolzie (1 October 1938). Germany tried to get Poland to join the Anti-Comintern Pact, but Poland refused, as the alliance was rapidly becoming a sphere of influence of an increasingly powerful Germany.
Following negotiations with Hitler on the Munich Agreement, British Prime Minister Neville Chamberlain reported that, "He told me privately, and last night he repeated publicly, that after this Sudeten German question is settled, that is the end of Germany's territorial claims in Europe". Almost immediately following the agreement, however, Hitler reneged on it. The Nazis increased their requests for the incorporation of the Free City of Danzig into the Reich, citing the "protection" of the German majority as a motive.
In November 1938, Danzig's district administrator, Albert Forster, reported to the League of Nations that Hitler had told him Polish frontiers would be guaranteed if the Poles were "reasonable like the Czechs." German State Secretary Ernst von Weizsäcker reaffirmed this alleged guarantee in December 1938.
The situation regarding the Free City and the Polish Corridor created a number of difficulties for German and Polish customs authorities. The Germans requested the construction of an extraterritorial "Reichsautobahn" freeway (to complete the "Reichsautobahn Berlin-Königsberg") and a railway through the Polish Corridor, effectively annexing Polish territory and connecting East Prussia to Danzig and Germany proper, while cutting Poland off from the sea and its main trade route. In return, Germany offered to extend the non-aggression pact for 25 years.
This seemed to conflict with Poland's rejection of the Anti-Comintern Pact and with Hitler's desire either to isolate the Soviet Union or to gain support against it. German newspapers in Danzig and Nazi Germany played an important role in inciting nationalist sentiment: headlines buzzed about how Poland was misusing its economic rights in Danzig and how German Danzigers were increasingly subjugated to the will of the Polish state. At the same time, Hitler also offered Poland additional territory as an enticement, such as the possible annexation of Lithuania, the Memel Territory, Soviet Ukraine, and Czech-inhabited lands.
However, Polish leaders continued to fear for the loss of their independence and a fate like that of Czechoslovakia, which had yielded the Sudetenland to Germany in October 1938, only to be invaded by Germany in March 1939. Some felt that the Danzig question was inextricably tied to the problems in the Polish Corridor and any settlement regarding Danzig would be one step towards the eventual loss of Poland's access to the sea. Hitler's credibility outside Germany was very low after the occupation of Czechoslovakia, though some British and French politicians approved of a peaceful revision of the corridor's borders.
In 1939, Nazi Germany made another attempt to renegotiate the status of Danzig; Poland was to retain a permanent right to use the seaport if the route through the Polish Corridor was to be constructed. However, the Polish administration distrusted Hitler and saw the plan as a threat to Polish sovereignty, practically subordinating Poland to the Axis and the Anti-Comintern Bloc while reducing the country to a state of near-servitude as its entire trade would be dependent on Germany.
Robert Coulondre, the French ambassador in Berlin, wrote in a dispatch to Foreign Minister Georges Bonnet on 30 April 1939 that Hitler sought: "...a mortgage on Polish foreign policy, while itself retaining complete liberty of action allowing the conclusion of political agreements with other countries. In these circumstances, the new settlement proposed by Germany, which would link the questions of Danzig and of the passage across the Corridor with counterbalancing questions of a political nature, would only serve to aggravate this mortgage and practically subordinate Poland to the Axis and the Anti-Comintern Bloc. Warsaw refused this in order to retain its independence."
Hitler used the issue of the status of the city as a pretext for attacking Poland, explaining during a high-level meeting of German military officials in May 1939 that his real goal was obtaining "Lebensraum" for Germany, isolating the Poles from their allies in the West, and then attacking Poland, thus avoiding a repeat of the Czech situation.
A revised and less favorable proposal came in the form of an ultimatum delivered by the Nazis in late August, after the orders had already been given to attack Poland on September 1, 1939. Nevertheless, at midnight on August 29, Joachim von Ribbentrop handed British Ambassador Sir Neville Henderson a list of terms that would allegedly ensure peace in regard to Poland. Danzig was to return to Germany and there was to be a plebiscite in the Polish Corridor; Poles who had been born or had settled there since 1919 would have no vote, while all Germans born but not living there would. An exchange of minority populations between the two countries was proposed. If Poland accepted these terms, Germany would agree to the British offer of an international guarantee, which would include the Soviet Union. A Polish plenipotentiary, with full powers, was to arrive in Berlin and accept these terms by noon the next day. The British Cabinet viewed the terms as "reasonable," except the demand for a Polish Plenipotentiary, which was seen as similar to Czechoslovak President Emil Hácha accepting Hitler's terms in mid-March 1939.
When Ambassador Józef Lipski went to see Ribbentrop on August 30, he was presented with Hitler’s demands. However, he did not have the full power to sign and Ribbentrop ended the meeting. News was then broadcast that Poland had rejected Germany's offer.
On September 1, 1939, Germany invaded Poland. German forces defeated the Polish Pomorze Army, which had been tasked with the defense of this region, and captured the corridor during the Battle of Tuchola Forest by September 5. Other notable battles took place at Westerplatte, the Polish post office in Danzig, Oksywie, and Hel.
Most of the area was inhabited by Poles, Germans, and Kashubians. The census of 1910 showed that there were 528,000 Poles (including West Slavic Kashubians) compared to 385,000 Germans in the region. The census included German soldiers stationed in the area as well as public officials sent to administer the area.
In 1886, Prussia set up a Settlement Commission to promote German settlement, while at the same time Poles, Jews, and Germans migrated west during the Ostflucht.
In 1921 the proportion of Germans in Pomerania (where the Corridor was located) was 18.8% (175,771). Over the next decade, the German population decreased by another 70,000, to a share of 9.6%. There was also a Jewish minority. In 1905, Kashubians numbered about 72,500.
After the occupation by Nazi Germany, the German authorities conducted a census in December 1939. 71% of people declared themselves Poles, and 188,000 people declared Kashubian as their language, 100,000 of whom also declared themselves Polish.
At the 1945 Potsdam Conference following the German defeat in World War II, Poland's borders were reorganized at the insistence of the Soviet Union, which occupied the entire area. Territories east of the Oder-Neisse line, including Danzig, were put under Polish administration. The Potsdam Conference did not debate the future of the territories that had been part of western Poland before the war, including the corridor; these automatically became part of the reborn state in 1945.
Many German residents were executed, others were expelled to the Soviet occupation zone, which later became East Germany.
In "The Shape of Things to Come", published in 1933, H. G. Wells predicted that the corridor would be the starting point of a future Second World War.
Other land corridors linking a country either to the sea or to a remote part of the country are: | https://en.wikipedia.org/wiki?curid=24250 |
Persephone
In Greek mythology, Persephone, also called Kore ("the maiden"), is the daughter of Zeus and Demeter. She becomes the queen of the underworld through her abduction by Hades, the god of the underworld. The myth of her abduction represents her function as the personification of vegetation, which shoots forth in spring and withdraws into the earth after harvest; hence, she is also associated with spring as well as the fertility of vegetation. Similar myths appear in the East, in the cults of male gods like Attis, Adonis, and Osiris, and in Minoan Crete.
Persephone as a vegetation goddess and her mother Demeter were the central figures of the Eleusinian Mysteries, which promised the initiated a more enjoyable prospect after death. In some versions, Persephone is the mother of Zeus' son Dionysus (or Iacchus and/or Zagreus, as a result of their identification with Dionysus). The origins of her cult are uncertain, but it was based on ancient agrarian cults of agricultural communities.
Persephone was commonly worshipped along with Demeter and with the same mysteries. To her alone were dedicated the mysteries celebrated at Athens in the month of Anthesterion. In Classical Greek art, Persephone is invariably portrayed robed, often carrying a sheaf of grain. She may appear as a mystical divinity with a sceptre and a little box, but she was mostly represented in the process of being carried off by Hades.
Her name has numerous historical variants, including Persephassa and Persephatta. In Latin her name is rendered Proserpina. She was identified by the Romans with the Italic goddess Libera.
In a Linear B Mycenaean Greek inscription on a tablet found at Pylos, dated 1400–1200 BC, John Chadwick reconstructed the name of a goddess, "*Preswa", who could be identified with Persa, daughter of Oceanus; he considered the further identification with the first element of Persephone speculative.
Pandemic
A pandemic (from Greek "pan", "all", and "demos", "people") is an epidemic of an infectious disease that has spread across a large region, for instance multiple continents or worldwide, affecting a substantial number of people. A widespread endemic disease with a stable number of infected people, such as recurring seasonal influenza, is generally excluded, as such diseases occur simultaneously in large regions of the globe rather than spreading worldwide.
Throughout human history, there have been a number of pandemics of diseases such as smallpox and tuberculosis. The most fatal pandemic in recorded history was the Black Death (also known as The Plague), which killed an estimated 75–200 million people in the 14th century. Other notable pandemics include the 1918 influenza pandemic (Spanish flu). Current pandemics include COVID-19 and HIV/AIDS.
A pandemic is an epidemic occurring on a scale that crosses international boundaries, usually affecting people on a worldwide scale. A disease or condition is not a pandemic merely because it is widespread or kills many people; it must also be infectious. For instance, cancer is responsible for many deaths but is not considered a pandemic because the disease is neither infectious nor contagious.
The World Health Organization (WHO) previously applied a six-stage classification to describe the process by which a novel influenza virus moves from the first few infections in humans through to a pandemic. It starts with a stage in which mostly animals are infected, with a few cases in which animals infect people; it then moves to the stage where the virus begins to be transmitted directly between people, and ends with the stage in which human infections have spread worldwide. In February 2020, a WHO spokesperson clarified that "there is no official category [for a pandemic]".
In a virtual press conference in May 2009 on the influenza pandemic, Dr. Keiji Fukuda, Assistant Director-General "ad interim" for Health Security and Environment, WHO said "An easy way to think about pandemic... is to say: a pandemic is a global outbreak. Then you might ask yourself: 'What is a global outbreak?' Global outbreak means that we see both spread of the agent... and then we see disease activities in addition to the spread of the virus."
In planning for a possible influenza pandemic, the WHO published a document on pandemic preparedness guidance in 1999, revised in 2005 and 2009, defining phases and appropriate actions for each phase in an "aide-mémoire" titled "WHO pandemic phase descriptions and main actions by phase". The 2009 revision, including definitions of a pandemic and the phases leading to its declaration, was finalized in February 2009. The 2009 H1N1 virus pandemic was neither on the horizon at that time nor mentioned in the document. All versions of this document refer to influenza. The phases are defined by the spread of the disease; virulence and mortality are not mentioned in the current WHO definition, although these factors have previously been included.
In 2014, the United States Centers for Disease Control and Prevention introduced a framework analogous to the WHO's pandemic stages, titled the Pandemic Intervals Framework. It includes two pre-pandemic intervals and four pandemic intervals, together with a table defining the intervals and mapping them to the WHO pandemic stages.
In 2014 the United States Centers for Disease Control and Prevention adopted the Pandemic Severity Assessment Framework (PSAF) to assess the severity of pandemics. The PSAF superseded the 2007 linear Pandemic Severity Index, which assumed 30% spread and measured case fatality rate (CFR) to assess the severity and evolution of the pandemic.
Historically, measures of pandemic severity were based on the case fatality rate. However, the case fatality rate might not be an adequate measure of pandemic severity during a pandemic response, for several reasons.
To account for the limitations of measuring the case fatality rate alone, the PSAF rates severity of a disease outbreak on two dimensions: clinical severity of illness in infected persons; and the transmissibility of the infection in the population. Each dimension can be measured using more than one metric, which are scaled to allow comparison of the different metrics. Clinical severity can instead be measured, for example, as the ratio of deaths to hospitalizations or using genetic markers of virulence. Transmissibility can be measured, for example, as the basic reproduction number R0 and serial interval or via underlying population immunity. The framework gives guidelines for scaling the various measures and examples of assessing past pandemics using the framework.
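As a rough illustration of the two-dimensional scoring idea, the sketch below scales a clinical-severity metric and a transmissibility metric onto simple ordinal scales. The cut-points are invented for the sketch and are not the CDC's published thresholds:

```python
# Illustrative sketch of a PSAF-style two-dimensional assessment.
# The metrics (case fatality ratio, basic reproduction number R0) are
# real concepts, but the cut-points below are hypothetical, chosen only
# to show the scaling idea -- not the CDC's published values.

def scale_severity(case_fatality_ratio: float) -> int:
    """Map clinical severity onto a 1-5 scale (hypothetical cut-points)."""
    cutoffs = [0.0005, 0.001, 0.005, 0.01]  # fraction of cases that die
    return 1 + sum(case_fatality_ratio > c for c in cutoffs)

def scale_transmissibility(r0: float) -> int:
    """Map transmissibility onto a 1-5 scale (hypothetical cut-points)."""
    cutoffs = [1.1, 1.4, 1.8, 2.5]
    return 1 + sum(r0 > c for c in cutoffs)

def assess(case_fatality_ratio: float, r0: float) -> tuple[int, int]:
    """Return the (clinical severity, transmissibility) coordinate pair."""
    return scale_severity(case_fatality_ratio), scale_transmissibility(r0)

# A mild, seasonal-flu-like outbreak vs. a severe, highly transmissible one:
print(assess(0.0001, 1.3))  # -> (1, 2)
print(assess(0.02, 2.0))    # -> (5, 4)
```

Placing an outbreak at a coordinate on this grid, rather than on a single linear index, is what lets the framework distinguish, say, a highly transmissible but mild disease from a rare but deadly one.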
The basic strategies in the control of an outbreak are containment and mitigation. Containment may be undertaken in the early stages of the outbreak, including contact tracing and isolating infected individuals to stop the disease from spreading to the rest of the population, other public health interventions on infection control, and therapeutic countermeasures such as vaccinations which may be effective if available. When it becomes apparent that it is no longer possible to contain the spread of the disease, management will then move on to the mitigation stage, in which measures are taken to slow the spread of the disease and mitigate its effects on society and the healthcare system. In reality, containment and mitigation measures may be undertaken simultaneously.
A key part of managing an infectious disease outbreak is trying to decrease the epidemic peak, known as "flattening the epidemic curve". This helps decrease the risk of health services being overwhelmed, and provides more time for a vaccine and treatment to be developed. A broad group of the so-called non-pharmaceutical interventions may be taken to manage the outbreak. In a flu pandemic, these actions may include: personal preventive measures such as hand hygiene, wearing face-masks, and self-quarantine; community measures aimed at social distancing such as closing schools and cancelling mass gatherings; community engagement to encourage acceptance and participation in such interventions; and environmental measures such as cleaning of surfaces.
Another strategy, suppression, requires more extreme long-term non-pharmaceutical interventions so as to reverse the pandemic by reducing the basic reproduction number to less than 1. The suppression strategy, which includes stringent population-wide social distancing, home isolation of cases, and household quarantine, was undertaken by China during the COVID-19 pandemic, where entire cities were placed under lockdown; such a strategy carries considerable social and economic costs.
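The difference between mitigation (lowering the peak) and suppression (driving the reproduction number below 1) can be seen in a minimal SIR epidemic model. This is a standard textbook model, not part of any specific WHO or CDC guidance, and the parameter values below are arbitrary illustrative choices:

```python
# Minimal discrete-time SIR model illustrating why suppression targets R0 < 1.
# Parameters are arbitrary illustrative values, not fitted to any real disease.

def sir_peak_infected(r0: float, days: int = 500, gamma: float = 0.1) -> float:
    """Run a simple SIR epidemic and return the peak infected fraction."""
    beta = r0 * gamma            # transmission rate implied by R0 = beta/gamma
    s, i, r = 0.999, 0.001, 0.0  # fractions: susceptible, infected, recovered
    peak = i
    for _ in range(days):
        new_inf = beta * s * i   # new infections this step
        new_rec = gamma * i      # new recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

print(sir_peak_infected(2.5))  # unmitigated: a large epidemic peak
print(sir_peak_infected(1.3))  # mitigation: a lower, "flattened" peak
print(sir_peak_infected(0.8))  # suppression (R0 < 1): the outbreak dies out
```

With the reproduction number above 1 the infected fraction grows until susceptibles are depleted; pushing it below 1 means each case infects fewer than one other person on average, so the infected fraction only declines, which is the suppression goal described above.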
Although the WHO uses the term "global epidemic" to describe HIV, some authors use the term "pandemic".
HIV is believed to have originated in Africa. AIDS is currently a pandemic, with infection rates as high as 25% in southern and eastern Africa. In 2006, the HIV prevalence rate among pregnant women in South Africa was 29%. Effective education about safer sexual practices and bloodborne infection precautions training have helped to slow down infection rates in several African countries sponsoring national education programs.
A new strain of coronavirus, first identified in Wuhan, Hubei province, China, in late December 2019, has caused a cluster of cases of an acute respiratory disease referred to as coronavirus disease 2019 (COVID-19). According to media reports, more than 200 countries and territories have been affected by COVID-19, with major outbreaks occurring in central China, Iran, Western Europe and the United States. On 11 March 2020, the World Health Organization characterized the spread of COVID-19 as a pandemic. By mid-2020, the number of people infected with COVID-19 had reached 10,591,470 worldwide, of whom 5,800,659 had recovered; the death toll was 514,050. These figures are believed to be understated, as testing did not commence in the initial stages of the outbreak and many people infected by the virus have no or only mild symptoms and may not have been tested. Similarly, the number of recoveries may be understated, as tests are required before cases are officially recognised as recovered, and fatalities are sometimes attributed to other conditions. This was especially the case in large urban areas, where a non-trivial number of patients died in their private residences. It was later discovered that asymptomatic hypoxia due to COVID-19 pulmonary disease may be responsible for many such cases.
In human history, it is generally zoonoses such as influenza and tuberculosis which constitute most of the widespread outbreaks, resulting from the domestication of animals. A number of particularly significant epidemics deserve mention above the "mere" destruction of cities.
Encounters between European explorers and populations in the rest of the world often introduced epidemics of extraordinary virulence. Disease killed part of the native population of the Canary Islands in the 16th century (Guanches). Half the native population of Hispaniola in 1518 was killed by smallpox. Smallpox also ravaged Mexico in the 1520s, killing 150,000 in Tenochtitlán alone, including the emperor, and Peru in the 1530s, aiding the European conquerors. Measles killed a further two million Mexican natives in the 17th century. In 1618–1619, smallpox wiped out 90% of the Massachusetts Bay Native Americans. During the 1770s, smallpox killed at least 30% of the Pacific Northwest Native Americans. Smallpox epidemics in 1780–1782 and 1837–1838 brought devastation and drastic depopulation among the Plains Indians. Some believe the death of up to 95% of the Native American population of the New World was caused by Europeans introducing Old World diseases such as smallpox, measles and influenza. Over the centuries, Europeans had developed high degrees of herd immunity to these diseases, while the indigenous peoples had no such immunity.
Smallpox devastated the native population of Australia, killing around 50% of Indigenous Australians in the early years of British colonisation. It also killed many New Zealand Māori. In 1848–49, as many as 40,000 out of 150,000 Hawaiians are estimated to have died of measles, whooping cough and influenza. Introduced diseases, notably smallpox, nearly wiped out the native population of Easter Island. Measles killed more than 40,000 Fijians, approximately one-third of the population, in 1875, and in the early 19th century devastated the Andamanese population. The Ainu population decreased drastically in the 19th century, due in large part to infectious diseases brought by Japanese settlers pouring into Hokkaido.
Researchers concluded that syphilis was carried from the New World to Europe after Columbus's voyages. The findings suggested Europeans could have carried the nonvenereal tropical bacteria home, where the organisms may have mutated into a more deadly form in the different conditions of Europe. The disease was more frequently fatal than it is today. Syphilis was a major killer in Europe during the Renaissance. Between 1602 and 1796, the Dutch East India Company sent almost a million Europeans to work in Asia. Ultimately, fewer than a third made their way back to Europe. The majority died of diseases. Disease killed more British soldiers in India and South Africa than war.
As early as 1803, the Spanish Crown organized a mission (the Balmis expedition) to transport the smallpox vaccine to the Spanish colonies, and establish mass vaccination programs there. By 1832, the federal government of the United States established a smallpox vaccination program for Native Americans. From the beginning of the 20th century onwards, the elimination or control of disease in tropical countries became a driving force for all colonial powers. The sleeping sickness epidemic in Africa was arrested due to mobile teams systematically screening millions of people at risk. In the 20th century, the world saw the biggest increase in its population in human history due to a drop in the mortality rate in many countries as a result of medical advances. The world population has grown from 1.6 billion in 1900 to an estimated 6.8 billion in 2011.
Since it became widespread in the 19th century, cholera has killed tens of millions of people; in Mexico alone it claimed 200,000 lives.
Typhus is sometimes called "camp fever" because of its pattern of flaring up in times of strife. (It is also known as "gaol fever" and "ship fever", for its habits of spreading wildly in cramped quarters, such as jails and ships.) Emerging during the Crusades, it had its first impact in Europe in 1489, in Spain. During fighting between the Christian Spaniards and the Muslims in Granada, the Spanish lost 3,000 to war casualties, and 20,000 to typhus. In 1528, the French lost 18,000 troops in Italy, and lost supremacy in Italy to the Spanish. In 1542, 30,000 soldiers died of typhus while fighting the Ottomans in the Balkans.
During the Thirty Years' War (1618–1648), about eight million Germans were killed by bubonic plague and typhus. The disease also played a major role in the destruction of Napoleon's "Grande Armée" in Russia in 1812. During the retreat from Moscow, more French military personnel died of typhus than were killed by the Russians. Of the 450,000 soldiers who crossed the Neman on 25 June 1812, fewer than 40,000 returned. More military personnel were killed by typhus between 1500 and 1914 than by military action. In early 1813, Napoleon raised a new army of 500,000 to replace his Russian losses. In the campaign of that year, more than 219,000 of Napoleon's soldiers died of typhus. Typhus was also a major factor in the Irish Potato Famine. During World War I, typhus epidemics killed more than 150,000 in Serbia. There were about 25 million infections and 3 million deaths from epidemic typhus in Russia from 1918 to 1922. Typhus also killed numerous prisoners in the Nazi concentration camps and Soviet prisoner-of-war camps during World War II. More than 3.5 million Soviet POWs died out of the 5.7 million in Nazi custody.
Smallpox was a contagious disease caused by the variola virus. The disease killed an estimated 400,000 Europeans per year during the closing years of the 18th century. During the 20th century, it is estimated that smallpox was responsible for 300–500 million deaths. As recently as the early 1950s, an estimated 50 million cases of smallpox occurred in the world each year. After successful vaccination campaigns throughout the 19th and 20th centuries, the WHO certified the eradication of smallpox in December 1979. To this day, smallpox is the only human infectious disease to have been completely eradicated, and one of two infectious viruses ever to be eradicated, along with rinderpest.
Historically, measles was prevalent throughout the world, as it is highly contagious. According to the U.S. National Immunization Program, 90% of people were infected with measles by age 15. Before the vaccine was introduced in 1963, there were an estimated three to four million cases in the U.S. each year. Measles killed around 200 million people worldwide over the last 150 years. In 2000 alone, measles killed some 777,000 people out of 40 million cases worldwide.
Measles is an endemic disease, meaning it has been continually present in a community, and many people develop resistance. In populations that have not been exposed to measles, exposure to a new disease can be devastating. In 1529, a measles outbreak in Cuba killed two-thirds of the natives who had previously survived smallpox. The disease had ravaged Mexico, Central America, and the Inca civilization.
One-quarter of the world's current population has been infected with "Mycobacterium tuberculosis", and new infections occur at a rate of one per second. About 5–10% of these latent infections will eventually progress to active disease, which, if left untreated, kills more than half its victims. Annually, eight million people become ill with tuberculosis, and two million die from the disease worldwide. In the 19th century, tuberculosis killed an estimated one-quarter of the adult population of Europe; by 1918, one in six deaths in France were still caused by tuberculosis. During the 20th century, tuberculosis killed approximately 100 million people. TB is still one of the most important health problems in the developing world.
Leprosy, also known as Hansen's disease, is caused by a bacillus, "Mycobacterium leprae". It is a chronic disease with an incubation period of up to five years. Since 1985, 15 million people worldwide have been cured of leprosy.
Historically, leprosy has affected people since at least 600 BC. Leprosy outbreaks began to occur in Western Europe around 1000 AD. Numerous "leprosoria", or leper hospitals, sprang up in the Middle Ages; Matthew Paris estimated that in the early 13th century, there were 19,000 of them across Europe.
Malaria is widespread in tropical and subtropical regions, including parts of the Americas, Asia, and Africa. Each year, there are approximately 350–500 million cases of malaria. Drug resistance poses a growing problem in the treatment of malaria in the 21st century, since resistance is now common against all classes of antimalarial drugs, except for the artemisinins.
Malaria was once common in most of Europe and North America, where it is now for all practical purposes non-existent. Malaria may have contributed to the decline of the Roman Empire, and the disease became known as "Roman fever". "Plasmodium falciparum" became a real threat to colonists and indigenous people alike when it was introduced into the Americas along with the slave trade. Malaria devastated the Jamestown colony and regularly ravaged the South and Midwest of the United States. By 1830, it had reached the Pacific Northwest. During the American Civil War, there were more than 1.2 million cases of malaria among soldiers of both sides. The southern U.S. continued to be afflicted with millions of cases of malaria into the 1930s.
Yellow fever has been a source of several devastating epidemics. Cities as far north as New York, Philadelphia, and Boston were hit with epidemics. In 1793, one of the largest yellow fever epidemics in U.S. history killed as many as 5,000 people in Philadelphia—roughly 10% of the population. About half of the residents had fled the city, including President George Washington.
Another major outbreak of the disease struck the Mississippi River Valley in 1878, with deaths estimated at around 20,000. Among the hardest hit places was Memphis, Tennessee, where 5,000 people were killed and over 20,000 fled, then representing over half the city’s population, many of whom never returned. In colonial times, West Africa became known as "the white man's grave" because of malaria and yellow fever.
Antibiotic-resistant microorganisms, sometimes referred to as "superbugs", may contribute to the re-emergence of diseases which are currently well controlled. For example, cases of tuberculosis that are resistant to traditionally effective treatments remain a cause of great concern to health professionals. Every year, nearly half a million new cases of multidrug-resistant tuberculosis (MDR-TB) are estimated to occur worldwide. China and India have the highest rates of MDR-TB. The World Health Organization (WHO) reports that approximately 50 million people worldwide are infected with MDR-TB, with 79 percent of those cases resistant to three or more antibiotics. In 2005, 124 cases of MDR-TB were reported in the United States. Extensively drug-resistant tuberculosis (XDR-TB) was identified in Africa in 2006, and subsequently discovered to exist in 49 countries, including the United States. The WHO estimates that there are about 40,000 new cases of XDR-TB per year.
In the past 20 years, common bacteria including "Staphylococcus aureus", "Serratia marcescens" and Enterococcus, have developed resistance to various antibiotics such as vancomycin, as well as whole classes of antibiotics, such as the aminoglycosides and cephalosporins. Antibiotic-resistant organisms have become an important cause of healthcare-associated (nosocomial) infections (HAI). In addition, infections caused by community-acquired strains of methicillin-resistant "Staphylococcus aureus" (MRSA) in otherwise healthy individuals have become more frequent in recent years.
Viral hemorrhagic fevers such as Ebola virus disease, Lassa fever, Rift Valley fever, Marburg virus disease, Bolivian hemorrhagic fever and Crimean-Congo hemorrhagic fever are highly contagious and deadly diseases, with the theoretical potential to become pandemics. Their ability to spread efficiently enough to cause a pandemic is limited, however, as transmission of these viruses requires close contact with the infected vector, and the vector has only a short time before death or serious illness. Furthermore, the short time between a vector becoming infectious and the onset of symptoms allows medical professionals to quickly quarantine vectors, and prevent them from carrying the pathogen elsewhere. Genetic mutations could occur, which could elevate their potential for causing widespread harm; thus close observation by contagious disease specialists is merited.
Coronaviruses (CoV) are a large family of viruses that cause illness ranging from the common cold to more severe diseases such as Middle East Respiratory Syndrome (MERS-CoV) and Severe Acute Respiratory Syndrome (SARS-CoV). A new strain of coronavirus (SARS-CoV-2) causes Coronavirus disease 2019, or COVID-19, which was declared a pandemic by the WHO on 11 March 2020.
Some coronaviruses are zoonotic, meaning they are transmitted between animals and people. Detailed investigations found that SARS-CoV was transmitted from civet cats to humans, and MERS-CoV from dromedary camels to humans. Several known coronaviruses are circulating in animals that have not yet infected humans. Common signs of infection include respiratory symptoms, fever, cough, shortness of breath, and breathing difficulties. In more severe cases, infection can cause pneumonia, severe acute respiratory syndrome, kidney failure and even death. Standard recommendations to prevent the spread of infection include regular hand washing, covering mouth and nose when coughing and sneezing, thoroughly cooking meat and eggs, wearing a face mask, and avoiding close contact with anyone showing symptoms of respiratory illness such as coughing and sneezing. The recommended distance from other people is six feet, a practice more commonly called social distancing.
In 2003, the Italian physician Carlo Urbani (1956–2003) was the first to identify severe acute respiratory syndrome (SARS) as a new and dangerously contagious disease; in the process, he himself became infected and died. It is caused by a coronavirus dubbed SARS-CoV. Rapid action by national and international health authorities such as the World Health Organization helped to slow transmission and eventually broke the chain of transmission, ending the localized epidemics before they could become a pandemic. However, the disease has not been eradicated and could re-emerge, which warrants monitoring and reporting of suspicious cases of atypical pneumonia.
Wild aquatic birds are the natural hosts for a range of influenza A viruses. Occasionally, viruses are transmitted from these species to other species, and may then cause outbreaks in domestic poultry or, rarely, in humans.
In February 2004, avian influenza virus was detected in birds in Vietnam, increasing fears of the emergence of new variant strains. It is feared that if the avian influenza virus combines with a human influenza virus (in a bird or a human), the new subtype created could be both highly contagious and highly lethal in humans. Such a subtype could cause a global influenza pandemic, similar to the Spanish flu or the lower mortality pandemics such as the Asian Flu and the Hong Kong Flu.
From October 2004 to February 2005, some 3,700 test kits of the 1957 Asian Flu virus were accidentally spread around the world from a lab in the U.S.
In May 2005, scientists urgently called upon nations to prepare for a global influenza pandemic that could strike as much as 20% of the world's population.
In October 2005, cases of the avian flu (the deadly strain H5N1) were identified in Turkey. EU Health Commissioner Markos Kyprianou said: "We have received now confirmation that the virus found in Turkey is an avian flu H5N1 virus. There is a direct relationship with viruses found in Russia, Mongolia and China." Cases of bird flu were also identified shortly thereafter in Romania, and then Greece. Possible cases of the virus have also been found in Croatia, Bulgaria and the United Kingdom.
By November 2007, numerous confirmed cases of the H5N1 strain had been identified across Europe. However, by the end of October 2007, only 59 people had died as a result of H5N1, far fewer than in previous influenza pandemics.
Avian flu cannot be categorized as a "pandemic" because the virus cannot yet cause sustained and efficient human-to-human transmission. Cases so far are recognized to have been transmitted from bird to human, but as of December 2006 there had been few (if any) cases of proven human-to-human transmission. Regular influenza viruses establish infection by attaching to receptors in the throat and lungs, but the avian influenza virus can attach only to receptors located deep in the lungs of humans, requiring close, prolonged contact with infected patients, and thus limiting person-to-person transmission.
An outbreak of Zika virus began in 2015 and strongly intensified throughout the start of 2016, with more than 1.5 million cases across more than a dozen countries in the Americas. The World Health Organization warned that Zika had the potential to become an explosive global pandemic if the outbreak was not controlled.
In 2016, the Commission on a Global Health Risk Framework for the Future estimated that pandemic disease events would cost the global economy over $6 trillion in the 21st century—over $60 billion per year. The same report recommended spending $4.5 billion annually on global prevention and response capabilities to reduce the threat posed by pandemic events, a figure that the World Bank Group raised to $13 billion in a 2019 report. It has been suggested that such costs be paid from a tax on aviation rather than from, e.g., income taxes, given the crucial role of air traffic in transforming local epidemics into pandemics (being the only factor considered in state-of-the-art models of long-range disease transmission).
The 2019–2020 COVID-19 pandemic is expected to have a profound negative effect on the global economy, potentially for years to come, with substantial drops in GDP accompanied by increases in unemployment noted around the world. The slowdown of economic activity during the COVID-19 pandemic had a profound effect on emissions of pollutants and greenhouse gases. The reduction of air pollution, and of the economic activity associated with it, during a pandemic was first documented by Alexander F. More for the Black Death plague pandemic, showing the lowest pollution levels of the last 2,000 years occurring during that pandemic, owing to its 40–60% death rate throughout Eurasia.
Pervez Musharraf
Born in Delhi during the British Raj, Musharraf was raised in Karachi and Istanbul. He studied mathematics at Forman Christian College in Lahore and was also educated at the Royal College of Defence Studies in the United Kingdom. Musharraf entered the Pakistan Military Academy in 1961 and was commissioned into the Pakistan Army in 1964, playing an active role in the Afghan civil war. Musharraf saw action during the Indo-Pakistani War of 1965 as a second lieutenant. By the 1980s, he was commanding an artillery brigade. In the 1990s, Musharraf was promoted to major general and assigned an infantry division, and later commanded the Special Services Group. Soon after, he also served as deputy military secretary and director general of military operations.
Musharraf rose to national prominence when he was promoted to four-star general by Prime Minister Nawaz Sharif in 1998, making Musharraf the head of the armed forces. He led the Kargil infiltration that almost brought India and Pakistan to a full-fledged war in 1999. After months of contentious relations between Sharif and Musharraf, Sharif unsuccessfully attempted to remove Musharraf as the army's leader. In retaliation, the army staged a coup d'état in 1999, which allowed Musharraf to take over Pakistan as president in 2001. He subsequently placed Sharif under strict house arrest before launching official criminal proceedings against him.
Musharraf initially remained the Chairman of the Joint Chiefs and the Chief of the Army Staff, relinquishing the former position upon confirmation of his presidency. However, he remained the Army Chief until retiring in 2007. The initial stages of his presidency featured controversial wins in a state referendum to grant him a five-year presidential term, and a general election in 2002. During his presidency, he advocated for the Third Way, adopting a synthesis of conservatism and socialism. Musharraf reinstated the constitution in 2002, though it was heavily amended within the Legal Framework Order. He appointed Shaukat Aziz to replace Sharif in 2004, and oversaw directed policies against terrorism, becoming a key player in the American-led war on terror.
Musharraf pushed for social liberalism under his enlightened moderation program and promoted economic liberalisation, while he also banned trade unions. Musharraf's presidency coincided with a rise of overall gross domestic product by around 50%; in the same period, domestic savings declined, and economic inequality rose at a rapid rate. Musharraf's government has also been accused of human rights abuses, and he survived a number of assassination attempts during his presidency. When Aziz departed as prime minister, and after approving the suspension of the judicature in 2007, Musharraf's position weakened dramatically. Tendering his resignation to avoid impeachment in 2008, Musharraf emigrated to London in a self-imposed exile. His legacy as leader is mixed; he saw the emergence of a more assertive middle class, but an open disregard for civilian institutions greatly weakened Pakistan.
Musharraf returned to Pakistan in 2013 to participate in that year's general election, but was disqualified from participating after the country's high courts issued arrest warrants for him and Aziz for their alleged involvement in the assassinations of Nawab Akbar Bugti and Benazir Bhutto. Upon Sharif's re-election in 2013, he initiated high treason charges against Musharraf for implementing emergency rule and suspending the constitution in 2007. The case against Musharraf continued after Sharif's removal from office in 2017, the same year in which Musharraf was declared an "absconder" in the Bhutto assassination case by virtue of moving to Dubai. In 2019, Musharraf was sentenced to death in absentia on the treason charges, although the death sentence was later annulled by the Lahore High Court.
Musharraf was born on 11 August 1943 to an Urdu-speaking family in Delhi, British India, the son of Syed Musharrafuddin and his wife Begum Zarin Musharraf. His family were Muslims who were also Sayyids, claiming descent from prophet Muhammad. Syed Musharraf graduated from Aligarh Muslim University and entered the civil service, which was an extremely prestigious career under British rule. He came from a long line of government officials as his great-grandfather was a tax collector while his maternal grandfather was a "qazi" (judge). Musharraf's mother Zarin, born in the early 1920s, grew up in Lucknow and received her schooling there, after which she graduated from Indraprastha College at Delhi University, taking a bachelor's degree in English literature. She then married and devoted herself to raising a family. His father, Syed, was an accountant who worked at the foreign office in the British Indian government and eventually became an accounting director.
Musharraf was the second of three children, all boys. His elder brother, Javed Musharraf, based in Rome, is an economist and one of the directors of the International Fund for Agricultural Development. His younger brother, Naved Musharraf, is an anaesthesiologist based in Illinois, US.
At the time of his birth, Musharraf's family lived in a large home called "Nehar Wali Haveli", meaning "House Next to the Canal", which had belonged to his father's family for many years. Sir Syed Ahmed Khan's family lived next door. It is indicative of "the family's western education and social prominence" that the house's title deeds, although written entirely in Urdu, were signed by Musharraf's father in English.
Musharraf was four years old when India achieved independence and Pakistan was created as the homeland for India's Muslims. His family left for Pakistan in August 1947, a few days before independence. His father joined the Pakistan Civil Services and began to work for the Pakistani government; later, his father joined the Foreign Ministry, taking up an assignment in Turkey. In his autobiography, "In the Line of Fire", Musharraf elaborates on his first experience with death, after falling off a mango tree.
Musharraf's family moved to Ankara in 1949, when his father became part of a diplomatic deputation from Pakistan to Turkey. He learned to speak Turkish. He had a dog named Whiskey that gave him a "lifelong love for dogs". He played sports in his youth. In 1956, he left Turkey and returned to Pakistan in 1957, where he attended Saint Patrick's School in Karachi and was accepted at Forman Christian College in Lahore. At Forman, Musharraf chose mathematics as his major, in which he excelled academically, but later developed an interest in economics.
In 1961, at the age of 18, Musharraf entered the Pakistan Military Academy at Kakul. During his time at the PMA and the initial joint military testing, Musharraf shared a room with PQ Mehdi of the Pakistan Air Force and Abdul Aziz Mirza of the Navy (both later reached four-star assignments and served with Musharraf). After sitting the exams and entrance interviews, all three cadets went to watch the acclaimed Urdu film "Savera" (lit. "Dawn"), as Musharraf recalls in his autobiography, published in 2006. With his friends, Musharraf passed the standardised physical, psychological, and officer-training exams, and also took part in discussions of socioeconomic issues; all three were interviewed by joint military officers designated as Commandants. The next day, Musharraf, PQ Mehdi, and Mirza reported to the PMA and were selected for training in their respective arms of commission.
In 1964, Musharraf graduated with a bachelor's degree in the 29th PMA Long Course, together with Ali Kuli Khan and his lifelong friend Abdul Aziz Mirza. He was commissioned as a second lieutenant in the artillery regiment and posted near the Indo-Pakistani border. During this time in the artillery regiment, Musharraf maintained his close friendship and contact with Mirza through letters and telephone calls, even in difficult times when Mirza, after joining the Navy Special Service Group, was stationed in East Pakistan as a military advisor to the Eastern Corps.
His first battlefield experience was with an artillery regiment during the intense fighting for the Khemkaran sector in the Second Kashmir War. He also participated in the Lahore and Sialkot war zones during the conflict. During the war, Musharraf developed a reputation for sticking to his post under shellfire. He received the Imtiazi Sanad medal for gallantry.
Shortly after the end of the War of 1965, he joined the elite Special Service Group (SSG). He served in the SSG from 1966 to 1972, and was promoted to captain and then to major during this period. During the 1971 war with India, he was a company commander of an SSG commando battalion. He was scheduled to deploy to East Pakistan to join the army–navy joint military operations, but the deployment was cancelled after the Indian Army's advances towards southern Pakistan.
Musharraf was promoted to lieutenant colonel in 1974 and to colonel in 1978. As a staff officer in the 1980s, he studied political science at the National Defence University (NDU), and then briefly served as assistant professor of war studies at the Command and Staff College and as assistant professor of political science at the NDU. One of his professors at NDU was General Jehangir Karamat, who served as Musharraf's guidance counselor and instructor and had a significant influence on Musharraf's philosophy and critical thinking. He did not play any significant role in Pakistan's proxy war during the 1979–1989 Soviet invasion of Afghanistan. In 1987, he became commander of a new SSG brigade near the Siachen Glacier, personally chosen for the assignment by then-President and Chief of Army Staff General Zia-ul-Haq because of Musharraf's wide experience in mountain and arctic warfare. In September 1987, Musharraf commanded an assault at Bilafond La before being pushed back.
He studied at the Royal College of Defence Studies (RCDS) in Britain during 1990–91. His course-mates included Major Generals B. S. Malik and Ashok Mehta of the Indian Army, and Ali Kuli Khan of the Pakistan Army. In his course studies, Musharraf performed extremely well relative to his classmates, submitted a master's thesis titled "Impact of Arms Race in the Indo-Pakistan Subcontinent", and earned good remarks. He submitted his thesis to Commandant General Antony Walker, who regarded Musharraf as one of the finest students he had seen in his entire career. At one point, Walker described Musharraf as: "A capable, articulate and extremely personable officer, who made a valuable impact at RCDS. His country is fortunate to have the services of a man of his undeniable quality." He graduated with a master's degree from RCDS and returned to Pakistan soon after. Upon returning, Musharraf took an interest in the emerging Pakistani rock music genre, and often listened to rock music after leaving duty. During that decade, regarded as the time when rock music in Pakistan began, Musharraf was reportedly keen on the Western fashions of the time, which were then very popular in government and public circles. Whilst in the Army he earned the nickname "Cowboy" for his westernized ways and his interest in Western clothing.
Earlier, in 1988–89, as a brigadier, Musharraf had proposed the Kargil infiltration to Prime Minister Benazir Bhutto, but she rebuffed the plan. In 1991–93, he secured a two-star promotion to the rank of major general and held command of the 40th Army Division as its GOC, stationed in the Okara Military District in Punjab Province. In 1993–95, Major General Musharraf worked closely with the Chief of Army Staff as Director-General of the Pakistan Army's Directorate General for Military Operations (DGMO). During this time, Musharraf became close to the engineering officer and director-general of the ISI, Lieutenant General Javed Nasir, and worked with him while directing operations in the Bosnian war. His political philosophy was influenced by Benazir Bhutto, who mentored him on various occasions, and Musharraf was generally close to her on military policy issues concerning India. From 1993 to 1995, Musharraf repeatedly visited the United States as part of Benazir Bhutto's delegation. It was Maulana Fazal-ur-Rehman who lobbied Benazir Bhutto for his promotion and subsequently got Musharraf's promotion papers approved by her, which eventually led to his appointment to her key staff. In 1993, Musharraf personally assisted Benazir Bhutto in holding a secret meeting at the Pakistani Embassy in Washington, D.C. with officials from the Mossad and a special envoy of Israeli premier Yitzhak Rabin. It was during this time that Musharraf built an extremely cordial relationship with Shaukat Aziz, who was then serving as the executive president of global financial services at Citibank.
After the collapse of the fractious Afghan government, Musharraf assisted General Babar and the Inter-Services Intelligence (ISI) in devising a policy of supporting the newly formed Taliban in the Afghan civil war against the Northern Alliance government. On policy issues, Musharraf befriended senior justice of the Supreme Court of Pakistan Justice Rafiq Tarar (later president) and held common beliefs with the latter.
His last military field operations posting was in the Mangla region of the Kashmir Province in 1995 when Benazir Bhutto approved the promotion of Musharraf to three-star rank, Lieutenant-General. Between 1995 and 1998, Lieutenant-General Musharraf was the corps commander (CC-I) of I "Strike" Corps stationed in Mangla, Mangla Military District.
Although Nawaz Sharif and General Jehangir Karamat were both educated men who held common beliefs concerning national security, problems arose between them in October 1998, while Karamat was Chairman of the Joint Chiefs and Chief of Army Staff. While addressing the officers and cadets at the Naval War College, General Karamat advocated the creation of a National Security Council, which would be backed by a "team of civil-military experts" to devise policies for resolving ongoing civil-military problems; he also recommended a "neutral but competent bureaucracy and administration at the federal level and the establishment of local governments in the four provinces". This proposal was met with hostility and led to Nawaz Sharif's dismissal of General Karamat. In turn, this reduced Sharif's standing in public circles and drew much criticism from Leader of the Opposition Benazir Bhutto.
There were three lieutenant generals potentially in line to succeed General Karamat at four-star rank as chief of army staff. Lieutenant General Ali Kuli Khan, a graduate of the PMA and of RMA Sandhurst, was an extremely capable staff officer and well liked in public circles, but was seen as close to the former chief of army staff, General (retired) Abdul Vaheed, and was not promoted. Second in line was Lieutenant General Khalid Nawaz Khan, who was popularly known for his ruthless leadership in the army, particularly for his unforgiving attitude toward his junior officers. Lieutenant General Nawaz Khan was also known for his anti-muhajir sentiment, and was particularly hardline against the MQM.
Musharraf was third in line, and was well regarded by the general public and the armed forces. He also had an excellent academic standing from his college and university studies. Musharraf was strongly favoured by the Prime Minister's colleagues as a straight officer with democratic views. Nisar Ali Khan and Shahbaz Sharif recommended Musharraf, and Prime Minister Nawaz Sharif personally promoted him to the rank of four-star general to replace Karamat.
After the Kargil incident, Musharraf did not wish to be the Chairman of the Joint Chiefs: he favoured the chief of naval staff, Admiral Bokhari, for the role, and claimed that "he did not care". Prime Minister Sharif was displeased by this suggestion, given the hostile nature of his relationship with the Admiral. Musharraf further exacerbated his divide with Nawaz Sharif after recommending the forced retirement of senior officers close to the Prime Minister, including Lieutenant General Tariq Pervez (or "TP"), commander of XII Corps, who was a brother-in-law of a high-profile cabinet minister. According to Musharraf, Lieutenant General TP was an ill-mannered, foul-mouthed, ill-disciplined officer who caused a great deal of dissent within the armed forces. Nawaz Sharif's announcement of General Musharraf's promotion to Chairman of the Joint Chiefs escalated tensions with Admiral Bokhari: upon hearing the news, he launched a strong protest against the Prime Minister, who relieved him of his duties the next morning. It was during his time as Chairman of the Joint Chiefs that Musharraf began to build friendly relations with the United States military establishment, including General Anthony Zinni of the US Marine Corps and Generals Tommy Franks, John Abizaid, and Colin Powell of the US Army, all premier four-star generals.
The Pakistan Army originally conceived the Kargil plan after the Siachen conflict, but the plan was rebuffed repeatedly by senior civilian and military officials. Musharraf was a leading strategist behind the Kargil conflict. From March to May 1999, he ordered the secret infiltration of Kashmiri forces into the Kargil district. After India discovered the infiltration, a fierce Indian offensive nearly led to a full-scale war. However, Sharif withdrew support from the insurgents in the border conflict in July because of heightened international pressure. Sharif's decision antagonized the Pakistan Army, and rumors of a possible coup began emerging soon afterward. Sharif and Musharraf dispute who was responsible for the Kargil conflict and for Pakistan's withdrawal.
The operation met with great hostility in public circles and wide-scale disapproval in the media, which roundly criticised it. Musharraf became involved in serious altercations with his senior officers: chief of naval staff Admiral Fasih Bokhari, chief of air staff Air Chief Marshal PQ Mehdi, and senior Lieutenant General Ali Kuli Khan. Admiral Bokhari ultimately demanded a full-fledged joint-service court martial of General Musharraf, while General Kuli Khan lambasted the war as "a disaster bigger than the East-Pakistan tragedy", adding that the plan was "flawed in terms of its conception, tactical planning and execution" and had ended in "sacrificing so many soldiers." Problems also arose with his lifelong friend, chief of air staff Air Chief Marshal Pervez Mehdi, when the air chief refused to participate in or authorise any air strikes to support the army's operations in the Kargil region.
During his last meeting with the Prime Minister, Musharraf faced grave criticism over the results of the Kargil infiltration from the principal military intelligence (MI) director, Lieutenant General Jamshed Gulzar Kiani, who maintained in the meeting: "(...) whatever has been written there is against logic. If you catch your enemy by the jugular vein he would react with full force... If you cut enemy supply lines, the only option for him will be to ensure supplies by air... (sic).. at that situation the Indian Army was unlikely to confront and it had to come up to the occasion. It is against wisdom that you dictate to the enemy to keep the war limited to a certain front..."
Nawaz Sharif has maintained that the operation was conducted without his knowledge. However, details of the briefings he received from the military before and after the Kargil operation have become public. Before the operation, between January and May, Sharif was briefed in separate meetings: the army briefed him about Indian troop movement along the LOC at Skardu on 29 January 1999, at Kel on 5 February, at the GHQ on 12 March, and finally at the ISI headquarters on 17 May. At the end of the June DCC meeting, a tense Sharif turned to the army chief and said "you should have told me earlier", whereupon Musharraf pulled out his notebook and repeated the dates and contents of around seven briefings he had given him since the beginning of January.
Military officials from Musharraf's Joint Staff Headquarters (JS HQ) met with regional corps commanders three times in late September in anticipation of a possible coup. To quieten rumours of a falling-out between Musharraf and Sharif, Sharif officially confirmed the remaining two years of Musharraf's term on 30 September.
Musharraf had left for a weekend trip to take part in the Sri Lankan Army's 50th-anniversary celebrations. As he was returning from the official visit to Colombo, his flight was denied permission to land at Karachi International Airport after orders were issued from the Prime Minister's office. Upon Nawaz Sharif's announcement that Musharraf was being replaced by Khwaja Ziauddin, the third replacement of the country's top military commander in less than two years, local military commanders began to mobilize troops towards Islamabad from nearby Rawalpindi. The military placed Sharif under house arrest, but in a last-ditch effort Sharif privately ordered Karachi air traffic controllers to redirect Musharraf's flight to India. The plan failed after soldiers surrounded the airport control tower. At 2:50 am on 13 October, Musharraf addressed the nation with a recorded message.
Musharraf met with President Rafiq Tarar on 13 October to deliberate on legitimising the coup. On 15 October, Musharraf ended hopes of a quick transition to democracy when he declared a state of emergency, suspended the Constitution, and assumed power as Chief Executive. He also quickly purged the government of political enemies, notably Ziauddin and national airline chief Shahid Khaqan Abbassi. On 17 October, he gave his second national address and established a seven-member military-civilian council to govern the country. He named three retired military officers and a judge as provincial administrators on 21 October. Ultimately, Musharraf assumed executive powers but did not take the office of Prime Minister. The Prime Minister's Secretariat (the official residence of the Prime Minister of Pakistan) was closed by the military police, and its staff were immediately dismissed by Musharraf.
There were no organised protests within the country against the coup, which was widely criticised by the international community. Consequently, Pakistan was suspended from the Commonwealth of Nations. Sharif was put under house arrest and later exiled to Saudi Arabia at his own request and under a contract.
Senior appointments in the inter-services were crucial for Musharraf to maintain the legitimacy of, and support for, his coup within the joint inter-services. Starting with the PAF, Musharraf pressured President Tarar to promote the most junior air marshal to four-star rank, particularly someone with whom Musharraf had experience of working in inter-services operations. Once Air Chief Marshal Pervez Kureshi retired, the most junior air marshal, Muschaf Mir (who had worked with Musharraf in 1996 to assist the ISI in Taliban matters), was promoted to four-star rank and elevated to Chief of Air Staff. Two equally important appointments were made by Musharraf in the Navy. Although Admiral Aziz Mirza, a lifelong friend of Musharraf (the two shared a dorm in the 1960s and graduated together from the academy), had been appointed by Prime Minister Nawaz Sharif, Mirza remained extremely supportive of Musharraf's coup; the two had also been close since 1971, when both participated in a joint operation against the Indian Army. After Mirza's retirement, Musharraf appointed Admiral Shahid Karimullah, with whom he had trained in special forces schools during the 1960s, to four-star rank and Chief of Naval Staff.
Musharraf's first foreign visit was to Saudi Arabia on 26 October, where he met with King Fahd. After meeting senior Saudi royals, he went to Medina the next day and performed Umrah in Mecca. On 28 October, he went to the United Arab Emirates before returning home.
By the end of October, Musharraf had appointed many technocrats and bureaucrats to his Cabinet, including former Citibank executive Shaukat Aziz as Finance Minister and Abdul Sattar as Foreign Minister. In early November, he released details of his assets to the public.
In late December 1999, Musharraf dealt with his first international crisis when India accused Pakistan of involvement in the Indian Airlines Flight 814 hijacking. Though United States President Bill Clinton pressured Musharraf to ban Harkat-ul-Mujahideen, the group alleged to be behind the hijacking, Pakistani officials refused because of fears of reprisal from political parties such as Jamaat-e-Islami.
In March 2000, Musharraf banned political rallies. In a television interview given in 2001, Musharraf spoke openly about the negative role a few high-ranking officers of the Pakistan Armed Forces had played in state affairs. Musharraf labelled many of his senior professors at the NDU as "pseudo-intellectuals", including its notable professors General Aslam Beg and Jehangir Karamat, under whom Musharraf had studied and served.
The Military Police held former Prime Minister Sharif under house arrest at a government guesthouse and opened his Lahore home to the public in late October 1999. He was formally indicted in November on charges of hijacking, kidnapping, attempted murder, and treason for preventing Musharraf's flight from landing at Karachi airport on the day of the coup. His trial began in early March 2000 in an anti-terrorism court, a forum designed for speedy trials. He testified that Musharraf had begun preparing for a coup after the Kargil conflict. Sharif was placed in Adiala Jail, infamous for hosting Zulfikar Ali Bhutto's trial, and his leading defence lawyer, Iqbal Raad, was shot dead in Karachi in mid-March. Sharif's defence team blamed the military for intentionally providing their lawyers with inadequate protection, and the court proceedings were widely described as a show trial. Sources from Pakistan claimed that Musharraf and his military government's officers were determined to deal harshly with Sharif, intending to send him to the gallows to meet a fate similar to that of Zulfikar Ali Bhutto in 1979. Saudi Arabia and the United States pressured Musharraf to exile Sharif once it was confirmed that the court was about to deliver its verdict on the treason charges and would sentence Sharif to death. Sharif signed an agreement with Musharraf and his military government, and his family was exiled to Saudi Arabia in December 2000.
Shortly after his takeover, Musharraf issued Oath of Judges Order No. 2000, which required judges to take a fresh oath of office. On 12 May 2000, the Supreme Court asked Musharraf to hold national elections by 12 October 2002. After President Rafiq Tarar's resignation, Musharraf formally appointed himself President on 20 June 2001. In August 2002, he issued the Legal Framework Order No. 2002, which added numerous amendments to the Constitution.
Musharraf called for nationwide political elections after accepting the ruling of the Supreme Court of Pakistan. He was the first military president to accept the Court's rulings and to hold free and fair elections, in 2002, as part of his vision of returning democratic rule to the country. In October 2002, Pakistan held general elections, which the pro-Musharraf PML-Q won by a wide margin, although it failed to gain an absolute majority. The PML-Q formed a government in coalition with the far-right religious alliance, the MMA, and the liberal MQM; the coalition legitimised Musharraf's rule.
After the elections, the PML-Q nominated Zafarullah Khan Jamali for the office of prime minister, which Musharraf approved. After the first session of Parliament, Musharraf voluntarily transferred the powers of chief executive to Prime Minister Jamali. Musharraf succeeded in passing the Seventeenth Amendment, which granted him the power to dissolve Parliament, subject to the approval of the Supreme Court. Within two years, Jamali proved an ineffective prime minister: he implemented his policies forcefully and caused problems with the business-class elites. Musharraf accepted Jamali's resignation and asked his close colleague Chaudhry Shujaat Hussain to appoint a new prime minister in his place. Hussain nominated Finance Minister Shaukat Aziz, whose performance as finance minister since 1999 had been impressive. Musharraf regarded Aziz as his right hand and the preferable choice for the office of Prime Minister. With Aziz appointed Prime Minister, Musharraf transferred all executive powers to him, as he trusted Aziz. Aziz proved extremely capable in running the government; under his leadership economic growth peaked, which further stabilised Musharraf's presidency. Aziz also swiftly and quietly neutralised the elements seeking to undermine Musharraf, which deepened Musharraf's trust in him. Between 2004 and 2007, Aziz approved many projects that did not require Musharraf's permission.
In 2010, the constitutional changes carried out by Musharraf and the policies of Aziz were reverted by the 18th Amendment, which returned the country to its earlier constitutional position and restored the powers of the Prime Minister.
The presidency of Pervez Musharraf helped bring liberal forces to the national level and into prominence for the first time in the history of Pakistan. He granted a national amnesty to the political workers of liberal parties such as the Muttahida Qaumi Movement and the Pakistan Muslim League (Q), and supported the MQM in becoming a central player in the government. Musharraf discarded the cultural policies of the previous Prime Minister, Nawaz Sharif, and quickly adopted Benazir Bhutto's cultural policies, while banning Indian channels in the country.
His cultural policies liberalized Pakistan's media, and he issued many television licenses to the private sector to open television centers and media houses. Television dramas, the film industry, theatre, music, and literary activities were personally encouraged by Pervez Musharraf. Under his policies, rock music bands gained a following in the country and many concerts were held each week. Film, theatre, rock and folk music, and television programmes under his cultural policies were devoted to promoting the national spirit of the country. In 2001, Musharraf got on stage with the rock band Junoon and sang a national song with them.
On the political front, Musharraf faced fierce opposition from the ultraconservative alliance, the MMA, led by the clergyman Maulana Noorani. In Pakistan, Maulana Noorani was remembered as a mystic religious leader who had preached the spiritual aspects of Islam all over the world as part of the World Islamic Mission. Although the political deadlock posed by Maulana Noorani was neutralized after his death, Musharraf still had to face opposition from the ARD, led by Benazir Bhutto of the PPP.
Musharraf allied with the United States against the Taliban government in Afghanistan after the September 11 attacks.
A few months after the 11 September attacks, Musharraf gave a speech against extremism. He instituted prohibitions on foreign students' access to studying Islam within Pakistan, an effort that began as an outright ban but was later reduced to restrictions on obtaining visas. On 18 September 2005, Musharraf made a speech before a broad-based audience of Jewish leaders in New York City, sponsored by the American Jewish Congress's Council for World Jewry. He was widely criticised by Middle Eastern leaders, but met with some praise among Jewish leadership.
After the 2001 Gujarat earthquake, Musharraf expressed his sympathies to Indian Prime Minister Atal Bihari Vajpayee and sent a plane load of relief supplies to India.
In 2004, Musharraf began a series of talks with India to resolve the Kashmir dispute.
In 2006, King Abdullah of Saudi Arabia visited Pakistan for the first time as King. Musharraf honoured King Abdullah with the "Nishan-e-Pakistan". Musharraf received the King Abdul-Aziz Medallion in 2007.
From September 2001 until his resignation from the military in 2007, Musharraf's presidency was affected by scandals relating to nuclear weapons, which damaged his legitimacy both in the country and in the international community. In October 2001, Musharraf authorised a sting operation led by the FIA to arrest two physicists, Sultan Bashiruddin Mahmood and Chaudhry Abdul Majeed, because of their supposed connections with the Taliban after they secretly visited Taliban-controlled Afghanistan in 2000. The local Pakistani media widely circulated reports that "Mahmood had a meeting with Osama bin Laden where Bin Laden had shown interest in building a radiological weapon"; it was later discovered that neither scientist had any in-depth knowledge of the technology. In December 2001, Musharraf authorized security hearings and the two scientists were taken into custody by the JAG Branch; the hearings continued until early 2002.
Another scandal arose as a consequence of a disclosure by Pakistani nuclear physicist Abdul Qadeer Khan. On 27 February 2001, Musharraf spoke highly of Khan at a state dinner in Islamabad, and he personally approved Khan's appointment as Science Advisor to the Government. In 2004, Musharraf relieved Khan of his post and initially denied knowledge of the government's involvement in nuclear proliferation, despite Khan's claim that Musharraf was the "Big Boss" of the proliferation ring. Following this, Musharraf authorized a national security hearing, which continued until his resignation from the army in 2007. According to Zahid Malik, Musharraf and the military establishment of the time acted against Abdul Qadeer Khan in an attempt to prove Pakistan's loyalty to the United States and the Western world.
The investigations backfired on Musharraf and public opinion turned against him. The populist ARD movement, which included the major political parties such as the PML and the PPP, used the issue to bring down Musharraf's presidency.
The debriefing of Abdul Qadeer Khan severely damaged Musharraf's public image and his political prestige in the country. He faced bitter domestic criticism for attempting to vilify Khan, specifically from opposition leader Benazir Bhutto. In an interview with the "Daily Times", Bhutto maintained that Khan had been made a "scapegoat" in the nuclear proliferation scandal and said that she did not "believe that such a big scandal could have taken place under the nose of General Musharraf". Musharraf's long-standing ally, the MQM, published criticism of his handling of Abdul Qadeer Khan. The ARD movement and the political parties tapped into the public anger, staging mass demonstrations against Musharraf. The credibility of the United States was also badly damaged, and the US itself refrained from pressuring Musharraf to take further action against Khan. With Abdul Qadeer Khan still popular in the country, Musharraf could not withstand the political pressure and his presidency was further weakened. He quickly pardoned Abdul Qadeer Khan in exchange for cooperation, while issuing confinement orders that limited Khan's movement. He handed the case over to Prime Minister Aziz, who had been supportive of Khan and personally "thanked" him: "The services of Dr. Qadeer Khan are unforgettable for the country."
On 4 July 2008, in an interview, Abdul Qadeer Khan laid the blame on President Musharraf, and later on Benazir Bhutto, for transferring the technology, claiming that Musharraf was aware of all the deals and was the "Big Boss" behind them. Khan said that "Musharraf gave centrifuges to North Korea in a 2000 shipment supervised by the armed forces. The equipment was sent in a North Korean plane loaded under the supervision of Pakistan security officials." Nuclear weapons expert David Albright of the Institute for Science and International Security agreed that Khan's activities were government-sanctioned. After Musharraf's resignation, Abdul Qadeer Khan was released from house arrest by executive order of the Supreme Court of Pakistan. After Musharraf left the country, the new Chairman of the Joint Chiefs of Staff Committee, General Tariq Majid, terminated all further debriefings of Abdul Qadeer Khan. Few believed that Abdul Qadeer Khan had acted alone, and the affair risked gravely damaging the Armed Forces, which oversaw and controlled nuclear weapons development and of which Musharraf was Chairman of the Joint Chiefs of Staff until his resignation from military service on 28 November 2007.
When Musharraf came to power in 1999, he promised that corruption in the government bureaucracy would be cleaned up. However, some claimed that the level of corruption did not diminish during Musharraf's tenure.
In December 2003, Musharraf made a deal with the MMA, a six-member coalition of far-right Islamic parties, agreeing to leave the army by 31 December 2004. With that alliance's support, pro-Musharraf legislators were able to muster the two-thirds supermajority required to pass the Seventeenth Amendment, which retroactively legalised Musharraf's 1999 coup and many of his decrees. Musharraf later reneged on his agreement with the MMA, and pro-Musharraf legislators in the Parliament passed a bill allowing Musharraf to keep both offices.
On 1 January 2004, Musharraf won a confidence vote in the Electoral College of Pakistan, consisting of both houses of Parliament and the four provincial assemblies. He received 658 out of 1170 votes, a 56% majority, though many opposition and Islamic members of parliament walked out in protest. As a result of this vote, his term was extended to 2007.
Prime Minister Zafarullah Khan Jamali resigned on 26 June 2004, after losing the support of Musharraf's party, the PML(Q). His resignation was at least partially due to his public differences with the party chairman, Chaudhry Shujaat Hussain, and was rumoured to have happened at Musharraf's command. Jamali had been appointed with the support of Musharraf and the pro-Musharraf PML(Q). Most PML(Q) parliamentarians had formerly belonged to the Pakistan Muslim League party led by Sharif, and most cabinet ministers were formerly senior members of other parties, joining the PML(Q) after the elections upon being offered positions. Musharraf nominated Shaukat Aziz, the minister for finance and a former Citibank employee and head of Citibank Private Banking, as the new prime minister.
In 2005, the Bugti clan attacked a gas field in Balochistan after Dr. Shazia was raped at that location. Musharraf responded by sending 4,500 soldiers, supported by tanks and helicopters, to guard the gas field.
The National Assembly voted in favour of the "Women's Protection Bill" on 15 November 2006, and the Senate approved it on 23 November 2006. President General Pervez Musharraf signed the "Women's Protection Bill" into law on 1 December 2006. The bill placed rape laws under the penal code and reportedly did away with the harsh conditions that previously required victims to produce four male witnesses and exposed victims to prosecution for adultery if they were unable to prove the crime.
However, the Women's Protection Bill was heavily criticised by many for paying mere lip service and failing to address the actual problem at its roots: repealing the Hudood Ordinance. In this context, Musharraf was also criticised by women's and human rights activists for not following up his words with action. The Human Rights Commission of Pakistan (HRCP) said that "The so-called Women's Protection Bill is a farcical attempt at making Hudood Ordinances palatable", outlining the issues with the bill and its continued impact on women.
His government increased the number of seats reserved for women in the assemblies, in order to increase women's representation and make their presence more effective. The number of reserved seats in the National Assembly was increased from 20 to 60, and 128 seats were reserved for women in the provincial assemblies. This brought about increased participation by women between the 1988 and 2008 elections.
In March 2005, a couple of months after the rape of Dr. Shazia Khalid, a Pakistani physician working at a government gas plant in the remote Balochistan province, Musharraf was criticised for pronouncing the accused in the case, Captain Hammad, a fellow military man, innocent before the judicial inquiry was complete. Shazia alleged that she was forced by the government to leave the country.
In an interview given to "The Washington Post" in September 2005, Musharraf said that Pakistani women who had been the victims of rape treated rape as a "moneymaking concern" and were only interested in the publicity in order to make money and get a Canadian visa. He subsequently denied making these comments, but the "Post" made available an audio recording of the interview in which Musharraf could be heard making the quoted remarks. Musharraf also denied Mukhtaran Mai, a Pakistani rape victim, the right to travel abroad until pressured by the US State Department. The remarks sparked outrage and protests, both internationally and in Pakistan, from various groups including women's groups and activists. In a rally held close to the presidential palace and Pakistan's parliament, hundreds of women demonstrated, demanding that Musharraf apologise for the controversial remarks about female rape victims.
In 2000, Kamran Atif, an alleged member of Harkat-ul Mujahideen al-Alami, tried to assassinate Musharraf. Atif was sentenced to death in 2006 by an Anti-Terrorism Court. On 14 December 2003, Musharraf survived an assassination attempt when a powerful bomb went off minutes after his highly guarded convoy crossed a bridge in Rawalpindi; it was the third such attempt during his four-year rule. On 25 December 2003, two suicide bombers tried to assassinate Musharraf, but their car bombs failed to kill him; 16 others died instead. Musharraf escaped with only a cracked windshield on his car. Amjad Farooqi, the alleged mastermind behind these attempts, was killed by Pakistani forces in 2004 after an extensive manhunt.
On 6 July 2007, there was another assassination attempt, when an unknown group fired a 7.62 mm submachine gun at Musharraf's plane as it took off from a runway in Rawalpindi. Security forces also recovered two anti-aircraft guns, from which no shots had been fired. On 17 July 2007, Pakistani police detained 39 people in connection with the attempt. The suspects were held at an undisclosed location by a joint team of the Punjab Police, the Federal Investigation Agency, and other Pakistani intelligence agencies.
By August 2007, polls showed that 64 percent of Pakistanis did not want another Musharraf term. Controversies over the atomic scandals, the Lal Masjid incident, the unpopular War in North-West Pakistan, the suspension of Chief Justice Iftikhar Muhammad Chaudhry, and widely circulated criticism from rivals Benazir Bhutto and Nawaz Sharif had battered Musharraf's image in public and political circles. More importantly, with Shaukat Aziz departing the office of Prime Minister, Musharraf could not sustain his presidency any longer, and he fell from power within a matter of eight months, after popular mass movements called for his impeachment over the actions taken during his presidency.
On 9 March 2007, Musharraf suspended Chief Justice Iftikhar Muhammad Chaudhry and pressed corruption charges against him. He replaced him with Acting Chief Justice Javed Iqbal.
Musharraf's moves sparked protests among Pakistani lawyers. On 12 March 2007, lawyers started a campaign called Judicial Activism across Pakistan and began boycotting all court procedures in protest against the suspension. In Islamabad, as well as in other cities such as Lahore, Karachi, and Quetta, hundreds of lawyers dressed in black suits attended rallies condemning the suspension as unconstitutional. Slowly the expressions of support for the ousted Chief Justice gathered momentum, and by May protesters and opposition parties were holding huge rallies against Musharraf; his tenure as army chief was also challenged in the courts.
Lal Masjid had a religious school for women, the Jamia Hafsa madrassa, attached to the mosque; a male madrassa was only a few minutes' drive away.
In April 2007, the mosque administration began encouraging attacks on local video shops, alleging that they sold pornographic films, and on massage parlours alleged to be used as brothels. These attacks were often carried out by the mosque's female students. In July 2007, a confrontation occurred when government authorities decided to stop the student violence and sent police officers to arrest the individuals responsible and the madrassa administration.
This development led to a standoff between police forces and armed students. Mosque leaders and students refused to surrender and fired at police from inside the mosque building. Both sides suffered casualties.
On 27 July, Bhutto met with Musharraf for the first time, in the United Arab Emirates, to discuss her return to Pakistan. On 14 September 2007, Deputy Information Minister Tariq Azim stated that Bhutto would not be deported but would have to face the corruption charges against her, and he affirmed both Sharif's and Bhutto's right to return to Pakistan. On 17 September 2007, Bhutto accused Musharraf's allies of pushing Pakistan into crisis by their refusal to restore democracy and share power. Bhutto returned from eight years of exile on 18 October. Musharraf called for a three-day mourning period after Bhutto's assassination on 27 December 2007.
Sharif returned to Pakistan in September 2007, was immediately arrested and taken into custody at the airport, and was sent back to Saudi Arabia. Saudi intelligence chief Muqrin bin Abdul-Aziz Al Saud and Lebanese politician Saad Hariri had arrived separately in Islamabad on 8 September 2007, the former with a message from Saudi King Abdullah and the latter after a meeting with Nawaz Sharif in London; they met President General Pervez Musharraf for two and a half hours to discuss Nawaz Sharif's possible return. On arrival in Saudi Arabia, Nawaz Sharif was received by Prince Muqrin bin Abdul-Aziz, the Saudi intelligence chief, who had met Musharraf in Islamabad the previous day. That meeting had been followed by a rare press conference, at which he had warned that Sharif should not violate the terms of King Abdullah's agreement that he stay out of politics for 10 years.
On 2 October 2007, Musharraf appointed General Tariq Majid as Chairman of the Joint Chiefs of Staff Committee and approved General Ashfaq Kayani as vice chief of the army from 8 October. When Musharraf resigned from the military on 28 November 2007, Kayani became Chief of Army Staff.
In a March 2007 interview, Musharraf said that he intended to stay in office for another five years.
A nine-member panel of Supreme Court judges deliberated on six petitions (including one from Jamaat-e-Islami, Pakistan's largest Islamic group) seeking the disqualification of Musharraf as a presidential candidate. Bhutto stated that her party might join other opposition groups, including Sharif's.
On 28 September 2007, in a 6–3 vote, Judge Rana Bhagwandas's court removed obstacles to Musharraf's election bid.
On 3 November 2007 Musharraf declared emergency rule across Pakistan. He suspended the Constitution, imposed a state of emergency, and fired the Chief Justice of the Supreme Court again. In Islamabad, troops entered the Supreme Court building, arrested the judges and kept them detained in their homes. Independent and international television channels went off air. Public protests were mounted against Musharraf.
General elections were held on 18 February 2008, in which the Pakistan Peoples Party (PPP) polled the highest number of votes and won the most seats. On 23 March 2008, President Musharraf said an "era of democracy" had begun in Pakistan and that he had put the country "on the track of development and progress". On 22 March, the PPP had named former parliament speaker Syed Yousaf Raza Gillani as its candidate for the country's next prime minister, to lead a coalition government united against Musharraf.
On 7 August 2008, the Pakistan Peoples Party and the Pakistan Muslim League (N) agreed to force Musharraf to step down and to begin his impeachment. Asif Ali Zardari and Nawaz Sharif announced that they would send him a formal request, or joint charge sheet, asking him to step down, and would impeach him through the parliamentary process if he refused. Musharraf refused to step down. A charge sheet was drafted, to be presented to parliament. It covered Musharraf's first seizure of power in 1999, at the expense of Nawaz Sharif, the PML(N)'s leader, whom Musharraf imprisoned and exiled, and his second in November 2007, when he declared an emergency as a means to get re-elected president. The charge sheet also listed some of Musharraf's contributions to the "war on terror."
Musharraf delayed his departure for the Beijing Olympics by a day. On 11 August, the government summoned the national assembly.
On 18 August 2008, Musharraf announced his resignation. On the following day, he defended his nine-year rule in an hour-long televised speech. However, public opinion was largely against him by this time. A poll conducted a day after his resignation showed that 63% of Pakistanis welcomed Musharraf's decision to step down, while only 15% were unhappy with it. On 23 November 2008 he left for exile in London, where he arrived the following day.
After his resignation, Musharraf performed a pilgrimage to Mecca. He then went on a speaking and lecture tour through the Middle East, Europe, and the United States. Chicago-based Embark LLC was one of the international public-relations firms trying to land Musharraf as a highly paid keynote speaker. According to Embark president David B. Wheeler, the speaking fee for Musharraf would be $150,000–200,000 per day, plus a jet and other V.I.P. arrangements on the ground. In 2011, he also lectured at the Carnegie Endowment for International Peace on politics and racism, where he co-authored and published a paper with George Perkovich.
Musharraf launched his own political party, the All Pakistan Muslim League, in June 2010.
The PML-N has tried to get Pervez Musharraf to stand trial for treason under Article 6 in relation to the emergency of 3 November 2007. Prime Minister of Pakistan Yousaf Raza Gilani has said that a consensus resolution of the National Assembly is required for an Article 6 trial of Pervez Musharraf: "I have no love lost for Musharraf ... if parliament decides to try him, I will be with parliament. Article 6 cannot be applied to one individual ... those who supported him are today in my cabinet and some of them have also joined the PML-N ... the MMA, the MQM and the PML-Q supported him ... this is why I have said that it is not doable," said the Prime Minister while informally talking to editors and replying to questions from journalists at an Iftar dinner he had hosted for them. The Constitution of Pakistan provides for emergencies in Articles 232 and 236, and on 15 February 2008 the "interim" Pakistan Supreme Court validated the Proclamation of Emergency of 3 November 2007, the Provisional Constitution Order No. 1 of 2007, and the Oath of Office (Judges) Order, 2007. However, after the Supreme Court judges were restored to the bench, they ruled on 31 July 2009 that Musharraf had violated the constitution when he declared emergency rule in 2007.
Saudi Arabia exerted its influence to attempt to prevent treason charges, under Article 6 of the constitution, from being brought against Musharraf, citing existing agreements between the states, as well as pressuring Sharif directly. As it turned out, it was not Sharif's decision to make.
In a missing-person case, Abbottabad's district and sessions judge passed judgment asking the authorities to declare Pervez Musharraf a proclaimed offender. On 11 February 2011, the Anti-Terrorism Court issued an arrest warrant for Musharraf and charged him with conspiracy to murder Benazir Bhutto. On 8 March 2011, the Sindh High Court registered treason charges against him.
Regarding the Lahore attack on Sri Lankan players, Musharraf criticized the police commandos' inability to kill any of the gunmen, saying "If this was the elite force I would expect them to have shot down those people who attacked them, the reaction, their training should be on a level that if anyone shoots toward the company they are guarding, in less than three seconds they should shoot the man down."
Regarding the blasphemy laws, Musharraf said that Pakistan is sensitive to religious issues and that the blasphemy law should stay.
Since the start of 2011, news had circulated that Musharraf would return to Pakistan before the 2013 general election, and he vowed as much in several interviews. On "Piers Morgan Tonight", Musharraf announced his plan to return to Pakistan on 23 March 2012 in order to seek the presidency in 2013. The Taliban and Talal Bugti threatened to kill him should he return. On 3 April 2014, Musharraf escaped a fourth assassination attempt, which injured a woman, according to Pakistani news reports.
On 24 March 2013, after a four-year self-imposed exile, he returned to Pakistan. He landed at Jinnah International Airport, Karachi, via a chartered Emirates flight with Pakistani journalists and foreign news correspondents. Hundreds of his supporters and workers of APML greeted Musharraf upon his arrival at Karachi airport, and he delivered a short public speech.
On 16 April 2013, an electoral tribunal in Chitral declared Musharraf disqualified from candidacy there, effectively quashing his political ambitions (several other constituencies had previously rejected Musharraf's nominations). A spokesperson for Musharraf's party said the ruling was "biased" and they would appeal the decision.
While Musharraf had technically been on bail since his return to the country, on 18 April 2013 the Islamabad High Court ordered his arrest on charges relating to the 2007 arrests of judges. Musharraf escaped from court with the aid of his security personnel and went to his farmhouse mansion. The following day Musharraf was placed under house arrest, but was later transferred to police headquarters in Islamabad. Musharraf characterized his arrest as "politically motivated", and his legal team declared their intention to fight the charges in the Supreme Court. In addition to these charges, the Senate passed a resolution petitioning that Musharraf be charged with high treason in relation to the events of 2007.
On Friday 26 April 2013 the court ordered house arrest for Musharraf in connection with the death of Benazir Bhutto. On 20 May, a Pakistani court granted bail to Musharraf. On 12 June 2014 Sindh High Court allowed him to travel to seek medical attention abroad.
On 25 June 2013, Musharraf was named as prime suspect in two separate cases. The first case concerned subverting and suspending the constitution; the second was a Federal Investigation Agency probe into the conspiracy to assassinate Bhutto. Musharraf was indicted on 20 August 2013 for Bhutto's 2007 assassination. On 2 September 2013, a first information report (FIR) was registered against him for his role in the Lal Masjid operation in 2007. The FIR was lodged after the son of the slain hard-line cleric Abdul Rashid Ghazi (who was killed during the operation) asked authorities to bring charges against Musharraf.
On 18 March 2016, Musharraf's name was removed from the Exit Control List and he was allowed to travel abroad for medical treatment. He currently lives in Dubai in self-imposed exile. Musharraf vowed to return to Pakistan, but has not done so. It was first disclosed in October 2018 that Musharraf suffers from amyloidosis, a rare and serious illness for which he has undergone treatment in hospitals in London and Dubai; an official with Musharraf's political party said that Musharraf would return to Pakistan after he made a full recovery.
In 2017, Musharraf appeared as a political analyst on his weekly television show "Sab Se Pehle Pakistan with President Musharraf", hosted by BOL News.
On 31 August 2017, the anti-terrorism court in Rawalpindi declared him an "absconder" in Bhutto's murder case. The court also ordered that his property and bank account in Pakistan be seized.
On 17 December 2019, a special court declared him a traitor and sentenced him "in absentia" to death for abrogating and suspending the constitution in November 2007. The three-member panel of the special court which issued the order was spearheaded by Chief Justice of the Peshawar High Court Waqar Ahmed Seth. He is also the first Pakistani Army General to be sentenced to death. Analysts did not expect Musharraf to face the sentence given his illness and the fact that Dubai has no extradition treaty with Pakistan; the verdict was also viewed as largely symbolic given that Musharraf retains support within the current Pakistani government and military.
Musharraf challenged the verdict, and on 13 January 2020, the Lahore High Court annulled the death sentence against Musharraf, ruling that the special court that held the trial was unconstitutional. The unanimous verdict was delivered by a three-member bench of the Lahore High Court, consisting of Justice Sayyed Muhammad Mazahar Ali Akbar Naqvi, Justice Muhammad Ameer Bhatti, and Justice Chaudhry Masood Jahangir. The court ruled that the prosecution of Musharraf was politically motivated and that the crimes of high treason and subverting the Constitution were "a joint offense" that "cannot be undertaken by a single person."
Musharraf is the second son of his parents and has two brothers—Javed and Naved. Javed retired as a high-level official in Pakistan's civil service. Naved is an anesthesiologist who has lived in Chicago since completing his residency training at Loyola University Medical Center in 1979.
Musharraf married Sehba, who is from Karachi, on 28 December 1968. They have a daughter, Ayla, an architect married to film director Asim Raza, and a son, Bilal.
Musharraf published his autobiography, "In the Line of Fire: A Memoir", in 2006. | https://en.wikipedia.org/wiki?curid=24260 |
Pomerania
Pomerania (; German, Low German and North Germanic languages: "Pommern"; Kashubian: "Pòmòrskô") is a historical region on the southern shore of the Baltic Sea in Central Europe, split between Poland and Germany. The largest Pomeranian city is Gdańsk followed by Szczecin, both located in Poland.
Outside its urban areas, Pomerania is characterized by farmland, dotted with numerous lakes, forests, small picturesque towns and islands. The largest Pomeranian islands are Rügen, Usedom/Uznam and Wolin. The region has a rich and complicated political and demographic history, and was ruled by various countries, often simultaneously, including local dynasties, although over the centuries Polish and German influences remained the strongest. The region was heavily affected by numerous disastrous wars and border shifts since the Late Middle Ages, but also saw long periods of great prosperity, reflected in its rich architecture, mainly thanks to maritime trade. The easternmost sub-regions of Pomerania are alternatively known as Pomerelia and Kashubia, which are inhabited by ethnic Kashubians.
The region is particularly known for its Brick Gothic and resort architecture, the oddly-shaped Crooked Forest, the Pomeranian dog breed and one of the tallest lighthouses in the world.
Pomerania is the area along the Bay of Pomerania of the Baltic Sea between the rivers Recknitz and Trebel in the west and Vistula in the east. It formerly reached perhaps as far south as the Noteć river, but since the 13th century its southern boundary has been placed further north.
Most of the region is coastal lowland, being part of the Central European Plain, but its southern, hilly parts belong to the Baltic Ridge, a belt of terminal moraines formed during the Pleistocene. Within this ridge, a chain of moraine-dammed lakes constitutes the Pomeranian Lake District. The soil is generally rather poor, sometimes sandy or marshy.
The western coastline is jagged, with many peninsulas (such as Darß–Zingst) and islands (including Rügen, Usedom, and Wolin) enclosing numerous bays (Bodden) and lagoons (the biggest being the Lagoon of Szczecin).
The eastern coastline is smooth. Łebsko and several other lakes were formerly bays, but have been cut off from the sea. The easternmost coastline along the Gdańsk Bay (with the Bay of Puck) and Vistula Lagoon, has the Hel Peninsula and the Vistula peninsula jutting out into the Baltic.
The Pomeranian region has the following administrative divisions:
The bulk of Farther Pomerania is included within the modern West Pomeranian Voivodeship, but its easternmost parts (the Słupsk area) now constitute the northwest of Pomeranian Voivodeship. Farther Pomerania in turn comprises several other historical subregions, most notably the Principality of Cammin, the County of Naugard, the Lands of Schlawe and Stolp, and also the Lauenburg and Bütow Land (the last, however, is sometimes regarded as a part of Pomerelia or Kashubia).
Parts of Pomerania and surrounding regions have constituted a euroregion since 1995. The Pomerania euroregion comprises Hither Pomerania and Uckermark in Germany, West Pomerania in Poland, and Scania in Sweden.
"Pomerania" and its cognates in other languages are derived from Old Slavic "po", meaning "by/next to/along", and "morze", meaning "sea", thus "Pomerania" literally means "seacoast" or "land by the sea", referring to its proximity to the Baltic Sea.
Pomerania was first mentioned in an imperial document of 1046, referring to a "Zemuzil dux Bomeranorum" (Zemuzil, Duke of the Pomeranians). Pomerania is mentioned repeatedly in the chronicles of Adam of Bremen (c. 1070) and Gallus Anonymous (ca. 1113).
The term "West Pomerania" is ambiguous, since it may refer to either Hither Pomerania (in German usage and historical usage based on German terminology) or to combined Hither and Farther Pomerania or the West Pomeranian Voivodeship (in Polish usage).
The term "East Pomerania" may similarly carry different meanings, referring either to Farther Pomerania (in German usage and historical usage based on German terminology), or to Pomerelia or the Pomeranian Voivodeship (in Polish usage).
Settlement in the area called Pomerania for the last 1,000 years started by the end of the Vistula Glacial Stage, some 13,000 years ago. Archeological traces have been found of various cultures during the Stone and Bronze Age; of Baltic peoples, Germanic peoples and Veneti during the Iron Age; and, in the Dark Ages, of West Slavic tribes and Vikings. Starting in the 10th century, early Polish rulers subdued the region, successfully integrating the eastern part with Poland, while the western part fell under the suzerainty of Denmark and the Holy Roman Empire in the late 12th century. Gdańsk, established during the reign of Mieszko I of Poland, has since been Poland's main port (apart from periods when Poland lost control over the region).
In the 12th century, the Duchy of Pomerania (the western part), as a vassal state of Poland, became Christian under Saint Otto of Bamberg ("the Apostle of the Pomeranians"); at the same time Pomerelia (the eastern part) became part of the Diocese of Włocławek within Poland. From the late 12th and early 13th century, the Griffin Duchy of Pomerania stayed with the Holy Roman Empire and the Principality of Rugia with Denmark, while Pomerelia, under the rule of the Samborides, was a part of Poland. During its affiliation with the Holy Roman Empire, Pomerania shared borders with the West Slavic state of Oldenburg, as well as Poland and the expanding Margraviate of Brandenburg. In the early 14th century the Teutonic Knights invaded and annexed Pomerelia from Poland into their monastic state, which already included historical Prussia. As a result of Teutonic rule, in German terminology the name of Prussia was also extended to conquered Polish lands such as Gdańsk Pomerania, although these were inhabited not by Baltic Prussians but by Lechitic Poles. Meanwhile, the Ostsiedlung began to turn Slavic narrow Pomerania into an increasingly German-settled area; the remaining Wends and Polish people, often known as Kashubians, continued to settle within Pomerelia. In 1325 the line of the princes of Rügen died out, and the principality was inherited by the Griffins.
In 1466, with the Teutonic Order's defeat in the Thirteen Years' War, Pomerelia became again subject to the Polish Crown and formed the Pomeranian Voivodeship within the province of Royal Prussia. While the German population in the Duchy of Pomerania adopted the Protestant Reformation in 1534, the Polish (along with Kashubian) population remained with the Roman Catholic Church. The Thirty Years' War severely ravaged and depopulated narrow Pomerania; a few years later the same happened to Pomerelia ("the Deluge"). With the extinction of the Griffin house during the same period, the Duchy of Pomerania was divided between the Swedish Empire and Brandenburg-Prussia in 1648, while Pomerelia remained with the Polish Crown.
Prussia gained the southern parts of Swedish Pomerania in 1720, invaded and annexed Pomerelia from Poland in 1772 and 1793, and gained the remainder of Swedish Pomerania in 1815, after the Napoleonic Wars. The former Brandenburg-Prussian Pomerania and the former Swedish parts were reorganized into the Prussian Province of Pomerania, while Pomerelia was made part of the Province of West Prussia. With Prussia, both provinces joined the newly constituted German Empire in 1871. Under German rule the Polish minority suffered discrimination and oppressive measures aimed at eradicating its culture. Following the empire's defeat in World War I, Pomorze Gdańskie/Pomerelia was returned to the rebuilt Polish state as part of the so-called Polish Corridor, while German-majority Gdańsk/Danzig was transformed into the independent Free City of Danzig. Germany's Province of Pomerania was expanded in 1938 to include northern parts of the former Province of Posen–West Prussia, and in late 1939 the annexed Pomorze Gdańskie/Polish Corridor became part of the wartime Reichsgau Danzig-West Prussia. The Nazis deported the Pomeranian Jews to a reservation near Lublin. In Pomerelia, the Polish population suffered heavily under Nazi oppression; more than 40,000 died in executions, death camps, prisons and forced labour, primarily teachers, businessmen, priests, politicians, former army officers, and civil servants. Thousands of Poles and Kashubians were deported, their homes taken over by the German military and civil servants, as well as by Baltic Germans resettled there between 1940 and 1943.
After Nazi Germany's defeat in World War II, the German–Polish border was shifted west to the Oder–Neisse line, and all of Pomerania came under Soviet military control. The German citizens of the former eastern territories of Germany and Poles of German ethnicity from Pomerelia were expelled, and the area was resettled primarily with ethnic Poles (some themselves expellees from former eastern Poland), some Poles of Ukrainian ethnicity (resettled under Operation Vistula), and a few Polish Jews. Most of Hither or Western Pomerania ("Vorpommern") remained in Germany; at first about 500,000 fled and expelled Farther Pomeranians found refuge there, though many later moved on to other German regions and abroad. Today German Hither Pomerania forms the eastern part of the state of Mecklenburg-Vorpommern, while the Polish part is divided mainly between the West Pomeranian and Pomeranian voivodeships, with their capitals in Szczecin and Gdańsk. During the 1980s, the Solidarity and "Die Wende" ("the change") movements overthrew the Communist regimes implemented during the post-war era; since then, Pomerania has been democratically governed.
Pomeranian culture still lives on in Brazil, in a colony where the language is still spoken. The arrival of Pomeranian immigrants, together with Germans and Italians, helped form the state of Espírito Santo from the early 1930s, according to Gustavo Barreto. Their influence remains one of the cultural signatures of the area. The Brazilian city of Pomerode (in the state of Santa Catarina) was founded by Pomeranian Germans in 1861 and is considered the most typically German of all the German towns of southern Brazil.
Western Pomerania is inhabited by German Pomeranians. In the eastern parts, Poles have been the dominant ethnic group since the territorial changes of Poland after World War II and the resulting Polonization. Kashubians, descendants of the medieval West Slavic Pomeranians, are numerous in rural Pomerelia.
German Hither Pomerania had a population of about 470,000 in 2012 (districts of Vorpommern-Rügen and Vorpommern-Greifswald combined), while the Polish districts of the region had a population of about 520,000 in 2012 (cities of Szczecin, Świnoujście and Police County combined). Overall, about 1 million people live in the historical region of Hither Pomerania today, while the Szczecin metropolitan area reaches even further.
Cities in the historical region of Pomerania (with population figures for 2012):
Other cities in the Pomeranian and Kuyavian-Pomeranian voivodeships:
In the German part of Pomerania, Standard German and the East Low German Mecklenburgisch-Vorpommersch and Central Pomeranian dialects are spoken, though Standard German dominates. Polish is the dominant language in the Polish part; Kashubian dialects are also spoken by the Kashubians in Pomerelia.
East Pomeranian, the East Low German dialect of Farther Pomerania and western Pomerelia, and Low Prussian, the East Low German dialect of eastern Pomerelia, along with Standard German, were dominant in Pomerania east of the Oder–Neisse line before most of their speakers were expelled after World War II. Slovincian was spoken at the Farther Pomeranian–Pomerelian frontier, but is now extinct.
Kashubian and East Low German are also spoken by the descendants of émigrés, most notably in the Americas (e.g. Argentina, Brazil, Chile and Canada).
The Pomeranian State Museum in Greifswald, dedicated to the history of Pomerania, has a variety of archeological findings and artefacts from the different periods covered in this article. At least 50 museums in Poland cover the history of Pomerania, the most important of them being the National Museum in Gdańsk, the Central Pomerania Museum in Słupsk, the Darłowo Museum, the Koszalin Museum, and the National Museum in Szczecin.
Agriculture primarily consists of raising livestock, forestry, fishery, and the cultivation of cereals, sugar beets, and potatoes. Industrial food processing is increasingly relevant in the region. Key producing industries are shipyards, mechanical engineering facilities (e.g. renewable energy components), and sugar refineries, along with paper and wood fabricators. Service industries are today an important economic factor in Pomerania, most notably logistics, information technology, life science, biotechnology, health care, and other high-tech branches, often clustering around the research facilities of the Pomeranian universities.
Since the late 19th century, tourism has been an important sector of the economy, primarily in the numerous seaside resorts along the coast. | https://en.wikipedia.org/wiki?curid=24261 |
Progeny Linux Systems
Progeny Linux Systems was a company which provided Linux platform technology. Their Platform Services technology supported both Debian and RPM-based distributions for Linux platforms. Progeny Linux Systems was based in Indianapolis. Ian Murdock, the founder of Debian, was the founder and Chairman of the Board. Its CTO was John H. Hartman, and Bruce Byfield was marketing and communications director.
Progeny created an operating system called Progeny Componentized Linux.
Progeny eventually announced via a post to their mailing lists on 1 May 2007 that they were ceasing operations.
"Progeny Componentized Linux", usually called "Progeny Debian", is a defunct free operating system; when Progeny announced the end of its operations on 1 May 2007, it also shut down its website.
Progeny Debian was an alternative to Debian 3.1. It was based upon the Linux Standard Base (LSB) 3.0, adopting technology such as the Anaconda installer ported from Red Hat, the Advanced Packaging Tool, and Discover. Progeny Debian aimed to be a model for developing a component-based Linux. | https://en.wikipedia.org/wiki?curid=24264 |
Ping (networking utility)
Ping is a computer network administration software utility used to test the reachability of a host on an Internet Protocol (IP) network. It is available for virtually all operating systems that have networking capability, including most embedded network administration software.
Ping measures the round-trip time for messages sent from the originating host to a destination computer that are echoed back to the source. The name comes from active sonar terminology that sends a pulse of sound and listens for the echo to detect objects under water.
Ping operates by sending Internet Control Message Protocol (ICMP) echo request packets to the target host and waiting for an ICMP echo reply. The program reports errors, packet loss, and a statistical summary of the results, typically including the minimum, maximum, and mean round-trip times, and the standard deviation of the mean.
The command-line options of the ping utility and its output vary between the numerous implementations. Options may include the size of the payload, count of tests, limits for the number of network hops (TTL) that probes traverse, interval between the requests and time to wait for a response. Many systems provide a companion utility ping6, for testing on Internet Protocol version 6 (IPv6) networks, which implement ICMPv6.
The ping utility was written by Mike Muuss in December 1983 during his employment at the Ballistic Research Laboratory, now the US Army Research Laboratory. A remark by David Mills on using ICMP echo packets for IP network diagnosis and measurements prompted Muuss to create the utility to troubleshoot network problems. The author named it after the sound that sonar makes, since its methodology is analogous to sonar's echo location. The acronym Packet InterNet Groper for PING has been used for over 30 years, and although Muuss says that from his point of view PING was not intended as an acronym, he has acknowledged Mills’ expansion of the name. The first released version was public domain software; all subsequent versions have been licensed under the BSD license. Ping was first included in 4.3BSD. The FreeDOS version was developed by Erick Engelke and is licensed under the GPL. Tim Crawford developed the ReactOS version. It is licensed under the MIT License.
RFC 1122 prescribes that any host must process ICMP echo requests and issue echo replies in return.
The following is the output of running ping on Linux for sending five probes to the target host "www.example.com":
$ ping -c 5 www.example.com
PING www.example.com (93.184.216.34): 56 data bytes
64 bytes from 93.184.216.34: icmp_seq=0 ttl=56 time=11.632 ms
64 bytes from 93.184.216.34: icmp_seq=1 ttl=56 time=11.726 ms
64 bytes from 93.184.216.34: icmp_seq=2 ttl=56 time=10.683 ms
64 bytes from 93.184.216.34: icmp_seq=3 ttl=56 time=9.674 ms
64 bytes from 93.184.216.34: icmp_seq=4 ttl=56 time=11.127 ms
--- www.example.com ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 9.674/10.968/11.726/0.748 ms
The output lists each probe message and the results obtained. Finally it lists the statistics of the entire test. In this example, the shortest round trip time was 9.674 ms, the average was 10.968 ms, and the maximum value was 11.726 ms. The measurement had a standard deviation of 0.748 ms.
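The summary line can be reproduced from the per-probe round-trip times. A minimal sketch in Python, using the sample values from the run above (BSD-style ping reports the population standard deviation):

```python
import math

def ping_stats(rtts):
    """Return min/avg/max/stddev the way ping's summary line reports them."""
    mean = sum(rtts) / len(rtts)
    variance = sum((r - mean) ** 2 for r in rtts) / len(rtts)  # population variance
    return min(rtts), mean, max(rtts), math.sqrt(variance)

# round-trip times (ms) from the sample run above
rtts = [11.632, 11.726, 10.683, 9.674, 11.127]
lo, avg, hi, sd = ping_stats(rtts)
print(f"round-trip min/avg/max/stddev = {lo:.3f}/{avg:.3f}/{hi:.3f}/{sd:.3f} ms")
# → round-trip min/avg/max/stddev = 9.674/10.968/11.726/0.748 ms
```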
In cases of no response from the target host, most implementations display either nothing or periodically print notifications about timing out. Possible ping results indicating a problem include the following:
In case of error, the target host or an intermediate router sends back an ICMP error message, for example "host unreachable" or "TTL exceeded in transit". In addition, these messages include the first eight bytes of the original message (in this case, the header of the ICMP echo request, including the quench value), so the ping utility can match responses to originating queries.
Generic composition of an ICMP packet:
The "echo request" ("ping") is an ICMP/ICMP6 message.
The Identifier and Sequence Number can be used by the client to match the reply with the request that caused the reply. In practice, most Linux systems use a unique identifier for every ping process, and sequence number is an increasing number within that process. Windows uses a fixed identifier, which varies between Windows versions, and a sequence number that is only reset at boot time.
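As an illustration of how these fields fit together, here is a sketch in Python that assembles an ICMP echo request (type 8, code 0) with the RFC 1071 internet checksum. The identifier and sequence values are arbitrary, and a real ping would send the bytes over a raw socket, which is omitted here:

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum over 16-bit words."""
    if len(data) % 2:          # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:         # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(identifier: int, sequence: int, payload: bytes) -> bytes:
    # type=8 (echo request), code=0; checksum is computed with the field zeroed
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, identifier, sequence) + payload

pkt = build_echo_request(identifier=0x1234, sequence=1, payload=b"abcdefgh")
assert inet_checksum(pkt) == 0   # a valid ICMP message checksums to zero
```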
The "echo reply" is an ICMP message generated in response to an echo request; it is mandatory for all hosts, and must include the exact payload received in the request.
The payload of the packet is generally filled with ASCII characters, as the output of the tcpdump utility shows in the last 32 bytes of the following example (after the eight-byte ICMP header starting with ):
The payload may include a timestamp indicating the time of transmission and a sequence number, which are not found in this example. This allows ping to compute the round trip time in a stateless manner without needing to record the time of transmission of each packet.
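That stateless round-trip computation can be sketched as follows: the sender packs the transmit time into the front of the payload, and on reply subtracts it from the current clock. The 8-byte double and the filler bytes here are illustrative choices, not a mandated wire format:

```python
import struct

def make_payload(send_time: float) -> bytes:
    # first 8 bytes carry the transmit timestamp; the rest is filler
    return struct.pack("!d", send_time) + bytes(range(0x10, 0x28))

def rtt_from_reply(payload: bytes, recv_time: float) -> float:
    # the echo reply returns the request payload unchanged, so the
    # timestamp can be recovered without any per-packet state
    (send_time,) = struct.unpack("!d", payload[:8])
    return recv_time - send_time

payload = make_payload(1700000000.0)
rtt = rtt_from_reply(payload, 1700000000.0247)  # ≈ 0.0247 s, i.e. about 24.7 ms
```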
The payload may also include a "magic packet" for the Wake-on-LAN protocol, but the minimum payload in that case is longer than shown. An echo request typically receives no reply if the host is in hibernation, but the host still wakes up if its interface is configured to accept wakeup requests. If the host is already active and configured to reply to incoming ICMP echo requests, the returned reply should include the same payload. This can be used to detect that the remote host was effectively woken up, by repeating the request after a delay that allows the host to resume its network services. If the host was merely in a low-power active state, a single request wakes it just enough for its echo reply service, if enabled, to respond instantly; the host need not fully wake all devices and may return to low-power mode after a short delay. Such a configuration may be used to keep a host out of hibernation, with its much longer wake-up delay, after some time spent in the low-power active mode.
To conduct a denial-of-service attack, an attacker may send ping requests as fast as possible, possibly overwhelming the victim with ICMP echo requests. This technique is called a ping flood.
Ping requests to multiple addresses, ping sweeps, may be used to obtain a list of all hosts on a network. | https://en.wikipedia.org/wiki?curid=24265 |
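The address arithmetic behind such a sweep is straightforward. A sketch using Python's ipaddress module to enumerate the usable hosts of a subnet, each of which would then be probed with an echo request (the subnet shown is the TEST-NET-1 documentation range):

```python
import ipaddress

def sweep_targets(cidr: str) -> list[str]:
    # every usable host address in the subnet is a candidate sweep target
    return [str(host) for host in ipaddress.ip_network(cidr).hosts()]

targets = sweep_targets("192.0.2.0/29")
print(targets)  # 6 usable hosts: 192.0.2.1 through 192.0.2.6
```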
Profinite group
In mathematics, profinite groups are topological groups that are in a certain sense assembled from finite groups. They share many properties with their finite quotients: for example, both Lagrange's theorem and the Sylow theorems generalise well to profinite groups.
A non-compact generalization of a profinite group is a locally profinite group.
Profinite groups can be defined in either of two equivalent ways.
A profinite group is a topological group that is isomorphic to the inverse limit of an inverse system of discrete finite groups. In this context, an inverse system consists of a directed set $(I, \leq)$, a collection of finite groups $\{G_i\}_{i \in I}$, each having the discrete topology, and a collection of homomorphisms $\{f_{ij} : G_j \to G_i \mid i \leq j\}$ such that $f_{ii}$ is the identity on $G_i$ and the collection satisfies the composition property $f_{ik} = f_{ij} \circ f_{jk}$ whenever $i \leq j \leq k$. The inverse limit is the set:
$\varprojlim G_i = \{ (g_i)_{i \in I} \in \prod_{i \in I} G_i : f_{ij}(g_j) = g_i \text{ for all } i \leq j \}$
equipped with the relative product topology. In categorical terms, this is a special case of a cofiltered limit construction. One can also define the inverse limit in terms of a universal property.
A profinite group is a Hausdorff, compact, and totally disconnected topological group: that is, a topological group that is also a Stone space. Given this definition, it is possible to recover the first definition using the inverse limit $G \cong \varprojlim_{N} G/N$ where $N$ ranges through the open normal subgroups of $G$ ordered by (reverse) inclusion.
Given an arbitrary group $G$, there is a related profinite group $\widehat{G}$, the profinite completion of $G$. It is defined as the inverse limit of the groups $G/N$, where $N$ runs through the normal subgroups in $G$ of finite index (these normal subgroups are partially ordered by inclusion, which translates into an inverse system of natural homomorphisms between the quotients). There is a natural homomorphism $\eta : G \to \widehat{G}$, and the image of $G$ under this homomorphism is dense in $\widehat{G}$. The homomorphism $\eta$ is injective if and only if the group $G$ is residually finite (i.e., $\bigcap N = \{1\}$, where the intersection runs through all normal subgroups of finite index).
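A standard example (well known, though not stated in this article's text) is the profinite completion of the integers:

```latex
\widehat{\mathbb{Z}} \;=\; \varprojlim_{n \ge 1} \mathbb{Z}/n\mathbb{Z}
   \;\cong\; \prod_{p\ \mathrm{prime}} \mathbb{Z}_p
```

Here the quotients are ordered by divisibility of $n$, and the isomorphism with the product of the $p$-adic integers follows from the Chinese remainder theorem; since the finite-index subgroups $n\mathbb{Z}$ intersect in $\{0\}$, the integers are residually finite and embed densely in $\widehat{\mathbb{Z}}$.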
The homomorphism $\eta$ is characterized by the following universal property: given any profinite group $H$ and any group homomorphism $f : G \to H$, there exists a unique continuous group homomorphism $g : \widehat{G} \to H$ with $f = g \circ \eta$.
There is a notion of ind-finite group, which is the conceptual dual to profinite groups; i.e. a group "G" is ind-finite if it is the direct limit of an inductive system of finite groups. (In particular, it is an ind-group.) The usual terminology is different: a group "G" is called locally finite if every finitely-generated subgroup is finite. This is equivalent, in fact, to being 'ind-finite'.
By applying Pontryagin duality, one can see that abelian profinite groups are in duality with locally finite discrete abelian groups. The latter are just the abelian torsion groups.
A profinite group is projective if it has the lifting property for every extension. This is equivalent to saying that "G" is projective if for every surjective morphism from a profinite "H" → "G" there is a section "G" → "H".
Projectivity for a profinite group "G" is equivalent to either of the two properties:
Every projective profinite group can be realized as an absolute Galois group of a pseudo algebraically closed field. This result is due to Alexander Lubotzky and Lou van den Dries. | https://en.wikipedia.org/wiki?curid=24266 |
Paul Whitehouse
Paul Julian Whitehouse (born 17 May 1958) is a Welsh actor, writer and comedian. He was one of the main stars of the BBC sketch comedy series "The Fast Show", and has also starred with Harry Enfield in the shows "Harry & Paul" and "Harry Enfield & Chums". In a 2005 poll to find The Comedian's Comedian, he was in the top 50 comedy acts voted for by comedians and comedy insiders.
Whitehouse was born on 17 May 1958, in Stanleytown, Glamorgan. His father, Harry, worked for the National Coal Board and his mother, Anita, was a singer with the Welsh National Opera. The family moved to Enfield, Middlesex, when he was four years old, which led to his discovering his talent for mimicry:
Whitehouse attended the University of East Anglia from autumn 1976, where he made friends with Charlie Higson. The pair spent little of their first year studying, instead playing sitar and performing with their jazz fusion combo, the Right Hand Lovers, along with other university friends Duncan Beamont, Kevin Buckland and Dave Cummings.
Whitehouse dropped out and squatted in a council flat in Hackney, east London and occasionally worked as a plasterer. After Higson graduated in 1980, he moved in with Whitehouse, working by day as a decorator and performing at night and the weekends with his new punk-funk group "The Higsons."
The pair began working as tradesmen on a house shared by comedians Stephen Fry and Hugh Laurie, which inspired them to start writing comedy. They moved to an estate where in a pub they met Harry Enfield, a neighbour with a stage act, and after he gained a place on Channel 4's "Saturday Live", the pair were invited to write for him. Whitehouse created Enfield's character "Stavros" (a London-based Greek kebab shop owner), and then "Loadsamoney" (an archetypal Essex boy made good in Margaret Thatcher's 1980s); he also appeared as Enfield's sidekick Lance on "Saturday Live".
This success launched Whitehouse and Higson's careers, and they began to appear on shows such as "Vic Reeves' Big Night Out" and to work extensively for the BBC, with Whitehouse appearing on "A Bit of Fry and Laurie" as a man with a clinical need to have his bottom fondled, and "", then as a performer on shows such as "Harry Enfield's Television Programme", where he developed numerous characters including DJ Mike Smash of Smashie and Nicey, alongside Harry Enfield as Dave Nice.
While watching a preview tape of highlights from Enfield's programme, Whitehouse and Higson were inspired to create a rapid-fire delivery comedy series, which would evolve into "The Fast Show" (when shown in the United States on BBC America, the show was titled "Brilliant"). Whitehouse's characters included:
An online series of "The Fast Show" commissioned by Fosters led to six weekly episodes launched on 10 November 2011.
In 2001 and 2002, Whitehouse wrote and performed in two series of the BBC comedy drama "Happiness", in which he played a voice-over actor with a mid-life crisis.
Whitehouse wrote, produced and appeared with Chris Langham in the 2005 comedy drama "Help", also for the BBC. In this series he took 25 roles, all patients of Langham's psychotherapist (except one, who is Langham's psychotherapist's psychotherapist). The pair's collaboration resulted in Whitehouse taking the witness stand on 24 July 2007 in the trial of Langham, in regard to the charge of holding explicit images and videos of minors. Langham claimed he downloaded this material as research for a character in the second series of "Help", but Whitehouse's testimony only partially corroborated this explanation.
Whitehouse appeared in the BBC sketch series "Harry & Paul" (formerly "Ruddy Hell! It's Harry and Paul"), starring alongside Harry Enfield.
Whitehouse starred alongside Charlie Higson in the BBC2 comedy series "Bellamy's People", with the first episode broadcast on 21 January 2010. The comedy evolved from the BBC Radio 4 program "Down the Line". The show originally had the working title of "Bellamy's Kingdom".
In October 2014, Harry Enfield and Whitehouse returned to the characters of Frank and George in a sketch for Channel 4's testicular cancer awareness comedy series "The Feeling Nuts Comedy Night".
In 2015, his sitcom "Nurse", based on his Radio 4 series of the same name (see below), debuted on BBC2 on 10 March.
In August 2015, Whitehouse and Enfield presented "An Evening With Harry Enfield and Paul Whitehouse" in celebration of their 25-year partnership.
In June and July 2018 Whitehouse appeared with his long-time friend and fellow comedian Bob Mortimer in a six-part BBC2 comedy series. The two friends, who both suffer from heart conditions, share their thoughts and experiences while fishing at a variety of locations around the UK.
Whitehouse and Charlie Higson produced and appeared in a spoof phone-in show "Down the Line" on BBC Radio 4. The first series was broadcast May–June 2006. A second series was broadcast 16 January–20 February 2007, during which they won a Sony Radio Academy Award. A third series was broadcast in January 2008, a fourth in January 2011 and a fifth in May 2013. In February 2014, Radio 4 broadcast "Nurse", written by Whitehouse and David Cummings and starring Esther Coles in the title role, with Whitehouse playing a variety of characters, including Graham Downs who had previously appeared in "Down the Line".
He also starred alongside Eddie Large and Russ Abbot in episode 4 of "Horne & Corden". Comic Relief 2011 contained a new parody video of Newport (Ymerodraeth State of Mind) directed by MJ Delaney featuring Whitehouse and other Welsh celebrities lip-syncing to the song. It is available to download via iTunes.
Johnny Depp described Whitehouse as "the greatest actor of all time".
Whitehouse and John Sullivan's son, Jim Sullivan, have written "Only Fools and Horses The Musical", which launched on 9 February 2019 at the Theatre Royal Haymarket, London. Whitehouse stars as Grandad.
Whitehouse's main early influences were the sketches of Les Dennis and Dustin Gee and The Goodies. Tommy Cooper made him laugh, as did Morecambe and Wise and the television show "Dad's Army." He cites his modern influences as Harry Enfield, without whom, he says, he would not be doing what he does now, and the approach of Reeves and Mortimer, whom he considers "far and away the best comedians that we have had in this country for a long while."
Pawnbroker
A pawnbroker is an individual or business (pawnshop or pawn shop) that offers secured loans to people, with items of personal property used as collateral. The items having been "pawned" to the broker are themselves called "pledges" or "pawns", or simply the collateral. While many items can be pawned, pawnshops typically accept jewelry, musical instruments, home audio equipment, computers, video game systems, coins, gold, silver, televisions, cameras, power tools, firearms, and other relatively valuable items as collateral.
If an item is pawned for a loan (colloquially "hocked" or "popped"), within a certain contractual period of time the pawner may redeem it for the amount of the loan plus some agreed-upon amount for interest. The amount of time and rate of interest are governed by law and by the pawnbroker's policies. If the loan is not paid (or extended, if applicable) within the time period, the pawned item will be offered for sale to other customers by the pawnbroker. Unlike other lenders, the pawnbroker does not report the defaulted loan on the customer's credit report, since the pawnbroker has physical possession of the item and may recoup the loan value through outright sale of the item. The pawnbroker also sells items that have been sold outright to them by customers. Some pawnshops are willing to trade items in their shop for items brought to them by customers.
In the West, pawnbroking existed in ancient Greece and the Roman Empire, and most contemporary Western law on the subject derives from Roman jurisprudence. As the empire spread its culture, pawnbroking went with it. Likewise, in the East, the business model existed in China 1,500 years ago, where Buddhist monasteries operated pawnshops little different from today's, strictly regulated through the ages by Imperial or other authorities.
In spite of early Roman Catholic Church prohibitions against charging interest on loans, there is some evidence that the Franciscans were permitted to begin the practice as an aid to the poor. In 1338, Edward III pawned his jewels to raise money for his war with France. King Henry V did much the same in 1415. The Lombards were not a popular class, and Henry VII harried them a good deal. In 1603 an "Act against Brokers" was passed and remained on the statute-book until 1872. It was aimed at the many counterfeit brokers in London. This type of broker was evidently regarded as a fence.
Crusaders, predominantly in France, brokered their land holdings to monasteries and dioceses for funds to supply, outfit, and transport their armies to the Holy Land. Instead of outright repayment, the Church reaped a share of crop returns for a set number of seasons, which could additionally be re-exchanged in a type of equity.
A pawnbroker can also be a charity. In 1450, Barnaba Manassei, a Franciscan friar, began the Monte di Pietà movement in Perugia, Italy. It provided financial assistance in the form of no-interest loans secured with pawned items. Instead of interest, the Monte di Pietà urged borrowers to make donations to the Church. It spread through Italy, then to other parts of Europe. The first Monte de Piedad organization in Spain was founded in Madrid, and from there the idea was transferred to New Spain by Pedro Romero de Terreros, the Count of Santa Maria de Regla and Knight of Calatrava. The Nacional Monte de Piedad is a charitable institution and pawn shop whose main office is located just off the Zócalo, or main plaza of Mexico City. It was established between 1774 and 1777 by Pedro Romero de Terreros as part of a movement to provide interest-free or low-interest loans to the poor. It was recognized as a national charity in 1927 by the Mexican government. Today it is a fast-growing institution with over 152 branches all over Mexico and with plans to open a branch in every Mexican city.
The pawning process begins when a customer brings an item into a pawn shop. Common items pawned (or, in some instances, sold outright) by customers include jewelry, electronics, collectibles, musical instruments, tools, and (depending on local regulations) firearms. Gold, silver, and platinum are popular items, which are often purchased even in the form of broken jewelry of little value, since the metal can still be sold in bulk to a bullion dealer or smelter for the value by weight of the component metals. Similarly, jewelry that contains genuine gemstones, even if broken or missing pieces, has value.
The pawnbroker assumes the risk that an item might have been stolen. However, laws in many jurisdictions protect both the community and broker from "unknowingly" handling stolen goods (also known as "fencing"). These laws often require that the pawnbroker establish positive identification of the seller through photo identification (such as a driver's license or government-issued identity document), as well as a holding period placed on an item purchased by a pawnbroker (to allow time for local law enforcement authorities to track stolen items). In some jurisdictions, pawnshops must give a list of all newly pawned items and any associated serial number to police, so the police can determine if any of the items have been reported stolen. Many police departments advise burglary or robbery victims to visit local pawnshops to see if they can locate stolen items. Some pawnshops set up their own screening criteria to avoid buying stolen property.
The pawnbroker assesses an item for its condition and marketability by testing the item and examining it for flaws, scratches or other damage. Another aspect that affects marketability is the supply and demand for the item in the community or region. In some markets, the used goods market is so flooded with used stereos and car stereos, for example, that pawnshops will only accept the higher-quality brand names. Alternatively, a customer may offer to pawn an item that is difficult to sell, such as a surfboard in an inland region, or a pair of snowshoes in warm-winter regions. The pawnshop owner either turns down hard-to-sell items, or offers a low price. While some items never get outdated, such as hammers and hand saws, electronics and computer items quickly become obsolete and unsaleable. Pawnshop owners must learn about different makes and models of computers, software, and other electronic equipment, so they can value objects accurately.
To assess value of different items, pawnbrokers use guidebooks ("blue books"), catalogs, Internet search engines, and their own experience. Some pawnbrokers have trained in identification of gems, or employ a specialist to assess jewelry. One of the risks of accepting secondhand goods is that the item may be counterfeit. If the item is counterfeit, such as a fake Rolex watch, it may have only a fraction of the value of the genuine item. Once the pawnbroker determines the item is genuine and not likely stolen, and that it is marketable, the pawnbroker offers the customer an amount for it. The customer can either sell the item outright if (as in most cases) the pawnbroker is also a licensed secondhand dealer, or offer the item as collateral on a loan. Most pawnshops are willing to negotiate the amount of the loan with the client.
To determine the amount of the loan, the pawnshop owner needs to take into account several factors. A key factor is the predicted resale value of the item. This is often thought of in terms of a range, with the low point being the wholesale value of the used good, in the case that the pawnshop is unable to sell it to pawnshop customers, and they decide to sell it to a wholesale merchant of used goods. The higher point in the range is the retail sale price in the pawnshop. For example, a five-year-old laptop may have been bought by the customer for $1,000. However, as a used item in a pawnshop, it might only fetch $250 as a purchase price in the pawnshop, because the customers will be wary that it might be a "lemon" that the seller is getting rid of because it has some hard-to-detect problem, and because pawnshops do not typically offer a warranty with goods sold. Used electronics wholesalers will buy the laptop from the pawnshop owner for $100 to $150. The wholesaler pays a lower price than the retail value because they have the added cost of hiring electronics technicians who overhaul and repair the items so that they can be sold in used electronics stores.
The pawnshop owner also takes into account their knowledge of supply and demand for the item in question to determine if they think that they will end up selling the laptop for $100 to a wholesaler or $250 to a pawnshop customer. If the pawnshop owner believes that the local market for used laptops is saturated (overloaded with used laptops), they may fear that they will only get $100 for the laptop if they have to unload it to a wholesaler. With that figure in mind as the expected revenue, the pawnshop owner has to factor in the overhead costs of the store (rent, heat, electricity, phone connection, yellow pages advertisement, website costs, staff costs, insurance, alarm system, items lost when they are confiscated by police, etc.), and a profit for the business. As such, the customer who comes in with this laptop that they paid $1,000 for when it was new may be offered as little as $50 by the pawnshop owner, who is taking into account all of the risk and cost factors.
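The reasoning above, an expected resale value discounted heavily to cover overhead and profit, can be illustrated with a small sketch. The function name, parameters, and the 50% margin are illustrative assumptions chosen to match the worked example, not an industry formula:

```python
def estimate_loan_offer(wholesale_value, retail_value, market_saturated, margin=0.5):
    """Illustrative pawn-loan offer calculation.

    When the local market is saturated, the owner pessimistically assumes
    the item will have to be unloaded to a wholesaler; otherwise the
    in-store retail price is used. A margin is then kept back to cover
    overhead (rent, staff, insurance, etc.) and profit.
    """
    expected_resale = wholesale_value if market_saturated else retail_value
    return expected_resale * margin

# The $1,000 laptop from the example: $100 wholesale, $250 in-store retail.
# In a saturated market the owner's offer comes out to about $50.
offer = estimate_loan_offer(100, 250, market_saturated=True)
print(offer)  # 50.0
```

Under these assumptions, the seemingly low $50 offer on a $1,000 laptop follows directly from stacking the wholesale discount and the overhead-and-profit margin.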
In determining the amount of the loan, the pawnshop owner also assesses the likelihood that the customer will pay the interest for several weeks or months and then return to repay the loan and reclaim the item. Since the key to the pawnshop business model is earning interest on the loaned money, pawnshop owners want to accept items that the customer is likely to want to recover, after having paid interest for a period on the loan. If, in an extreme case, a pawnshop only accepted items that customers had no interest in ever reclaiming, it would not make any money from interest, and the store would in effect become a second-hand dealer. Determining if the customer is likely to return to reclaim an item is a subjective decision, and the pawnshop owner may take many factors into account. For example, if a young able-bodied man comes into the pawnshop to pawn an electric wheelchair (perhaps claiming it to be the possession of his late grandparent), the pawnshop owner may doubt that the item will be redeemed. On the other hand, if a middle-aged man pawns a top quality set of golf clubs, the pawnshop owner may assess it as more credible that he will return for the items. Some customers may attempt to persuade the pawnshop owner that the item in question is important to them ("that necklace belonged to my grandmother, so I will certainly return for it") as a means of obtaining a loan. Other customers return to the same store, repeatedly pawn the same items as a way of borrowing money, and return to pay the interest and recover the items before the end of the loan period; thus, the pawnbroker knows that redemption is likely and will, therefore, make the loan.
The saleability of the item and the amount that the customer wants for it are also factored into the pawnbroker's assessment; if a customer offers a very salable item at a low price, the pawnbroker may accept it even if it is unlikely that the customer will return, because the pawnshop can turn around a quick profit on the item. However, if a customer offers an extremely low price the pawnbroker may turn down the offer because this suggests that the item may either be counterfeit or stolen.
In some countries such as Sweden, there is legislation to prevent the pawnbroker from making unfair profits (usury due to financial distress or ignorance of the customer) at the expense of the customer by low valuations of their collaterals. It is stated that the pawnbroker may not keep the collateral but must sell them at public auction. Any excess after paying the loan, the interest and auction costs must be paid to the customer. If the item does not fetch a price that will cover these expenses the pawnbroker may keep the item and sell it through other channels. Despite this protection, the cost for the customer to borrow money this way will be high, and if he cannot redeem the collateral it would in many cases be better to sell the goods directly.
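The settlement rule described for Sweden can be expressed as a simple calculation. This is a hedged sketch of the arithmetic only (the function and parameter names are illustrative); actual statutory details vary:

```python
def auction_settlement(sale_price, loan, interest, auction_costs):
    """Surplus returned to the customer after a pawned item is sold
    at public auction: the pawnbroker first deducts the loan principal,
    the interest, and the auction costs. If the sale does not cover
    these expenses, nothing is owed to the customer, and the broker
    may keep the item and sell it through other channels.
    """
    surplus = sale_price - (loan + interest + auction_costs)
    return max(surplus, 0.0)

print(auction_settlement(1000, 600, 100, 50))  # 250.0 paid to the customer
print(auction_settlement(500, 600, 100, 50))   # 0.0 (sale did not cover costs)
```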
Pawnshops have to be careful to manage how many new items they accept as pawns: either too little inventory or too much is bad. A pawnshop might have too little inventory if, for example, it mostly buys jewels and gold that it resells or smelts—or perhaps the pawnshop owner quickly sells most items through specialty shops (e.g., musical instruments to music stores, stereos to used hi-fi audio stores, etc.). In this case, the pawnshop is less interesting to customers, because it is mostly empty.
On the other extreme, a pawnshop with a huge inventory has several disadvantages. If the store is crammed with used athletic gear, old stereos, and old tools, the store owner must spend time and money shelving and sorting items, displaying them on different stands or in glass cases, and monitoring customers to prevent shoplifting. If there are too many low-value, poor quality items, such as old toasters, scratched-up 20-year-old TVs, and worn-out sports gear piled into cardboard boxes, the store may begin to look more like a rummage sale or flea market. Small, high-value items such as iPod players or cell phones must be in locked glass display cases, which means the owner may need additional staff to unlock the cabinets for items customers want to examine. As a store fills with items, an owner must protect inventory from theft by hiring staff to supervise the different areas or install security cameras and alarms. Too much unsold inventory means that the store has not been able to realize value from these items to provide cash to lend.
The better option lies in the middle: a store with a moderate amount of good quality, brand-name items arranged neatly in the display windows attracts passersby, who are more likely to enter and shop. If items are attractively laid out in display cases and shelves, the pawnshop looks more professional and reputable. Once passersby start shopping in the store, they may be more inclined to pawn or sell their own items to the pawnshop. Some pawnshop owners prevent a cluttered look by storing overstocked items, or less attractive items such as snow tires, in a backroom or basement. Some pawnshop companies operate a chain of stores in a state or province. This way, they can balance inventory between stores. For example, they can move some of a rural store's surfeit of fishing gear to an urban store.
Some stores also slim down inventory by selling items to speciality retailers. A pawnshop in a low-income neighborhood that pays a customer $300 for a power amplifier with a used value of $2000 may find the unit hard to sell alongside much less expensive merchandise. They may sell the amplifier to a used audio equipment store whose customers expect higher-end equipment. Some pawnshops sell speciality items online, on eBay or other websites. A speciality item such as a high-end model railroad set may not sell in the store for its "blue book" value. On an online auction, it stands a good chance of bringing a good price.
Another growing trend in the industry is vehicle pawn, or auto pawning. This form of pawnbroking works like a traditional pawn loan, except that these stores accept only vehicles as security. Many stores also accept "title loans", in which the customer pawns the vehicle's ownership or "title" documents. The pawnbroker effectively holds title to the car while the customer continues to drive it, and the customer regains ownership once the loan is repaid.
While the main business activities of a pawnshop are lending money for interest based on valuable items that customers bring in, some pawnshops also undertake other business activities, such as selling brand-new retail items that are in demand in the neighborhood of the store. Depending on where a pawnshop is located, these other retail items may range from musical instruments to firearms. Some pawnbrokers also sell brand-new self-defense items such as pepper spray or stun guns.
Many pawnshops will also trade used items, as long as the transaction turns a profit for the pawn shop. In cases where the pawnshop buys items outright, the money is not a loan; it is a straight payment for the item. On sales, the pawnshop may offer layaway plans, subject to conditions (down payment, regular payments, and forfeiture of previously paid amounts if the item is not paid off).
Some pawnshops may keep a few unusual, high value items on display to capture the interests of passersby, such as a vintage Harley Davidson motorcycle; the owner is not typically expecting to sell these items. Other activities carried out by pawnshops are financial services including fee-based check cashing, payday loans, vehicle title or house title loans, and currency exchange services.
Upscale pawnshops began to appear in the early 20th century, often referred to as "loan offices", since the term "pawn shop" had acquired a very negative reputation by that point. Some of these so-called loan offices are even located on the upper floors of office buildings. The modern euphemism for the upscale pawn shop is the "high-end collateral lender", lending to upper-class, often white-collar individuals, including doctors, lawyers and bankers, as well as more colorful figures such as high-rolling gamblers. They are also interchangeably called "upscale pawnshops" and "high-end pawnshops" due to their acceptance of higher-value merchandise in exchange for short-term loans. These objects can include wine collections, jewelry, large diamonds, fine art, cars, and unique memorabilia. Loans are often sought to deal with business revenue shortfalls and other expensive fiscal issues. Upscale pawnshops have also been featured in reality television.
In the US, there are over 11,000 pawnbrokers and an industry revenue of $14.5 billion. The US industry serves 30 million customers.
The pawnbrokers' symbol is three spheres suspended from a bar. The three-sphere symbol is attributed to the Medici family of Florence, Italy, owing to its symbolic association with the Lombards; the name refers to the Italian region of Lombardy, where pawn shop banking originated under the name of Lombard banking. It has been conjectured that the golden spheres were originally three flat yellow effigies of bezants, or gold coins, laid heraldically upon a sable field, but that they were converted into spheres to better attract attention.
Most European towns called the pawn shop the "Lombard". The Lombards were a banking community in medieval London, England. According to legend, a Medici employed by Charlemagne slew a giant using three bags of rocks. The three-ball symbol became the family crest. Since the Medicis were so successful in the financial, banking, and moneylending industries, other families also adopted the symbol. Throughout the Middle Ages, coats of arms bore three balls, orbs, plates, discs, coins and more as symbols of monetary success.
Saint Nicholas is the patron saint of pawnbrokers. The symbol has also been attributed to the story of Nicholas giving a poor man's three daughters each a bag of gold so they could get married.
In Hong Kong the practice follows the Chinese tradition, and the counter of the shop is typically taller than the average person for security. A customer can only hold up his hand to offer belongings, and there is a wooden screen between the door and the counter for customers' privacy. The symbol of a pawn shop in Hong Kong is a bat holding a coin (Cantonese: "fūk syú diu gām chín"). The bat signifies fortune and the coin signifies benefits. In Japan, the usual symbol for a pawn shop is a circled number seven (7), because "shichi", the Japanese word for seven, sounds similar to the word for "pawn".
The majority of pawnbrokers in Malaysia, a multiracial country in which Malaysian Chinese make up about 25% of the population, are managed by Malaysian Chinese. In Malay, pawn is called "pajak gadai". A valid and licensed pawnshop in Malaysia must declare itself as a "pajak gadai", or pawn shop, in its company registration. It must also fulfill the requirements of the Ministry of Housing and Local Government, which state that the pawn counter must not be higher than 4 feet and must be bullet-proof, with stainless-steel counters and doors, strong rooms with automatic locks, safes, a fully computerized system, CCTV, an alarm, and pawnbroker insurance.
In the Philippines, pawnshops are generally privately owned businesses and are regulated by the Bangko Sentral ng Pilipinas (BSP). Pawnshops in the country traditionally have Spanish names beginning with "Agencia de Empeños" (lit. "Pawn agency"), contrary to "Casa de Empeños" in Spain and Latin America. Most pawnshops accept jewelry, vehicles or electronic valuables as collateral. They also offer various forms of other finance-related services such as remittance, bills payment and microfinancing. Therefore, they serve as financial one-stop shops, primarily in communities where alternatives such as banks are not available. Recently, they have also started conducting services online and through mobile applications, although this is still subject to regulation by the BSP.
In India, the Marwari Jain community pioneered the pawnbroking business, but today others are involved; the work is done by many agents called "saudagar". Instead of working from a shop, they go to needy people's homes and motivate them to become involved in the business. Pawn shops are often run as part of jewelry stores. Gold, silver, and diamonds are frequently accepted as collateral.
Pawnbroking is also a traditional trade in Thailand, where pawn shops are run both privately and by local governments.
In Sri Lanka, pawnbroking is a lucrative business engaged in by specialized pawnbrokers as well as commercial banks and other finance companies.
In Indonesia, there is a state-owned company called Pegadaian which provides a range of conventional and Sharia-compliant pawnbroking services across the archipelago. The company accepts high-value items such as gold, motor vehicles, and other expensive items as collateral. In addition to pawnbroking activities, the company provides a range of other services, such as a safe deposit box and gold trading services.
House of Romanov
The House of Romanov (also Romanoff; "Románovy") was the reigning imperial house of Russia from 1613 to 1917.
The Romanovs achieved prominence as "boyars" of the Grand Duchy of Moscow and later the Tsardom of Russia under the reigning Rurik dynasty, which became extinct upon the death of Tsar Feodor I in 1598. The resulting succession crisis led to the Time of Troubles, during which several pretenders and imposters (False Dmitris) fought for the crown amid the Polish–Muscovite War. On 21 February 1613, Michael Romanov was elected Tsar of Russia by the Zemsky Sobor, establishing the Romanovs as Russia's second reigning dynasty. Michael's grandson Peter I established the Russian Empire in 1721, transforming the country into a great power through a series of wars and reforms. The direct male line of the Romanovs ended when Elizabeth of Russia died in 1762, leading the House of Holstein-Gottorp, a cadet branch of the German House of Oldenburg that reigned in Denmark, to ascend the throne under Peter III. Officially known as the House of Romanov, descendants after Elizabeth are sometimes referred to as "Holstein-Gottorp-Romanov". The abdication of Tsar Nicholas II on 15 March 1917, as a result of the February Revolution, ended 304 years of Romanov rule and established the Russian Republic under the Russian Provisional Government in the lead-up to the Russian Civil War. In 1918, the Tsar and his family were executed by the Bolsheviks; 47 of the House of Romanov's 65 members survived and went into exile abroad.
In 1924, Grand Duke Kirill Vladimirovich, the senior surviving male-line descendant of Alexander II of Russia by primogeniture, claimed the headship of the defunct Imperial House of Russia. Since 1991, the succession to the former Russian throne has been in dispute, largely due to disagreements over the validity of dynasts' marriages, especially between the lines of Grand Duchess Maria Vladimirovna of Russia and Prince Nicholas Romanovich Romanov, succeeded by Prince Andrew Romanov.
Legally, it remains unclear whether any "ukase" ever abolished the surname of Michael Romanov (or of his subsequent male-line descendants) after his accession to the Russian throne in 1613, although by tradition members of reigning dynasties seldom use surnames, being known instead by dynastic titles ("Tsarevich Ivan Alexeevich", "Grand Duke Nikolai Nikolaevich", etc.). From , the monarchs of the Russian Empire claimed the throne as relatives of Grand Duchess Anna Petrovna of Russia (1708–1728), who had married Charles Frederick, Duke of Holstein-Gottorp. Thus they were no longer Romanovs by patrilineage, belonging instead to the Holstein-Gottorp cadet branch of the German House of Oldenburg that reigned in Denmark. The 1944 edition of the "Almanach de Gotha" records the name of Russia's ruling dynasty from the time of Peter III (reigned 1761–1762) as "Holstein-Gottorp-Romanov". However, the terms "Romanov" and "House of Romanov" often occurred in official references to the Russian imperial family. The coat-of-arms of the Romanov boyars was included in legislation on the imperial dynasty,
and in a 1913 jubilee, Russia officially celebrated the "300th Anniversary of the Romanovs' rule".
After the February Revolution of March 1917, a special decree of the Provisional Government of Russia granted all members of the imperial family the surname "Romanov". The only exceptions, the morganatic descendants of the Grand Duke Dmitri Pavlovich (1891–1942), took (in exile) the surname .
The Romanovs share their origin with two dozen other Russian noble families. Their earliest common ancestor is one Andrei Kobyla, attested around 1347 as a boyar in the service of Semyon I of Moscow. Later generations assigned to Kobyla an illustrious pedigree. An 18th-century genealogy claimed that he was the son of the Old Prussian prince Glanda Kambila, who came to Russia in the second half of the 13th century, fleeing the invading Germans. Indeed, one of the leaders of the Old Prussian rebellion of 1260–1274 against the Teutonic Order was named Glande. This legendary version of the Romanovs' origin is contested by another version of their descent from a boyar family from Novgorod.
His actual origin may have been less spectacular. Not only is "Kobyla" Russian for "mare"; some of his relatives also had nicknames derived from terms for horses and other domestic animals, suggesting descent from one of the royal equerries. One of Kobyla's sons, Feodor, a member of the boyar Duma of Dmitri Donskoi, was nicknamed Koshka ("cat"). His descendants took the surname Koshkin, then changed it to Zakharin; the family later split into two branches: Zakharin-Yakovlev and Zakharin-Yuriev. During the reign of Ivan the Terrible, the former family became known as Yakovlev (Alexander Herzen among them), whereas grandchildren of changed their name to "Romanov".
Feodor Nikitich Romanov was descended from the Rurik dynasty through the female line. His mother, Evdokiya Gorbataya-Shuyskaya, was a Rurikid princess from the Shuysky branch, daughter of Alexander Gorbatyi-Shuisky.
The family fortunes soared when Roman's daughter, Anastasia Zakharyina, married Ivan IV (the Terrible), the Rurikid Grand Prince of Moscow, on 3 (13) February 1547. Since her husband had assumed the title of tsar, which literally means "Caesar", on 16 January 1547, she was crowned the very first tsaritsa of Russia. Her mysterious death in 1560 changed Ivan's character for the worse. Suspecting the boyars of having poisoned his beloved, Tsar Ivan started a reign of terror against them. Among his children by Anastasia, the elder (Ivan) was murdered by the tsar in a quarrel; the younger Feodor, a pious but lethargic prince, inherited the throne upon his father's death in 1584.
Throughout Feodor's reign (1584–1598), the Tsar's brother-in-law, Boris Godunov, and his Romanov cousins contested the "de facto" rule of Russia. Upon the death of the childless Feodor, the 700-year-old line of Rurikids came to an end. After a long struggle, the party of Boris Godunov prevailed over the Romanovs, and the "Zemsky Sobor" elected Godunov as tsar in 1598. Godunov's revenge on the Romanovs was terrible: the whole family and its relations were deported to remote corners of the Russian North and the Urals, where most of them died of hunger or in chains. The family's leader, Feodor Nikitich Romanov, was exiled to the Antoniev Siysky Monastery and forced to take monastic vows under the name Filaret.
The Romanovs' fortunes again changed dramatically with the fall of the Godunov dynasty in June 1605. As a former leader of the anti-Godunov party and cousin of the last legitimate tsar, Filaret Romanov's recognition was sought by several impostors who attempted to claim the Rurikid legacy and throne during the Time of Troubles. False Dmitriy I made him a metropolitan, and False Dmitriy II raised him to the dignity of patriarch. Upon the expulsion of the Polish army from Moscow in 1612, the "Zemsky Sobor" offered the Russian crown to several Rurikid and Gediminian princes, but all declined the honour.
On being offered the Russian crown, Filaret's 16-year-old son Mikhail Romanov, then living at the Ipatiev Monastery of Kostroma, burst into tears of fear and despair. He was finally persuaded to accept the throne by his mother Kseniya Ivanovna Shestova, who blessed him with the holy image of Our Lady of St. Theodore. Feeling how insecure his throne was, Mikhail attempted to emphasize his ties with the last Rurikid tsars and sought advice from the "Zemsky Sobor" on every important issue. This strategy proved successful. The early Romanovs were generally accepted by the population as in-laws of Ivan the Terrible and viewed as innocent martyrs of Godunov's wrath.
Mikhail was succeeded by his only son Alexei, who steered the country quietly through numerous troubles. Upon Alexei's death, there was a period of dynastic struggle between his children by his first wife Maria Ilyinichna Miloslavskaya (Feodor III, Sofia Alexeyevna, Ivan V) and his son by his second wife Nataliya Kyrillovna Naryshkina, the future Peter the Great. Peter ruled from 1682 until his death in 1725. In numerous successful wars he expanded the Tsardom into a huge empire that became a major European power. He led a cultural revolution that replaced parts of the traditionalist and medieval social and political system with a modern, scientific, Europe-oriented, and rationalist one.
New dynastic struggles followed the death of Peter. His only son to survive into adulthood, Tsarevich Alexei, did not support Peter's modernization of Russia. He had previously been arrested and died in prison shortly thereafter. Near the end of his life, Peter managed to alter the succession tradition of male heirs, allowing him to choose his heir. Power then passed into the hands of his second wife, Empress Catherine, who ruled until her death in 1727. Peter II, the son of Tsarevich Alexei, took the throne but died in 1730, ending the Romanov male line. He was succeeded by Anna I, daughter of Peter the Great's half-brother and co-ruler, Ivan V. Before she died in 1740 the empress declared that her grandnephew, Ivan VI, should succeed her. This was an attempt to secure the line of her father, while excluding descendants of Peter the Great from inheriting the throne. Ivan VI was only a one-year-old infant at the time of his succession, and his parents, Grand Duchess Anna Leopoldovna, who acted as regent, and Duke Anthony Ulrich of Brunswick, were detested for their German counselors and relations. As a consequence, shortly after Empress Anna's death, Elizabeth Petrovna, a legitimized daughter of Peter I, managed to gain the favor of the populace and dethroned Ivan VI in a "coup d'état", supported by the Preobrazhensky Regiment and the ambassadors of France and Sweden. Ivan VI and his parents died in prison many years later.
The Holstein-Gottorps of Russia retained the Romanov surname, emphasizing their matrilineal descent from Peter the Great, through Anna Petrovna (Peter I's elder daughter by his second wife). In 1742, Empress Elizabeth of Russia brought Anna's son, her nephew Peter of Holstein-Gottorp, to St. Petersburg and proclaimed him her heir. In time, she married him off to a German princess, Sophia of Anhalt-Zerbst. In 1762, shortly after the death of Empress Elizabeth, Sophia, who had taken the Russian name Catherine upon her marriage, overthrew her unpopular husband, with the aid of her lover, Grigory Orlov. She reigned as Catherine the Great. Catherine's son, Paul I, who succeeded his mother in 1796, was particularly proud to be a great-grandson of Peter the Great, although his mother's memoirs arguably insinuate that Paul's natural father was, in fact, her lover Serge Saltykov, rather than her husband, Peter. Painfully aware of the hazards resulting from battles of succession, Paul decreed house laws for the Romanovs – the so-called Pauline laws, among the strictest in Europe – which established semi-Salic primogeniture as the rule of succession to the throne, requiring Orthodox faith for the monarch and dynasts, and for the consorts of the monarchs and their near heirs. Later, Alexander I, responding to the 1820 morganatic marriage of his brother and heir, added the requirement that consorts of all Russian dynasts in the male line had to be of equal birth (i.e., born to a royal or sovereign dynasty).
Paul I was murdered in his palace in Saint Petersburg in 1801. Alexander I succeeded him on the throne and later died without leaving a son. His brother, crowned Nicholas I, succeeded him on the throne. The succession was far from smooth, however, as hundreds of troops took the oath of allegiance to Nicholas's elder brother, Constantine Pavlovich who, unbeknownst to them, had renounced his claim to the throne in 1822, following his marriage. The confusion, combined with opposition to Nicholas' accession, led to the Decembrist revolt. Nicholas I fathered four sons, educating them for the prospect of ruling Russia and for military careers, from whom the last branches of the dynasty descended.
Alexander II, son of Nicholas I, became the next Russian emperor in 1855, in the midst of the Crimean War. While Alexander considered it his charge to maintain peace in Europe and Russia, he believed only a strong Russian military could keep the peace. By developing the army, giving some freedom to Finland, and freeing the serfs in 1861 he gained much popular support.
Despite his popularity, however, his family life began to unravel by the mid 1860s. In 1864, his eldest son, and heir, Tsarevich Nicholas, died suddenly. His wife, Empress Maria Alexandrovna, who suffered from tuberculosis, spent much of her time abroad. Alexander eventually turned to a mistress, Princess Catherine Dolgoruki. Immediately following the death of his wife in 1880 he contracted a morganatic marriage with Dolgoruki. His legitimization of their children, and rumors that he was contemplating crowning his new wife as empress, caused tension within the dynasty. In particular, the grand duchesses were scandalized at the prospect of deferring to a woman who had borne Alexander several children during his wife's lifetime. Before Princess Catherine could be elevated in rank, however, on 13 March 1881 Alexander was assassinated by a hand-made bomb hurled by Ignacy Hryniewiecki. Slavic patriotism, cultural revival, and Panslavist ideas grew in importance in the latter half of this century, evoking expectations of a more Russian than cosmopolitan dynasty. Several marriages were contracted with members of other reigning Slavic or Orthodox dynasties (Greece, Montenegro, Serbia). In the early 20th century two Romanov princesses were allowed to marry Russian high noblemen – whereas until the 1850s, practically all marriages had been with German princelings.
Alexander II was succeeded by his son Alexander III. This tsar, the second-to-last Romanov emperor, was responsible for conservative reforms in Russia. Not expected to inherit the throne, he was educated in matters of state only after the death of his older brother, Nicholas. Lack of diplomatic training may have influenced his politics as well as those of his son, Nicholas II. Alexander III was physically impressive, being not only tall (1.93 m or 6'4", according to some sources), but of large physique and considerable strength. His beard hearkened back to the likeness of tsars of old, contributing to an aura of brusque authority, awe-inspiring to some, alienating to others. Alexander, fearful of the fate which had befallen his father, strengthened autocratic rule in Russia. Some of the reforms the more liberal Alexander II had pushed through were reversed.
Alexander had not only inherited his dead brother's position as "Tsesarevich", but also his brother's Danish fiancée, Princess Dagmar. Taking the name Maria Fyodorovna upon her conversion to Orthodoxy, she was the daughter of King Christian IX and the sister of the future kings Frederik VIII of Denmark and George I of Greece, as well as of Britain's Queen Alexandra, consort of Edward VII. Despite contrasting natures and backgrounds the marriage was considered harmonious, producing six children and acquiring for Alexander the reputation of being the first tsar not known to take mistresses.
His eldest son, Nicholas, became emperor upon Alexander III's death due to kidney disease at age 49 in November 1894. Nicholas reputedly said, "I am not ready to be tsar..." Just a week after the funeral, Nicholas married his fiancée, Alix of Hesse-Darmstadt, a favorite grandchild of Queen Victoria of the United Kingdom. Though a kind-hearted man, he tended to leave intact his father's harsh policies. For her part the shy Alix, who took the name Alexandra Fyodorovna, became a devout convert to Orthodoxy as well as a devoted wife to Nicholas and mother to their five children, yet avoided many of the social duties traditional for Russia's tsarinas. Seen as distant and severe, she drew unfavorable comparisons with her popular mother-in-law, Maria Fyodorovna. When, in September 1915, Nicholas took command of the army at the front lines during World War I, Alexandra sought to influence him toward an authoritarian approach in government affairs even more than she had done during peacetime. His well-known devotion to her injured both his and the dynasty's reputation during World War I, due both to her German origin and her unique relationship with Rasputin, whose role in the life of her only son was not widely known. Alexandra was a carrier of the gene for haemophilia, inherited from her maternal grandmother, Queen Victoria. Her son, Alexei, the long-awaited heir to the throne, inherited the disease and suffered agonizing bouts of protracted bleeding, the pain of which was sometimes partially alleviated by Rasputin's ministrations. Nicholas and Alexandra also had four daughters, the Grand Duchesses Olga, Tatiana, Maria and Anastasia.
The six crowned representatives of the Holstein-Gottorp-Romanov line were: Paul (1796–1801), Alexander I (1801–1825), Nicholas I (1825–1855), Alexander II (1855–1881), Alexander III (1881–1894), and Nicholas II (1894–1917).
Constantine Pavlovich and Michael Alexandrovich, both morganatically married, are occasionally counted among Russia's emperors by historians who observe that the Russian monarchy did not legally permit interregnums. But neither was crowned and both actively declined the throne.
The February Revolution of 1917 resulted in the abdication of Nicholas II in favor of his brother Grand Duke Michael Alexandrovich. The latter declined to accept imperial authority save to delegate it to the Provisional Government pending a future democratic referendum, effectively terminating the Romanov dynasty's rule over Russia.
After the February Revolution, Nicholas II and his family were placed under house arrest in the Alexander Palace. While several members of the imperial family managed to stay on good terms with the Provisional Government, and were eventually able to leave Russia, Nicholas II and his family were sent into exile in the Siberian town of Tobolsk by Alexander Kerensky in August 1917. In the October Revolution of 1917 the Bolsheviks ousted the Provisional government. In April 1918 the Romanovs were moved to the Russian town of Yekaterinburg, in the Urals, where they were placed in the Ipatiev House.
There have been numerous post-Revolution reports of Romanov survivors and unsubstantiated claims by individuals to be members of the deposed Tsar Nicholas II's family, the best known of whom was Anna Anderson. Subsequent research has, however, confirmed that all of the Romanovs held prisoner inside the Ipatiev House in Ekaterinburg were killed. Descendants of Nicholas II's two sisters, Grand Duchess Xenia Alexandrovna of Russia and Grand Duchess Olga Alexandrovna of Russia, do survive, as do descendants of previous tsars.
Grand Duke Kirill Vladimirovich, a male-line grandson of Tsar Alexander II, claimed the headship of the deposed Imperial House of Russia, and assumed, as pretender, the title "Emperor and Autocrat of all the Russias" in 1924, when the evidence appeared conclusive that all Romanovs higher in the line of succession had been killed. Kirill was followed by his only son Vladimir Kirillovich. Vladimir's only child, Maria Vladimirovna (born 1953), claims to have succeeded her father. The only son of her marriage with Prince Franz Wilhelm of Prussia, George Mikhailovich, is her heir apparent. The Romanov Family Association (RFA), formed in 1979 as a private organization of most of the male-line descendants of Emperor Paul I of Russia (other than Vladimir Kirillovich, Maria Vladimirovna and her son), acknowledges no pretender's dynastic claims to the throne and is officially committed to support only that form of government chosen by the Russian nation. However, the RFA's former president, Nicholas Romanovich, along with his brother Dimitri and some other family members, repudiated the transfer of the dynasty's legacy to the female line, contending that his own claim was as valid as that of Maria Vladimirovna or her son. A great-grandson of Kirill's who is not a male-line Romanov, Prince Karl Emich of Leiningen, also claims to be the rightful representative of the Romanov Imperial heritage and has become the founder of the Romanov Empire.
On the night of 17 July 1918, Bolshevik authorities acting on Yakov Sverdlov's orders in Moscow and led locally by Filip Goloschekin and Yakov Yurovsky, shot Nicholas II, his immediate family and four servants in the Ipatiev House's cellar.
The family was roused from sleep around 1:30 a.m. and told that they were being moved to a newer, safer location. They dressed quickly but informally. They were then led from the house where they had been staying and taken across a courtyard and down some stairs, then through a number of corridors and small dark rooms, few of which were lit. They reached a room at the end of one particular corridor that had a single electric light burning dimly. They asked for and were brought two chairs for the youngest children to sit on. The family members were then left alone for several minutes. Suddenly, a group of armed men led by Yurovsky entered the room. Yurovsky read an announcement from the local Duma explaining that they must all be killed immediately. Nicholas was utterly perplexed, and asked Yurovsky, "What? What?" Yurovsky eventually responded by saying, "This!" and shot Nicholas in the chest.
Initially the gunmen shot at Nicholas, who immediately fell dead from multiple bullet wounds. Then the dark room filled with smoke and dust from the spray of bullets, and the gunmen shot blindly, often hitting the ceiling and walls, creating yet more dust. Alexandra was soon shot in the head by military commissar Petar Ermakov, and killed, and some of the gunmen themselves became injured. It was not until after the room had been cleared of smoke that the shooters re-entered to find the remaining Imperial family still alive and uninjured. Maria tried to escape through the doors at the rear of the room, which led to a storage area, but the doors were nailed shut. The noise as she rattled the doors attracted the attention of Ermakov. Some of the family were shot in the head, but several of the others, including the young and frail Tsarevich, did not die from multiple close-range bullet wounds or bayonet stabs. Finally, each was shot in the head. Even so, two of the girls were still alive 10 minutes later, and had to be bludgeoned with the butt of a rifle to finally be killed. Later it was discovered that the bullets and bayonet stabs had been partially blocked by diamonds that had been sewn into the children's clothing. The bodies of the Romanovs were then hidden and moved several times before being interred in an unmarked pit, where they remained until the summer of 1979, when amateur enthusiasts disinterred and re-buried some of them, and then decided to conceal the find until the fall of communism. In 1991 the grave site was excavated; the bodies were later given a state funeral under the nascent democracy of post-Soviet Russia, and several years later DNA and other forensic evidence was used by Russian and international scientists to make positive identifications.
The Ipatiev House has the same name as the Ipatiev Monastery in Kostroma, where Mikhail Romanov had been offered the Russian Crown in 1613. The large memorial church "on the blood" has been built on the spot where the Ipatiev House once stood.
Nicholas II and his family were proclaimed passion-bearers by the Russian Orthodox Church in 2000. In orthodoxy, a passion-bearer is a saint who was not killed "because" of his faith, like a martyr; but who died "in" faith at the hand of murderers.
In July 1991, the crushed bodies of Nicholas II and his wife, along with three of their five children and four of their servants, were exhumed (although some questioned the authenticity of these bones despite DNA testing). Because two bodies were not present, many people believed that two Romanov children escaped the killings. There was much debate as to which two children's bodies were missing. A Russian scientist made photographic superimpositions and determined that Maria and Alexei were not accounted for. Later, an American scientist concluded from dental, vertebral, and other remnants that it was Anastasia and Alexei who were missing. Much mystery has always surrounded Anastasia's fate. Several films have been produced suggesting that she lived on. This has since been disproved with the discovery of the final Romanov children's remains and extensive DNA testing, which connected those remains to the DNA of Nicholas II, his wife, and the other three children.
After the bodies were exhumed in July 1991, they remained in laboratories until 1998, while there was a debate as to whether they should be reburied in Yekaterinburg or St. Petersburg. A commission eventually chose St. Petersburg. The remains were transferred with full military honor guard and accompanied by members of the Romanov family from Yekaterinburg to St. Petersburg. In St. Petersburg the remains of the imperial family were moved by a formal military honor guard cortege from the airport to the Sts. Peter and Paul Fortress where they (along with several loyal servants who were killed with them) were interred in a special chapel in the Peter and Paul Cathedral near the tombs of their ancestors. President Boris Yeltsin attended the interment service on behalf of the Russian people.
In mid-2007, a Russian archaeologist announced a discovery by one of his workers: an excavation had uncovered remains in two pits which formed a "T".
The area where the remains were found was near the old Koptyaki Road, under what appeared to be double bonfire sites a short distance from the mass grave in Pigs Meadow near Yekaterinburg. The general directions were described in Yurovsky's memoirs, owned by his son, although no one is sure who wrote the notes on the page. The archaeologists said the bones are from a boy who was roughly between the ages of 10 and 13 years at the time of his death and of a young woman who was roughly between the ages of 18 and 23 years old. Anastasia was 17 years, 1 month old at the time of the murder, while Maria was 19 years, 1 month old. Alexei would have been 14 in two weeks' time. Alexei's elder sisters Olga and Tatiana were 22 and 21 years old at the time of the murder respectively. The bones were found using metal detectors and metal rods as probes. Also, striped material was found that appeared to have been from a blue-and-white striped cloth; Alexei commonly wore a blue-and-white striped undershirt.
On 30 April 2008, Russian forensic scientists announced that DNA testing proved the remains belong to the Tsarevich Alexei and his sister Maria. DNA information made public in July 2008, obtained from Ekaterinburg and repeatedly subjected to independent testing by laboratories such as the University of Massachusetts Medical School in the US, reveals that the final two missing Romanov remains are indeed authentic and that the entire Romanov family housed in the Ipatiev House, Yekaterinburg, was executed in the early hours of 17 July 1918. In March 2009, the results of the DNA testing were published, confirming that the two bodies discovered in 2007 were those of Tsarevich Alexei and Maria.
Research on mitochondrial DNA (mtDNA) was conducted in the American AFDIL and in European GMI laboratories. Compared with the previous analyses of Alexandra Fyodorovna's mtDNA, positions 16519C, 524.1A and 524.2C were added. The mtDNA of Prince Philip, Duke of Edinburgh, a great-nephew of the last Tsarina, was used by forensic scientists to identify her body and those of her children.
On 18 July 1918, the day after the killing at Yekaterinburg of the tsar and his family, members of the extended Russian imperial family were brutally killed near Alapayevsk by Bolsheviks. They included: Grand Duke Sergei Mikhailovich of Russia, Prince Ioann Konstantinovich of Russia, Prince Konstantin Konstantinovich of Russia, Prince Igor Konstantinovich of Russia and Prince Vladimir Pavlovich Paley, Grand Duke Sergei's secretary Varvara Yakovleva, and Grand Duchess Elisabeth Fyodorovna, a granddaughter of Queen Victoria and elder sister of Tsarina Alexandra. Following the 1905 assassination of her husband, Grand Duke Sergei Alexandrovich, Elisabeth Fyodorovna had ceased living as a member of the Imperial family and took up life as a serving nun, but was nonetheless arrested and slated for death with the other Romanovs. They were thrown down a mine shaft into which explosives were then dropped, all being left to die there slowly.
The bodies were recovered from the mine in 1918 by the White Army, which arrived too late to rescue them. Their remains were placed in coffins and moved around Russia during struggles between the White Army and the opposing Red Army. By 1920 the coffins were interred in a former Russian mission in Beijing, now beneath a parking area. In 1981 Grand Duchess Elisabeth was canonized by the Russian Orthodox Church Outside of Russia, and in 1992 by the Moscow Patriarchate. In 2006 representatives of the Romanov family were making plans to re-inter the remains elsewhere. The town became a place of pilgrimage to the memory of Elisabeth Fyodorovna, whose remains were eventually re-interred in Jerusalem.
On 13 June 1918, Bolshevik revolutionary authorities killed Grand Duke Michael Alexandrovich of Russia and Nicholas Johnson (Michael's secretary) in Perm.
In January 1919 revolutionary authorities killed Grand Dukes Dmitry Konstantinovich, Nikolai Mikhailovich, Paul Alexandrovich and George Mikhailovich, who had been held in the prison of the Saint Peter and Paul Fortress in Petrograd.
In 1919, Maria Fyodorovna, widow of Alexander III and mother of Nicholas II, managed to escape Russia aboard a British warship, which her nephew, King George V of the United Kingdom, had sent, at the urging of his own mother, Queen Alexandra, Maria's elder sister, to rescue her. After a stay in England with Queen Alexandra, she returned to her native Denmark, first living at Amalienborg Palace with her nephew, King Christian X, and later at Villa Hvidøre. Upon her death in 1928 her coffin was placed in the crypt of Roskilde Cathedral, the burial site of members of the Danish Royal Family.
In 2006, the coffin with her remains was moved to the Sts. Peter and Paul Fortress, to be buried beside that of her husband. The transfer of her remains was accompanied by an elaborate ceremony at Saint Isaac's Cathedral officiated by Patriarch Alexis II. Descendants and relatives of the Dowager Empress attended, including her great-grandson Prince Michael Andreevich; Princess Catherine Ioannovna of Russia, the last living member of the Imperial Family born before the fall of the dynasty; and Princes Dmitri and Nicholas Romanov.
Among the other exiles who managed to leave Russia were Maria Fyodorovna's two daughters, the Grand Duchesses Xenia Alexandrovna and Olga Alexandrovna, with their husbands, Grand Duke Alexander Mikhailovich and Nikolai Kulikovsky, respectively, and their children, as well as the spouses of Xenia's elder two children and her granddaughter. Xenia remained in England following her mother's return to Denmark, although after their mother's death Olga moved to Canada with her husband, both sisters dying in 1960. Grand Duchess Maria Pavlovna, widow of Nicholas II's uncle, Grand Duke Vladimir, and her children the Grand Dukes Kirill, Boris and Andrei, and their sister Elena, also managed to flee Russia. Grand Duke Dmitri Pavlovich, a cousin of Nicholas II, had been exiled to the Caucasus in 1916 for his part in the murder of Grigori Rasputin, and managed to escape Russia. Grand Duke Nicholas Nikolaievich, who had commanded Russian troops during World War I prior to Nicholas II taking command, along with his brother, Grand Duke Peter, and their wives, Grand Duchesses Anastasia and Militza, who were sisters, and Peter's children, son-in-law, and granddaughter also fled the country.
Elizaveta Mavrikievna, widow of Konstantin Konstantinovich, escaped with her daughter Vera Konstantinovna and her son Georgii Konstantinovich, as well as her grandson Prince Vsevolod Ivanovich and her granddaughter Princess Catherine Ivanovna to Sweden. Her other daughter, Tatiana Konstantinovna, also escaped with her children Natasha and Teymuraz, as well as her uncle's aide-de-camp Alexander Korochenzov. They fled to Romania and then Switzerland. Gavriil Konstantinovich was imprisoned before fleeing to Paris.
Ioann Konstantinovich's wife, Elena Petrovna, was imprisoned in Alapayevsk and Perm, before escaping to Sweden and Nice, France.
Since 1991, the succession to the former Russian throne has been in dispute, largely due to disagreements over the validity of dynasts' marriages.
Grand Duchess Maria Vladimirovna of Russia claims to hold the title of empress in pretense with her only child, George Mikhailovich, as heir apparent.
Others have argued in support of the rights of the late Prince Nicholas Romanovich Romanov, whose brother Prince Dimitri Romanov was the next male heir of his branch, after whom the claim passed to Prince Andrew Romanov.
In 2014, a micronation calling itself the Imperial Throne, founded in 2011 by Monarchist Party leader Anton Bakov, announced Prince Karl Emich of Leiningen, a Romanov descendant, as its sovereign. In 2017, it renamed itself as "Romanov Empire".
The jewels and jewelry amassed by the Romanov family during their reign are commonly referred to as the "Russian Crown Jewels"; they include official state regalia as well as personal pieces worn by Romanov rulers and their family. After the Tsar was deposed and his family murdered, their jewels and jewelry became the property of the new Soviet government. A select number of pieces from the collection were sold at auction by Christie's in London in March 1927. The remaining collection is on view today in the Kremlin Armoury in Moscow.
On 28 August 2009, a Swedish public news outlet reported that a collection of over 60 jewel-covered cigarette cases and cufflinks owned by Grand Duchess Vladimir had been found in the archives of the Swedish Ministry for Foreign Affairs, and was returned to the descendants of Grand Duchess Vladimir. The jewelry was allegedly turned over to the Swedish embassy in St. Petersburg in November 1918 by Duchess Marie of Mecklenburg-Schwerin to keep it safe. The value of the jewelry has been estimated at 20 million Swedish krona (about 2.6 million US dollars).
The centerpiece is the coat of arms of Moscow, which depicts the iconic Saint George the Dragon-slayer, wearing a blue cape (cloak), attacking a golden serpent on a red field.
The wings of the double-headed eagle contain the coats of arms of the empire's constituent lands.
Robert Bloch
Robert Albert Bloch (April 5, 1917 – September 23, 1994) was an American fiction writer, primarily of crime, horror, fantasy and science fiction, from Milwaukee, Wisconsin. He is best known as the writer of "Psycho" (1959), the basis for the film of the same name by Alfred Hitchcock. His fondness for puns is evident in the titles of his story collections, such as "Tales in a Jugular Vein", "Such Stuff as Screams Are Made Of" and "Out of the Mouths of Graves".
Bloch wrote hundreds of short stories and over 30 novels. He was one of the youngest members of the Lovecraft Circle and began his professional writing career immediately after graduation, aged 17. He was a protégé of H. P. Lovecraft, who was the first to seriously encourage his talent. However, while Bloch started his career by emulating Lovecraft and his brand of "cosmic horror", he later specialized in crime and horror stories dealing with a more psychological approach.
Bloch was a contributor to pulp magazines such as "Weird Tales" in his early career, and was also a prolific screenwriter and a major contributor to science fiction fanzines and fandom in general.
He won the Hugo Award (for his story "That Hell-Bound Train"), the Bram Stoker Award, and the World Fantasy Award. He served a term as president of the Mystery Writers of America (1970) and was a member of that organization and of Science Fiction Writers of America, the Writers Guild of America, the Academy of Motion Picture Arts and Sciences and the Count Dracula Society. In 2008, The Library of America selected Bloch's essay "The Shambles of Ed Gein" (1962) for inclusion in its two-century retrospective of American true crime.
His favorites among his own novels were "The Kidnapper", "The Star Stalker", "Psycho", "Night-World," and "Strange Eons". His work has been extensively adapted into films, television productions, comics, and audiobooks.
Bloch was born in Chicago, the son of Raphael "Ray" Bloch (1884–1952), a bank cashier, and his wife Stella Loeb (1880–1944), a social worker, both of German Jewish descent. Bloch's family moved to Maywood, a Chicago suburb, when he was five; he lived there until he was ten. He attended the Methodist Church there, despite his parents' Jewish heritage, and attended Emerson Grammar School. In 1925, at eight years of age, living in Maywood, he attended (alone at night) a screening of Lon Chaney, Sr.'s film "The Phantom of the Opera" (1925). The scene of Chaney removing his mask terrified the young Bloch ("it scared the living hell out of me and I ran all the way home to enjoy the first of about two years of recurrent nightmares"). It also sparked his interest in horror. Bloch was a precocious child and found himself in fourth grade when he was eight. He also obtained a pass into the adult section of the Public Library, where he read omnivorously. Bloch considered himself a budding artist and worked in pencil sketching and watercolours, but myopia in adolescence seemed to effectively bar art as a career. He had passions for German-made lead toy soldiers and for silent cinema.
In 1929, Bloch's father Ray Bloch lost his bank job, and the family moved to Milwaukee, where Stella worked at the Milwaukee Jewish Settlement settlement house. Robert attended Washington, then Lincoln High School, where he met lifelong friend Harold Gauer. Gauer was editor of "The Quill", Lincoln's literary magazine, and accepted Bloch's first published short story, a horror story titled "The Thing" (the "thing" of the title was Death). Both Bloch and Gauer graduated from Lincoln in 1934 during the height of the Great Depression. Bloch was involved in the drama department at Lincoln and wrote and performed in school vaudeville skits.
During the 1930s, Bloch was an avid reader of the pulp magazine "Weird Tales", which he had discovered at the age of ten in 1927. While in the Chicago Northwestern Railroad depot with his parents and his aunt Lil, his aunt offered to buy him any magazine he wanted, and he picked "Weird Tales" (Aug 1927 issue) off the newsstand over her shocked protest. He began his reading of the magazine with the first instalment of Otis Adelbert Kline's "The Bride of Osiris", which dealt with a secret Egyptian city called Karneter located beneath Bloch's birth city of Chicago. The Depression came in the early 1930s. He later recalled, in accepting the Lifetime Achievement Award at the First World Fantasy Convention (1975), how "times were very hard. "Weird Tales" cost twenty-five cents in a day when most pulp magazines cost a dime. I remember that meant a lot to me." He went on to relate how he would get up very early on the last day of the month, with twenty-five cents saved from his monthly allowance of one dollar, and would run all the way to a combination tobacco/magazine store to buy the new "Weird Tales" issue, sometimes smuggling it home under his coat if the cover was particularly risqué. His parents were not impressed with Hugh Doak Rankin's sexy covers for the magazine, and when the Bloch family moved to Milwaukee young Bloch gradually abandoned his interest. But by the time he had entered high school, he had returned to reading "Weird Tales" during convalescence from flu.
H. P. Lovecraft, a frequent contributor to "Weird Tales", became one of his favorite writers. The first of Lovecraft's stories he had read was "Pickman's Model", in "Weird Tales" for October 1927. Bloch wrote: "In school I was forced to squirm my way through the works of Oliver Wendell Holmes, James Lowell and Henry Wadsworth Longfellow. In 'Pickman's Model', the ghouls ate all three. Now that, I decided, was poetic justice." As a teenager, Bloch wrote a fan letter to Lovecraft (1933), asking where he could find copies of earlier stories of Lovecraft's that Bloch had missed. Lovecraft lent them to him. Lovecraft also encouraged Bloch's early fiction-writing efforts, asking whether Bloch had written any weird work and, if so, whether he might see samples of it. Bloch took up Lovecraft's offer in late April 1933, sending him two short items, "The Gallows" and another work whose title is unknown.
Lovecraft also suggested Bloch write to other members of the Lovecraft Circle, including August Derleth, Robert H. Barlow, Clark Ashton Smith, Donald Wandrei, Frank Belknap Long, Henry S. Whitehead, E. Hoffman Price, Bernard Austin Dwyer and J. Vernon Shea. Bloch's first completed tales were "Lilies", "The Laughter of a Young Ghoul" and "The Black Lotus". Bloch submitted these to "Weird Tales"; editor Farnsworth Wright summarily rejected them all. However, Bloch successfully placed "Lilies" in the semi-professional magazine "Marvel Tales" (Winter 1934) and "Black Lotus" in "Unusual Stories" (1935). Bloch later commented, "I figured I'd better do something different or I'd end up as a florist."
Bloch graduated from high school in June 1934. He then wrote a story which promptly (six weeks later) sold to "Weird Tales." Bloch's first publication in "Weird Tales" was a letter criticising the Conan stories of Robert E. Howard. His first professional sales, at the age of 17 (July 1934), to "Weird Tales," were the short stories "The Feast in the Abbey" and "The Secret in the Tomb". "Feast ..." appeared first, in the January 1935 issue, which actually went on sale November 1, 1934; "The Secret in the Tomb" appeared in the May 1935 "Weird Tales".
Bloch's correspondence with Derleth led to a visit to Derleth's home in Sauk City, Wisconsin (the headquarters of Arkham House). Bloch was impressed by Derleth, who "fulfilled my expectations as a writer by wearing this purple velvet smoking jacket. That impressed me even more because Derleth didn't even smoke." Following this, and continued correspondence with Lovecraft, Bloch went to Chicago and met Farnsworth Wright, the then editor of "Weird Tales". He also met the first "Weird Tales" writer outside of Derleth he had encountered – Otto Binder.
Bloch's early stories were strongly influenced by Lovecraft. Indeed, a number of his stories were set in, and extended, the world of Lovecraft's Cthulhu Mythos. These include "The Dark Demon", in which the character Gordon is a figuration of Lovecraft, and which features Nyarlathotep; "The Faceless God" (features Nyarlathotep); "The Grinning Ghoul" (written after the manner of Lovecraft) and "The Unspeakable Betrothal" (vaguely attached to the Cthulhu Mythos). It was Bloch who invented, for example, the oft-cited Mythos texts "De Vermis Mysteriis" and "Cultes des Goules". Many other stories influenced by Lovecraft were later collected in Bloch's volume "Mysteries of the Worm" (now in its third, expanded edition). In 1935, Bloch wrote the tale "Satan's Servants", on which Lovecraft lent much advice, but none of the prose was by Lovecraft; this tale did not appear in print until 1949, in "Something About Cats and Other Pieces".
The young Bloch appears, thinly disguised, as the character Robert Blake in Lovecraft's story "The Haunter of the Dark" (1936), which is dedicated to Bloch. Bloch was the only individual to whom Lovecraft ever dedicated a story. In this story, Lovecraft kills off Robert Blake, the Bloch-based character, repaying a "courtesy" Bloch earlier paid Lovecraft with his 1935 tale "The Shambler from the Stars", in which the Lovecraft-inspired figure dies; the story goes so far as to use Bloch's then-current address (620 East Knapp Street) in Milwaukee. (Bloch even had a signed certificate from Lovecraft [and some of his creations] giving Bloch permission to kill Lovecraft off in a story.) Bloch later recalled "believe me, beyond all doubt, I don't know anyone else I'd rather be killed by." Bloch later wrote a third tale, "The Shadow From the Steeple", picking up where "The Haunter of the Dark" finished ("Weird Tales" Sept 1950).
Lovecraft's death in 1937 deeply affected Bloch, who was then aged only 20. He recalled "Part of me died with him, I guess, not only because he was not a god, he was mortal, that is true, but because he had so little recognition in his own lifetime. There were no novels or collections published, no great realization, even here in Providence, of what was lost." Elsewhere he wrote, "the news of his fate came to me as a shattering blow; all the more so because the world at large ignored his passing. Only my parents and a few correspondents seemed to sense my shock, and my feeling that a part of me had died with him."
After Lovecraft's death in 1937, Bloch continued writing for "Weird Tales", where he became one of its most popular authors. He also began contributing to other pulps, such as the science fiction magazine "Amazing Stories". Bloch broadened the scope of his fiction. His horror themes included voodoo ("Mother of Serpents"), the conte cruel ("The Mandarin's Canaries"), demonic possession ("Fiddler's Fee"), and black magic ("Return to the Sabbat"). Bloch visited Henry Kuttner in California in 1937. Bloch's first science fiction story, "The Secret of the Observatory", was published in "Amazing Stories" (August 1938).
In 1935 Bloch joined a writers' group, The Milwaukee Fictioneers, members of which included Stanley Weinbaum, Ralph Milne Farley and Raymond A. Palmer. Another member of the group was Gustav Marx, who offered Bloch a job writing copy in his advertising firm, also allowing Bloch to write stories in his spare time in the office. Bloch was close friends with C.L. Moore and her husband Henry Kuttner, who visited him in Milwaukee.
During the years of the Depression, Bloch appeared regularly in dramatic productions, writing and performing in his own sketches. Around 1936 he sold some gags to radio comedians Stoopnagle and Budd, and to Roy Atwell. Also in 1936, his tale "The Grinning Ghoul" was published in "Weird Tales" (June); "The Opener of the Way" appeared in "Weird Tales" (Oct); "Mother of Serpents" appeared in the December issue. The December issue also contained Lovecraft's tale "The Haunter of the Dark" in which he killed off young author "Robert Blake".
In 1937, following Lovecraft's death, "The Mannikin" appeared in "Weird Tales" for April, and "Return to the Sabbath" followed in July 1938. In a profile accompanying "The Secret of the Observatory" in "Amazing Stories" (Aug 1938), Bloch described himself as "tall, dark, unhandsome" with "all the charm and personality of a swamp adder". He noted that "I hate everything", but reserved particular dislike for "bean soup, red nail polish, house-cleaning, and optimists".
In 1939, Bloch was contacted by James Doolittle, who was managing the campaign for Mayor of Milwaukee of a little-known assistant city attorney named Carl Zeidler. He was asked to work on Zeidler's speechwriting, advertising, and photo ops, in collaboration with Harold Gauer. They created elaborate campaign shows; in Bloch's 1993 autobiography, "Once Around the Bloch", he gives an inside account of the campaign and the innovations he and Gauer came up with – for instance, the original releasing-balloons-from-the-ceiling shtick. He comments bitterly on how, after Zeidler's victory, they were ignored and not even paid their promised salaries, and he ends the account on a wryly philosophical note.
Also in 1939, two of Bloch's tales were published: "The Strange Flight of Richard Clayton" ("Amazing Stories," August) and "The Cloak" ("Unknown," March).
In October 1941, the tale "A Good Knight's Work" first appeared in "Unknown Worlds". Shortly thereafter, Bloch created the Damon Runyon-esque humorous series character Lefty Feep in the story "Time Wounds All Heels" in "Fantastic Adventures" (April 1942). Around the same time, he began work as an advertising copywriter at the Gustav Marx Advertising Agency, a position he held until 1953; Marx allowed Bloch to write stories in the office in quiet times. Bloch published a total of 23 Lefty Feep stories in "Fantastic Adventures", the last one appearing in 1950, though the bulk appeared during World War II. Feep's character name had actually been coined by Bloch's friend and collaborator Harold Gauer for their unpublished novel "In the Land of Sky-Blue Ointments". Bloch also worked for a time in local vaudeville and tried to break into writing for nationally known performers.
Bloch gradually evolved away from Lovecraftian imitations towards a unique style of his own. One of the first distinctly "Blochian" stories was "Yours Truly, Jack the Ripper", ("Weird Tales", July 1943). The story was Bloch's take on the Jack the Ripper legend, and was filled out with more genuine factual details of the case than many other fictional treatments. It cast the Ripper as an eternal being who must make human sacrifices to extend his immortality. It was adapted for both radio (in "Stay Tuned for Terror") and television (as an episode of "Thriller" in 1961 adapted by Barré Lyndon). Bloch followed up this story with a number of others in a similar vein dealing with half-historic, half-legendary figures such as the Man in the Iron Mask ("Iron Mask", 1944), the Marquis de Sade ("The Skull of the Marquis de Sade", 1945) and Lizzie Borden ("Lizzie Borden Took an Axe ...", 1946).
In 1944, Laird Cregar performed Bloch's tale "Yours Truly, Jack the Ripper" over a coast-to-coast radio network.
Towards the end of World War II, in 1945, Bloch was asked to write 39 15-minute episodes of his own radio horror show, "Stay Tuned for Terror". Many of the programs were adaptations of his own pulp stories. (All of the episodes were broadcast, but none are extant.) The same year he published "The Skull of the Marquis de Sade" ("Weird Tales", Sept). August Derleth's Arkham House, Lovecraft's publisher, published Bloch's first collection of short stories, "The Opener of the Way", in an edition of 2,000 copies, with jacket art by Ronald Clyne. At the same time, his best-known early tale, "Yours Truly, Jack the Ripper", received considerable attention through dramatization on radio and reprinting in anthologies. This story, involving a Ripper who has found literal immortality through his crimes, has been widely imitated (or plagiarized); Bloch himself would return to the theme. Stories published in 1946 include "Enoch" ("Weird Tales", Sept) and "Lizzie Borden Took an Axe" ("Weird Tales", Nov).
Bloch's first novel, the thriller "The Scarf", was published in hardcover (The Dial Press, 1947; the Fawcett Gold Medal paperback of 1966 features a revised text). It tells the story of a writer, Daniel Morley, who uses real women as models for his characters. But as soon as he is done writing each story, he is compelled to murder them, and always the same way: with the maroon scarf he has had since childhood. The story begins in Minneapolis and follows him and his trail of dead bodies to Chicago, New York City, and finally Hollywood, where his hit novel is going to be turned into a movie, and where his self-control may have reached its limit.
In 1948, Bloch was the Guest of Honor at Torcon I, the World Science Fiction Convention, in Toronto, Canada. In 1952 he published "Lucy Comes to Stay" ("Weird Tales", Jan).
Bloch published three novels in 1954 – "Spiderweb", "The Kidnapper" and "The Will to Kill" as he endeavored to support his family. That same year he was a weekly guest panellist on the TV quiz show "It's a Draw". "Shooting Star" (1958), a mainstream novel, was published in a double volume with a collection of Bloch's stories titled "Terror in the Night". "This Crowded Earth" (1958) was science fiction.
With the demise of "Weird Tales", Bloch continued to have his fiction published in "Amazing", "Fantastic", "The Magazine of Fantasy and Science Fiction", and "Fantastic Universe"; he was a particularly frequent contributor to "Imagination" and "Imaginative Tales". His output of thrillers increased and he began to appear regularly in "The Saint", "Ellery Queen" and similar mystery magazines, and to such suspense and horror-fiction magazine projects as "Shock".
Bloch continued to revisit the Jack the Ripper theme. His contribution to Harlan Ellison's 1967 science fiction anthology "Dangerous Visions" was a story, "A Toy for Juliette", which evoked both Jack the Ripper and the Marquis de Sade in a time-travel story. The same anthology had Ellison's sequel to it titled "The Prowler in the City at the Edge of the World". His earlier idea of the Ripper as an immortal being resurfaced in Bloch's contribution to the original "Star Trek" series episode "Wolf in the Fold". His 1984 novel "Night of the Ripper" is set during the reign of Queen Victoria and follows the investigation of Inspector Frederick Abberline in attempting to apprehend the Ripper, and includes some famous Victorians such as Sir Arthur Conan Doyle within the storyline.
Bloch won the Hugo Award for Best Short Story for "That Hell-Bound Train" in 1959, the same year that his sixth novel, "Psycho", was published. Bloch had written an earlier short story involving dissociative identity disorder, "The Real Bad Friend", which appeared in the February 1957 "Mike Shayne Mystery Magazine" and foreshadowed "Psycho". However, "Psycho" also has thematic links to the story "Lucy Comes to Stay". Also in 1959, Bloch delivered a lecture titled "Imagination and Modern Social Criticism" at the University of Chicago; this was reprinted in the critical volume "The Science Fiction Novel" (Advent Publishers). His story "The Hungry Eye" appeared in "Fantastic" (May). This was also the year in which, despite having graduated from painting watercolours to oils, he gave up painting completely.
Norman Bates, the main character in "Psycho", was very loosely based on two people. First was the real-life serial killer Ed Gein, about whom Bloch later wrote a fictionalized account, "The Shambles of Ed Gein". (The story can be found in "Crimes and Punishments: The Lost Bloch, Volume 3"). Second, it has been indicated by several people, including Noel Carter (wife of Lin Carter) and Chris Steinbrunner, as well as allegedly by Bloch himself, that Norman Bates was partly based on Calvin Beck, publisher of "Castle of Frankenstein". Bloch's basing of the character of Norman Bates on Ed Gein is discussed in the documentary "Ed Gein: The Ghoul of Plainfield", which can be found on Disc 2 of the DVD release of the remake of "The Texas Chainsaw Massacre" (2003). However, Bloch also commented that it was the situation itself – a mass murderer living undetected and unsuspected in a typical small town in middle America – rather than Gein himself that sparked Bloch's storyline. He writes: "Thus the real-life murderer was not the role model for my character Norman Bates. Ed Gein didn't own or operate a motel. Ed Gein didn't kill anyone in the shower. Ed Gein wasn't into taxidermy. Ed Gein didn't stuff his mother, keep her body in the house, dress in a drag outfit, or adopt an alternative personality. These were the functions and characteristics of Norman Bates, and Norman Bates didn't exist until I made him up. Out of my own imagination, I add, which is probably the reason so few offer to take showers with me."
Though Bloch had little involvement with the film version of his novel, which was directed by Alfred Hitchcock from an adapted screenplay by Joseph Stefano, he was to become most famous as its author. Bloch was awarded a special Mystery Writers of America scroll for the novel in 1961.
The novel is one of the first examples at full length of Bloch's use of modern urban horror relying on the horrors of interior psychology rather than the supernatural. "By the mid-1940s, I had pretty well mined the vein of ordinary supernatural themes until it had become varicose," Bloch explained to Douglas E. Winter in an interview. "I realized, as a result of what went on during World War II and of reading the more widely disseminated work in psychology, that the real horror is not in the shadows, but in that twisted little world inside our own skulls." While Bloch was not the first horror writer to utilise a psychological approach (it originates in the work of Edgar Allan Poe), Bloch's psychological approach in modern times was comparatively unique.
Bloch's agent, Harry Altshuler, received a "blind bid" for the novel – the buyer's name was not mentioned – of $7,500 for screen rights to the book. The bid eventually went to $9,500, which Bloch accepted. Bloch had never sold a book to Hollywood before. His contract with Simon & Schuster included no bonus for a film sale. The publisher took 15 percent according to contract, while the agent took his 10 percent; Bloch wound up with about $6,750 before taxes. Despite the enormous profits generated by Hitchcock's film, Bloch received no further direct compensation.
Only Hitchcock's film was based on Bloch's novel. The later films in the "Psycho" series bear no relation to either of Bloch's sequel novels. Indeed, Bloch's proposed script for the film "Psycho II" was rejected by the studio (as were many other submissions), and it was this that he subsequently adapted for his own sequel novel.
The film "Hitchcock" (2012) tells the story of Alfred Hitchcock's making of the film version of "Psycho". Although it mentions Bloch and his novel, Bloch himself is not a character in the movie.
Following his move to Hollywood, around 1960, Bloch had multiple assignments from various television companies. However, he was unable to write for five months during a Writers Guild strike. After the strike was over, he became a frequent scriptwriter for television and film projects in the mystery, suspense, and horror genres. His first assignments were for the Macdonald Carey vehicle "Lock-Up" (penning five episodes) as well as one for "Whispering Smith". Further TV work included an episode of "Bus Stop" ("I Kiss Your Shadow"), 10 episodes of "Thriller" (1960–62, several based on his own stories), and 10 episodes of "Alfred Hitchcock Presents" (1960–62). His short story collection "Pleasant Dreams – Nightmares" was published by Arkham House in 1960.
Bloch wrote the screenplay for "The Cabinet of Caligari" (1962), which is only very loosely related to the 1920 German silent film, and proved to be an unhappy experience. The same year, Bloch penned the story and teleplay "The Sorcerer's Apprentice" for "Alfred Hitchcock Presents". The episode was shelved when the NBC Television Network and sponsor Revlon called its ending "too gruesome" (by 1960s standards) for airing. Bloch was pleased later when the episode was included in the program's syndication package to affiliate stations, where not one complaint was registered. Today, due to public domain status, the episode is readily available in home media formats from numerous distributors and is even available on free video on demand.
His TV work did not slow Bloch's fictional output. In the early 1960s he published several novels, including "The Dead Beat" (1960) and "Firebug" (1961), for which Harlan Ellison, then an editor at Regency Books, contributed the first 1,200 words. In 1962 numerous works appeared in book form. Bloch's novel "The Couch" (1962), the basis for the screenplay of his first movie, filmed the same year, was published. That year several Bloch short story collections – "Atoms and Evil", "More Nightmares" and "Yours Truly, Jack the Ripper" – were published, as well as another novel, "Terror" (whose working titles included "Amok" and "Kill for Kali"). Editor Earl Kemp assembled a selection of Bloch's prolific output for fan magazines as "The Eighth Stage of Fandom: Selections from 25 Years of Fan Writing" (Advent Publishers). In this era, Stephen King later wrote, "What Bloch did with such novels as "The Deadbeat", "The Scarf", "Firebug", "Psycho", and "The Couch" was to re-discover the suspense novel and reinvent the antihero as first discovered by James Cain."
During 1963, Bloch saw into print two further collections of short stories, "Bogey Men" and "Horror-7". In 1964 Bloch married Eleanor Alexander and wrote original screenplays for two films produced and directed by William Castle, "Strait-Jacket" (1964) and "The Night Walker" (also 1964). The film "The Skull" (1965) was based on his short story "The Skull of the Marquis de Sade".
Bloch's further TV writing in this period included "The Alfred Hitchcock Hour" (seven episodes, 1962–1965), "I Spy" (one episode, 1966), "Run for Your Life" (one episode, 1966), and "The Girl from U.N.C.L.E." (one episode, 1967). He penned three scripts for the original "Star Trek" series, screened in 1966 and 1967, including "What Are Little Girls Made Of?" and "Wolf in the Fold" (another Jack the Ripper variant).
In 1968, Bloch returned to London to do two episodes for the English Hammer Films series "Journey to the Unknown" for Twentieth Century Fox. One of the episodes, "The Indian Spirit Guide", was included in the American TV movie "Journey to Midnight" (1968). The other episode was "Girl of My Dreams," co-scripted with Michael J. Bird and based on the eponymous story by Richard Matheson.
Following the movie "The Skull" (1965), which was based on a Bloch story but scripted by Milton Subotsky, he wrote the screenplays for five feature films produced by Amicus Productions – "The Psychopath" (1966), "The Deadly Bees" (co-written with Anthony Marriott, 1967), "Torture Garden" (also 1967), "The House That Dripped Blood" (1971) and "Asylum" (1972). The last two films were adapted from stories Bloch had written in the 1940s and early 1950s.
During the 1970s, Bloch wrote two TV movies for director Curtis Harrington – "The Cat Creature" (1973) (an "ABC Movie of the Week") and "The Dead Don't Die". "The Cat Creature" was an unhappy production experience for Bloch. Producer Doug Cramer wanted to do an update of "Cat People" (1942), the Val Lewton-produced film. Bloch commented: "Instead, I suggested a blending of the elements of several well-remembered films, and came up with a story line which dealt with the Egyptian cat-goddess (Bast), reincarnation and the first bypass operation ever performed on an artichoke heart." A detailed account of the troubled production of the film is described in Bloch's autobiography.
Bloch meanwhile (interspersed between his screenplays for Amicus Productions and other projects), penned single episodes for "Night Gallery" (1971), "Ghost Story" (1972), "The Manhunter" (1974), and "Gemini Man" (1976).
In 1965, two further collections of short stories appeared – "The Skull of the Marquis de Sade" and "Tales in a Jugular Vein". 1966 saw Bloch win the Ann Radcliffe Award for Television and publish yet another collection of shorts – "Chamber of Horrors". Bloch returned to the site of his childhood home at 620 East Knapp St, Milwaukee (the address used by Lovecraft for the character Robert Blake in "The Haunter of the Dark"), only to find the neighborhood razed and replaced by expressway approaches.
In 1967, another Bloch collection, "The Living Demons", was issued. He also published another classic story of Jack the Ripper, "A Toy for Juliette", in Harlan Ellison's "Dangerous Visions" anthology. In 1968 he published a duo of long sf novellas as "This Crowded Earth and Ladies' Day". His novel "The Star Stalker" was published, and "Dragons and Nightmares" (the first collection of Lefty Feep stories) appeared in hardcover (Mirage Press).
The collection "Bloch and Bradbury" (a collaboration with Ray Bradbury) and the hardcover novel "The Todd Dossier" (originally credited to Collier Young) were published in 1969.
Bloch won a second Ann Radcliffe Award, this time for Literature, in 1969. That same year, Bloch was invited to the Second International Film Festival in Rio de Janeiro, March 23–31, along with other science fiction writers from the United States, Britain and Europe.
In 1971, Bloch served as president of the Mystery Writers of America, meanwhile publishing the novel "Sneak Preview", the collection "Fear Today, Gone Tomorrow", and the short novel "It's All in Your Mind". In 1972 he published another novel, "Night-World". In 1973 Bloch was the Guest of Honor at Torcon II, the World Science Fiction Convention, in Toronto. 1974 saw the publication of his novel "American Gothic", inspired by the true-life story of serial killer H.H. Holmes.
In 1975, Bloch won the Lifetime Achievement Award at the First World Fantasy Convention, held in Providence, Rhode Island. The award was a bust of H.P. Lovecraft. The occasion of this convention was the first time Bloch actually visited the city of Providence. An audio recording made of Robert Bloch during that 1975 convention is accessible online.
In 1976, two records of Bloch reading his stories were released by Alternate World Recordings – "Gravely, Robert Bloch!" and "Blood! The Life and Times of Jack the Ripper!" (with Harlan Ellison). In 1977, Lester del Rey edited "The Best of Robert Bloch" for Del Rey Books. Two further short story collections appeared – "Cold Chills" and "The King of Terrors".
Bloch continued to publish short story collections throughout this period. His "Selected Stories" (reprinted in paperback with the incorrect title "The Complete Stories") appeared in three volumes just prior to his death, although many previously uncollected tales have appeared in volumes published since 1997 (see below). Bloch also contributed the story "Heir Apparent", set in Andre Norton's Witch World, to "Tales of the Witch World" (Vol. 1), NY: Tor, 1987.
1979 saw the publication of Bloch's novel "There is a Serpent in Eden" (also reissued as "The Cunning"), and two more short story collections, "Out of the Mouths of Graves" and "Such Stuff as Screams Are Made Of".
His numerous novels of the 1970s demonstrate Bloch's thematic range, from science fiction – "Sneak Preview" (1971) – through horror novels such as the loving Lovecraftian tribute "Strange Eons" (Whispers Press, 1978) to the non-supernatural mystery "There is a Serpent in Eden" (1979).
Bloch's screenplay-writing career continued active through the 1980s, with teleplays for "Tales of the Unexpected" (one episode, 1980), "Darkroom" (two episodes, 1981), "Alfred Hitchcock Presents" (one episode, 1986), "Tales from the Darkside" (three episodes, 1984–87 – "Beetles", "A Case of the Stubborns" and "Everybody Needs a Little Love") and "Monsters" (three episodes, 1988–1989 – "The Legacy", "Mannikins of Horror", and "Reaper"). No further screen work appeared in the last five years before his death, although an adaptation of his "collaboration" with Edgar Allan Poe, "The Lighthouse", was filmed as an episode of "The Hunger" in 1998.
"The First World Fantasy Convention: Three Authors Remember" (Necronomicon Press, 1980) features reminiscences of that important event by Bloch, T.E.D. Klein and Fritz Leiber. In 1981, Zebra Books issued the first edition of the Cthulhu Mythos-themed collection "Mysteries of the Worm". This item was reprinted some years later in an expanded edition by Chaosium.
Bloch's sequel to the original "Psycho", "Psycho II", was published in 1982 (unrelated to the film of the same title), and in 1983 he published a film novelisation. His novel "Night of the Ripper" (1984) was another return to one of Bloch's favourite themes, the Jack the Ripper murders of 1888.
In 1986, Scream Press published the hardcover omnibus "Unholy Trinity", collecting three by now scarce Bloch novels, "The Scarf", "The Dead Beat" and "The Couch." A second retrospective selection of Bloch's nonfiction was published by NESFA Press as "Out of My Head."
In 1987, Bloch celebrated his 70th birthday. Underwood-Miller issued the three-volume hardcover set "The Selected Stories of Robert Bloch" (individual volumes titled "Final Reckonings", "Bitter Ends" and "Last Rites"). When Citadel Press reissued this in paperback, it incorrectly named the set "The Collected Stories of Robert Bloch". The same year a collection, "Midnight Pleasures", appeared from Doubleday, and "Lost in Time and Space with Lefty Feep" (Creatures at Large Press) collected a number of the stories in the Lefty Feep series. The latter was the first of a projected series of three volumes; however, the further volumes were never published. In 1988, Tor Books reissued Bloch's scarce second novel, "The Kidnapper".
In 1989, several works were published: the collection, "Fear and Trembling", the thriller novel "Lori" (later adapted as a standalone graphic novel) and another omnibus of long out-of-print early novels, "Screams" (containing "The Will to Kill", "Firebug" and "The Star Stalker"). Randall D. Larson issued "The Robert Bloch Companion: Collected Interviews 1969-1986" (Starmont House), together with "Robert Bloch" (Starmont Reader's Guide No 37), an exhaustive study of Bloch's work, and "The Complete Robert Bloch: An Illustrated, Comprehensive Bibliography" (Fandom Unlimited Enterprises). Larson's three books were bound in hardcover and distributed by Borgo Press.
Bloch's novel, "The Jekyll Legacy" (1990), was a collaboration with Andre Norton and a sequel to Robert Louis Stevenson's "Dr. Jekyll and Mr. Hyde". The same year he returned to the Norman Bates "mythos" with "Psycho House" (Tor), the third Psycho novel. As with the second novel in the sequence, it bears no relation to the film titled "Psycho III". It would prove to be his last published novel.
In February 1991, he served as Master of Ceremonies at the first World Horror Convention, held in Nashville, Tennessee. "Weird Tales" issued a special Robert Bloch issue that spring, including his screenplay for the televised version of his tale "Beetles". A standalone chapbook of the story "Yours Truly, Jack the Ripper" was issued in both hardcover and paperback by Pulphouse, and Bloch co-edited with Martin H. Greenberg the original anthology "Psycho-Paths" (Tor). The same year, Bloch contributed an introduction to "In Search of Lovecraft" by J. Vernon Shea.
In 1992, Bloch celebrated his 75th birthday with a bash at a Los Angeles mystery/horror bookstore which was attended by many sf/horror notables. In 1993, he published his "unauthorized autobiography", "Once Around the Bloch" (Tor) and edited the original anthology "Monsters in Our Midst".
In early 1994, Fedogan & Bremer published a collection of 39 of his stories, "The Early Fears". Bloch began editing a new original anthology, "Robert Bloch's Psychos", but was unable to complete work on it prior to his death; Martin H. Greenberg finished the work posthumously and the book appeared several years later (1997).
On October 2, 1940, Bloch married Marion Ruth Holcombe; it was reportedly a marriage of convenience designed to keep Bloch out of the army. During their marriage, she suffered (initially undiagnosed) tuberculosis of the bone, which affected her ability to walk.
After working for 11 years for the Gustav Marx Advertising Agency in Milwaukee, Bloch left in 1953 and moved to Weyauwega, Marion's home town, so she could be close to friends and family. Although she was eventually cured of tuberculosis, she and Bloch divorced in 1963. Bloch's daughter Sally (born 1943) elected to stay with him.
On January 18, 1964, Bloch met recently widowed Eleanor ("Elly") Alexander (née Zalisko) — who had lost her first husband, writer/producer John Alexander, to a heart attack three months earlier — and made her his second wife in a civil ceremony on the following October 16. Elly was a fashion model and cosmetician. They honeymooned in Tahiti, and in 1965 visited London, then British Columbia. They remained happily married until Bloch's death. Elly remained in the Los Angeles area for several years after selling their Laurel Canyon home to fans of Bloch, eventually choosing to return to Canada to be closer to her own family. She died March 7, 2007, at the Betel Home in Selkirk, Manitoba, Canada. Her ashes have been placed next to Bloch's in a similar book-shaped urn at Pierce Brothers in Westwood, California.
Bloch died of cancer on September 23, 1994, in Los Angeles, at the age of 77, after a writing career lasting 60 years, including more than 30 years in television and film. He outlived by seven months another member of the original "Lovecraft Circle", Frank Belknap Long, who had died in January 1994.
Bloch was cremated and his ashes interred in the Room of Prayer columbarium at Westwood Village Memorial Park Cemetery in Los Angeles. His wife Elly is also interred there.
The Robert Bloch Award is presented at the annual Necronomicon convention. Its recipient in 2013 was editor and scholar S.T. Joshi. The award is in the shape of the Shining Trapezohedron described in "The Haunter of the Dark", the H.P. Lovecraft tale dedicated to Bloch.
A number of Bloch's works have been adapted in graphic form for comics. These include:
The comic "Aardwolf" (No 2, Feb 1995) is a special tribute issue to Bloch. It contains brief tributes to Bloch from Harlan Ellison, Ray Bradbury, Richard Matheson, Julius Schwartz and Peter Straub incorporated within a piece called "Robert Bloch: A Retrospective" compiled by Clifford Lawrence. The first part of the text of Bloch's story "The Past Master" is also reprinted in this issue.
Bloch also contributed a script as part of the DC one-shot benefit comic "Heroes Against Hunger".
A number of Bloch's works have been adapted for audio productions.
Other adaptations include:
Various recordings of Bloch speaking at fantasy and sf conventions are also extant. Many of these are available for download from Will Hart's CthulhuWho site:
Note: The following three entries represent paperback reprints of the Underwood Miller "Selected Stories" set. "Complete Stories" is a misnomer as these three volumes do not contain anywhere near the complete oeuvre of Bloch's short fiction.
See also 42nd World Science Fiction Convention
The following is a list of films based on Bloch's work. For some of these he wrote the original screenplay; for others, he supplied the story or a novel (as in the case of "Psycho") on which the screenplay was based.
Bloch wrote a number of screenplays which remain unproduced. These include "Merry-Go-Round" for MGM (loosely based on Ray Bradbury's story "Black Ferris"); "Night-World" (from Bloch's novel, also for MGM); "The Twenty-First Witch"; "Day of the Comet" (from the H.G. Wells story); and a television adaptation of "Out of the Aeons". See also "The Todd Dossier".
Some scenes from Bloch's incomplete screenplay for the unproduced movie "Earthman's Burden", to have been based on the Hoka stories of Gordon R. Dickson and Poul Anderson, appear in Richard Matheson and Ricia Mainhardt, eds, "Robert Bloch: Appreciations of the Master". NY: Tor, 1995, pp. 157–63.
Bloch appeared in the documentary "The Fantasy Film Worlds of George Pal" (1985) produced and directed by Arnold Leibovit.
Many of Bloch's published works, manuscripts (including those of the novels "The Star Stalker", "This Crowded Earth" and "Night World"), correspondence, books, recordings, tapes and other memorabilia are housed in the Special Collections division of the library at the University of Wyoming. The collection includes several unpublished short stories, such as "Dream Date", "The Last Clown", "A Pretty Girl is Like a Malady", "Twilight of a God", "It Only Hurts When I Laugh", "How to Pull the Wings Off a Barfly", "The Craven Image", "Afternoon in the Park", "Title Bout", and "What Freud Can't Tell You". There is also an unpublished one-act play entitled "The Birth of a Notion - A Tragedy of Hollywood". Thousands of other items, from fanzines and professional periodicals to film stills, lobby cards, one-sheets, posters and press-books connected with Bloch's films, together with transcripts of several of his speeches, are also housed in the collection.
Recorder (musical instrument)
The recorder is a family of woodwind musical instruments in the group known as "internal duct flutes"—flutes with a whistle mouthpiece, also known as fipple flutes. A recorder can be distinguished from other duct flutes by the presence of a thumb-hole for the upper hand and seven finger-holes: three for the upper hand and four for the lower. It is the most prominent duct flute in the western classical tradition.
Recorders are made in different sizes, with names and compasses roughly corresponding to different vocal ranges. The sizes most commonly in use today are the soprano (also known as "descant", lowest note C5), alto (also known as "treble", lowest note F4), tenor (lowest note C4) and bass (lowest note F3). Recorders were traditionally constructed from wood and ivory, while most recorders made in recent years are constructed from molded plastic. The recorder's internal and external proportions vary, but the bore is generally reverse conical (i.e. tapering towards the foot) to cylindrical, and all recorder fingering systems make extensive use of forked fingerings.
The recorder is first documented in Europe in the Middle Ages, and continued to enjoy wide popularity in the Renaissance and Baroque periods, but was little used in the Classical and Romantic periods. It was revived in the 20th century as part of the historically informed performance movement, and became a popular amateur and educational instrument. Composers who have written for the recorder include Monteverdi, Lully, Purcell, Handel, Vivaldi, Telemann, Johann Sebastian Bach, Paul Hindemith, Benjamin Britten, Leonard Bernstein, Luciano Berio, and Arvo Pärt. Today, there are many professional recorder players who demonstrate the instrument's full solo range and a large community of amateurs.
The sound of the recorder is often described as clear and sweet, and has historically been associated with birds and shepherds. It is notable for its quick response and its corresponding ability to produce a wide variety of articulations. This ability, coupled with its open finger holes, allows it to produce a wide variety of tone colors and special effects. Acoustically, its tone is relatively pure and, when the edge is positioned in the center of the airjet, odd harmonics predominate in its sound (when the edge is decidedly off-center, an even distribution of harmonics occurs).
The instrument has been known by its modern English name at least since the 14th century. David Lasocki reports the earliest use of "recorder" in the household accounts of the Earl of Derby (later King Henry IV) in 1388, which record "one pipe called Recordour".
By the 15th century, the name had appeared in English literature. The earliest references are in John Lydgate's "Temple of Glas" (1430): "These lytylle herdegromys Floutyn al the longe day ... In here smale recorderys, In floutys." (These little shepherds fluting all day long ... on these small recorders, on flutes.) and in Lydgate's "Fall of Princes" (1431–1438): "Pan, god off Kynde, with his pipes seuene, / Off recorderis fond first the melodies." (Pan, god of Nature, with his pipes seven, / of recorders found first the melodies.)
The instrument name "recorder" derives from the Latin "recordārī" (to call to mind, remember, recollect), by way of Middle French "recorder" (before 1349; to remember, to learn by heart, repeat, relate, recite, play music) and its derivative MFr "recordeur" (1395; one who retells, a minstrel). The association between the various, seemingly disparate, meanings of "recorder" can be attributed to the role of the medieval "jongleur" in learning poems by heart and later reciting them, sometimes with musical accompaniment.
The English verb "record" (from Middle French "recorder", early 13th century) meant "to learn by heart, to commit to memory, to go over in one's mind, to recite" but it was not used in English to refer to playing music until the 16th century, when it gained the meaning "silently practicing a tune" or "sing or render in song" (both almost exclusively referring to songbirds), long after the recorder had been named. Thus, the recorder cannot have been named after the sound of birds. The name of the instrument is also uniquely English: in Middle French there is no equivalent noun sense of "recorder" referring to a musical instrument.
Partridge indicates that the use of the instrument by "jongleurs" led to its association with the verb: "recorder" the minstrel's action, a "recorder" the minstrel's tool. The reason we know this instrument as the recorder and not one of the other instruments played by the "jongleurs" is uncertain.
The introduction of the Baroque recorder to England by a group of French professionals in 1673 popularized the French name for the instrument, "flute douce", or simply "flute", a name previously reserved for the transverse instrument. Until about 1695, the names "recorder" and "flute" overlapped, but from 1673 to the late 1720s in England, the word "flute" always meant recorder. In the 1720s, as the transverse flute overtook the recorder in popularity, English adopted the convention already present in other European languages of qualifying the word "flute", calling the recorder variously the "common flute", "common English-flute", or simply "English flute" while the transverse instrument was distinguished as the "German flute" or simply "flute". Until at least 1765, some writers still used "flute" to mean recorder.
Until the mid-18th century, musical scores written in Italian refer to the instrument as "flauto", whereas the transverse instrument was called "flauto traverso". This distinction, like the English switch from "recorder" to "flute", has caused confusion among modern editors, writers and performers.
Indeed, in most European languages, the first term for the recorder was the word for flute alone. In the present day, cognates of the word "flute," when used without qualifiers, remain ambiguous and may refer to either the recorder, the modern concert flute, or other non-western flutes. Starting in the 1530s, these languages began to add qualifiers to specify this particular flute.
Since the 15th century, a variety of sizes of recorder have been documented, but a consistent terminology and notation for the different sizes was not formulated until the 20th century.
Today, recorder sizes are named after the different vocal ranges. This is not, however, a reflection of sounding pitch, and serves primarily to denote the pitch relationships between the different instruments. Groups of recorders played together are referred to as "consorts". Recorders are also often referred to by their lowest sounding note: "recorder in F" refers to a recorder with lowest note F, in any octave.
The table in this section shows the standard names of modern recorders in F and C and their respective ranges. Music composed after the modern revival of the recorder most frequently uses soprano, alto, tenor, and bass recorders, although sopranino and great bass are also fairly common. Consorts of recorders are often referred to using the terminology of organ registers: 8′ (8 foot) pitch referring to a consort sounding as written, 4′ pitch a consort sounding an octave above written, and 16′ a consort sounding an octave below written. The combination of these consorts is also possible.
As a rule of thumb, the tessitura of a baroque recorder lies approximately one octave above the tessitura of the human voice type after which it is named. For example, the tessitura of a soprano voice is roughly C4–C6, while the tessitura of a soprano recorder is C5–C7.
Modern variations include standard British terminology, due to Arnold Dolmetsch, which refers to the recorder in C5 (soprano) as the descant and the recorder in F4 (alto) as the treble. As conventions and instruments vary, especially for larger and more uncommon instruments, it is often practical to state the recorder's lowest note along with its name to avoid confusion.
Modern recorder parts are notated in the key they sound in. Parts for alto, tenor and contrabass recorders are notated at pitch, while parts for sopranino, soprano, bass, and great bass are typically notated an octave below their sounding pitch. As a result, soprano and tenor recorders are notated identically; alto and sopranino are notated identically; and bass and contrabass recorders are notated identically. Octave clefs may be used to indicate the sounding pitch, but usage is inconsistent.
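The written-versus-sounding relationships described above can be summarised as a fixed octave offset per size. The following Python sketch is illustrative only (the dictionary and function names are invented for this example; offsets follow the conventions stated above, expressed in MIDI note numbers, where middle C, C4, is 60):

```python
# Octave offset from written pitch to sounding pitch for common modern
# recorder parts, per the notation conventions described above (a sketch;
# real editions are not perfectly consistent).
SOUNDING_OFFSET_SEMITONES = {
    "sopranino": +12, "soprano": +12,      # notated an octave below sounding
    "alto": 0, "tenor": 0,                 # notated at pitch
    "bass": +12, "great bass": +12,        # notated an octave below sounding
    "contrabass": 0,                       # notated at pitch
}

def sounding_midi(written_midi, size):
    """Sounding MIDI note for a written note on a given recorder size."""
    return written_midi + SOUNDING_OFFSET_SEMITONES[size]

# Written C5 (MIDI 72) sounds C6 on a soprano but C5 on a tenor, which is
# why soprano and tenor parts look identical on the page.
```

The same identity holds for the other pairs named above: alto/sopranino parts and bass/contrabass parts differ only in their implied octave.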
Rare sizes and notations include the garklein, which may be notated two octaves below its sounding pitch, and the sub-contrabass, which may be notated an octave above its sounding pitch.
The earliest known document mentioning "a pipe called Recordour" dates from 1388.
Historically, recorders were used to play vocal music and parts written for other instruments, or for a general instrument. As a result, it was frequently the performers' responsibility to read parts not specifically intended for the instrument and to choose appropriate instruments. When such consorts consisted only of recorders, the pitch relationships between the parts were typically preserved, but when recorders were combined with other instruments, octave discrepancies were often ignored.
Recorder consorts in the 16th century were tuned in fifths and only occasionally employed tuning by octaves as seen in the modern C, F recorder consort. This means that consorts could be composed of instruments nominally in B, F, C, G, D, A and even E, although typically only three or four distinct sizes were used simultaneously. To use modern terminology, these recorders were treated as transposing instruments: consorts would be read identically to a consort made up of F3, C4, and G4 instruments. This is made possible by the fact that adjacent sizes are separated by fifths, with few exceptions. These parts would be written using "chiavi naturali", allowing the parts to roughly fit in the range of a single staff, and also in the range of the recorders of the period. (see Renaissance structure)
Transpositions ("registers"), such as C3–G3–D4, G3–D4–A4, or B♭2–F3–C4, all read as F3–C4–G4 instruments, were possible as described by Praetorius in his "Syntagma Musicum". Three sizes of instruments could be used to play four-part music by doubling the middle size, e.g. F3–C4–C4–G4, or play six-part music by doubling the upper size and tripling the middle size, e.g. F3–C4–C4–C4–G4–G4. Modern nomenclature for such recorders refers to the instruments' relationship to the other members of consort, rather than their absolute pitch, which may vary. The instruments from lowest to highest are called "great bass", "bass", "basset", "tenor", "alto", and "soprano". Potential sizes include: great bass in F2; bass in B♭2 or C3; basset in F3 or G3; tenor in C4 or D4; alto in F4, G4 or A4; and soprano in C5 or D5.
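The transposition scheme Praetorius describes amounts to shifting the nominal F3–C4–G4 trio by a fixed interval. A Python sketch of the arithmetic (function and note-spelling conventions are invented for illustration; flats are written "b"):

```python
# Pitch classes with flat spellings where the consort sizes need them.
NAMES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]

def to_midi(note):   # e.g. "F3" -> 53
    return 12 * (int(note[-1]) + 1) + NAMES.index(note[:-1])

def from_midi(n):    # e.g. 53 -> "F3"
    return NAMES[n % 12] + str(n // 12 - 1)

NOMINAL = ["F3", "C4", "G4"]  # how the three parts are read

def consort(shift_semitones):
    """Sounding sizes of a consort read as F3-C4-G4 but built a fixed
    interval higher or lower, as in Praetorius's registers."""
    return [from_midi(to_midi(p) + shift_semitones) for p in NOMINAL]
```

For example, `consort(-5)` (down a fourth) yields the C3–G3–D4 register, `consort(2)` (up a tone) yields G3–D4–A4, and `consort(-7)` (down a fifth) yields B♭2–F3–C4, matching the registers listed above.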
The alto in F4 is the standard recorder of the Baroque, although there is a small repertoire written for other sizes. In 17th-century England, smaller recorders were named for their relationship to the alto and notated as transposing instruments with respect to it: third flute (A4), fifth flute (soprano; C5), sixth flute (D5), and octave flute (sopranino; F5). The term "flute du quart", or fourth flute (B♭4), was used by Charles Dieupart, although curiously he treated it as a transposing instrument in relation to the soprano rather than the alto. In Germanic countries, the equivalent of the same term, "Quartflöte", was applied both to the tenor in C4, the interval being measured down from the alto in F4, and to a recorder in C5 (soprano), the interval of a fourth apparently being measured up from an alto in G4. Recorder parts in the Baroque were typically notated using the treble clef, although they may also be notated in French violin clef (G clef on the bottom line of the staff).
In modern usage, recorders not in C or F are alternatively referred to using the name of the closest instrument in C or F, followed by the lowest note. For example, a recorder with lowest note G4 may be known as a G-alto or alto in G, a recorder with lowest note D5 (also "sixth flute") as a D-soprano or soprano in D, and a recorder in G3 as a G-bass or G-basset. This usage is not totally consistent. Notably, the baroque recorder in D4 is not commonly referred to as a D-tenor nor a D-alto; it is most commonly referred to using the historical name "voice flute".
Recorders have historically been constructed from hardwoods and ivory, sometimes with metal keys. Since the modern revival of the recorder, plastics have been used in the mass manufacture of recorders, as well as by a few individual makers.
Today, a wide variety of hardwoods are used to make recorder bodies. Relatively fewer varieties of wood are used to make recorder blocks, which are often made of red cedar, chosen because of its rot resistance, ability to absorb water, and low expansion when wet. A recent innovation is the use of synthetic ceramics in the manufacture of recorder blocks.
Some recorders have tone holes too far apart for a player's hands to reach, or too large to cover with the pads of the fingers. In either case, more ergonomically placed keys can be used to cover the tone holes. Keys also allow the design of longer instruments with larger tone holes. Keys are most common in recorders larger than the alto. Instruments larger than the tenor need at least one key so the player can cover all eight holes. Keys are sometimes also used on smaller recorders to allow for comfortable hand stretch, and acoustically improved hole placement and size.
When playing a larger recorder, a player may not be able to simultaneously reach the keys or tone holes with the fingers and reach the windway with the mouth. In this case, a bocal may be used to allow the player to blow into the recorder while maintaining a comfortable hand position. Alternatively, some recorders have a bent bore that positions the windway closer to the keys or finger holes so the player can comfortably reach both. Instruments with a single bend are known as "knick" or bent-neck recorders.
Some newer designs of recorder are now being produced. Recorders with a square cross-section can be produced more cheaply, and in larger sizes, than comparable recorders manufactured by turning. Another area of innovation is the development of instruments with a greater dynamic range and more powerful bottom notes, which are easier to hear in concertos. Finally, recorders with a downward extension of a semitone are becoming available; such instruments can play a full three octaves in tune.
In the early 20th century, Peter Harlan developed a recorder with apparently simpler fingering, called German fingering. A recorder designed for German fingering has a hole five that is smaller than hole four, whereas baroque and neo-baroque recorders have a hole four that is smaller than hole five. The immediate difference in fingering is for F (soprano) or B♭ (alto), which on a neo-baroque instrument must be fingered 0 123 4–67. With German fingering, this becomes a simpler 0 123 4 – – –. However, this makes many other chromatic notes too out of tune to be usable. German fingering became popular in Europe, especially Germany, in the 1930s, but rapidly became obsolete in the 1950s as people began to treat the recorder more seriously, and the limitations of German fingering became more widely appreciated. Recorders with German fingering are today manufactured exclusively for educational purposes.
Modern recorders are most commonly pitched at A=440 Hz, but among serious amateurs and professionals, other pitch standards are often found. For the performance of baroque music, A=415 Hz is the "de facto" standard, while pre-Baroque music is often performed at A=440 Hz or A=466 Hz. These pitch standards are intended to reflect the broad variation in pitch standards throughout the history of the recorder. In various regions, contexts, and time periods, pitch standards have varied from A=~392 Hz to A=~520 Hz. The pitches A=415 Hz and A=466 Hz, a semitone lower and a semitone higher than A=440 Hz respectively, were chosen because they may be used with harpsichords or chamber organs that transpose up or down a semitone from A=440. These pitch standards allow recorder players to collaborate with other instrumentalists at a pitch other than A=440 Hz.
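The semitone relationships among the three standard pitches follow from equal temperament, in which each semitone multiplies frequency by 2^(1/12). A quick check in Python (function name invented for illustration):

```python
A440 = 440.0  # modern standard pitch, in Hz

def shift(freq_hz, semitones):
    """Shift a frequency by a number of equal-tempered semitones."""
    return freq_hz * 2 ** (semitones / 12)

baroque_pitch = shift(A440, -1)   # ~415.3 Hz, a semitone below A=440
high_pitch = shift(A440, +1)      # ~466.2 Hz, a semitone above A=440
```

This is why a harpsichord or chamber organ transposing by exactly one semitone can accompany recorders built at A=415 Hz or A=466 Hz.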
Some recorder makers produce instruments at pitches other than the three standard pitches above, and recorders with interchangeable bodies at different pitches.
The recorder produces sound in the manner of a whistle or an organ flue pipe. In normal play, the player blows into the "windway" (B), a narrow channel in the "head joint", which directs a stream of air across a gap called the "window", at a sharp edge called the "labium" (C). The air stream alternately travels above and below the labium, exciting standing waves in the bore of the recorder, and producing sound waves that emanate away from the window. Feedback from the resonance of the tube regulates the pitch of the sound.
In recorders, as in all woodwind instruments, the air column inside the instrument behaves like a vibrating string, to use a musical analogy, and has multiple modes of vibration. These waves produced inside the instrument are not travelling waves, like those the ear perceives as sound, but rather stationary standing waves consisting of areas of high pressure and low pressure inside the tube, called nodes. The perceived pitch is the lowest, and typically loudest, mode of vibration in the air column. The other pitches are "harmonics", or "overtones". Players typically describe recorder pitches by the number of nodes in the air column. Notes with a single node are in the "first register," notes with two nodes in the "second register," etc. As the number of nodes in the tube increases, the number of notes a player can produce in a given register decreases because of the physical constraint of the spacing of the nodes in the bore. On a Baroque recorder, the first, second, and third registers span about a major ninth, a major sixth, and a minor third respectively.
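The relationship between the modes of vibration can be illustrated with the textbook formula for an idealized open cylindrical pipe, f_n = n·v/(2L). This is only a qualitative sketch: a real recorder's tapered bore, open tone holes, and end corrections shift these frequencies considerably, and the lengths used here are illustrative assumptions.

```python
# Idealized open-pipe model of the standing-wave modes described above.
# A real recorder's reverse-conical bore and open tone holes make the true
# frequencies deviate from this; the model only shows how registers relate.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def mode_frequencies(effective_length_m, n_modes=3):
    """First n_modes standing-wave frequencies of an open pipe of the
    given effective length: f_n = n * v / (2 * L)."""
    return [n * SPEED_OF_SOUND / (2 * effective_length_m)
            for n in range(1, n_modes + 1)]
```

For an assumed effective length of about 0.33 m, the first three modes fall near 520, 1040, and 1560 Hz; moving to a higher register selects the next mode of the air column for the same fingering.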
For the most part, the recorder sound lacks high harmonics; odd harmonics predominate, with the even harmonics almost entirely absent, although the harmonic profile varies from recorder to recorder and from fingering to fingering. As a result of the lack of high harmonics, writers since Praetorius have remarked that it is difficult for the human ear to perceive correctly the sounding octave of the recorder.
As in organ flue pipes, the sounding pitch of duct type whistles is affected by the velocity of the air stream as it impinges upon the labium. The pitch generally increases with velocity of the airstream, up to a point.
Air speed can also be used to influence the number of pressure nodes, in a process called overblowing. At higher airstream velocities, lower modes of vibration of the air column become unstable, resulting in a change of register.
The air stream is affected by the shaping of the surfaces in the head of the recorder (the "voicing"), and the way the player blows air into the windway. Recorder voicing is determined by physical parameters such as the proportions and curvature of the windway along both the longitudinal and latitudinal axes, the beveled edges ("chamfers") of the windway facing towards the labium, the length of the window, the sharpness of the labium (i.e. the steepness of the ramp) among other parameters. The player is able to control the speed and turbulence of the airstream using the diaphragm and vocal tract.
The finger holes, used in combination or partially covered, affect the sounding pitch of the instrument.
At the most basic level, the sequential uncovering of finger holes increases the sounding pitch of the instrument by decreasing the effective sounding length of the instrument, and vice versa for the sequential covering of holes. In the fingering 01234567, only the bell of the instrument is open, resulting in a low pressure node at the bell end of the instrument. The fingering 0123456 sounds at a higher pitch because the seventh hole and the bell both release air, creating a low pressure node at the seventh hole.
Besides sequential uncovering, recorders can use forked fingering to produce tones other than those produced by simple sequential lifting of fingers. In the fingering 0123, air leaks from the open holes 4,5,6, and 7. The pressure inside the bore is higher at the fourth hole than at the fifth, and decreases further at the 6th and 7th holes. Consequently, the most air leaks from the fourth hole and the least air leaks from the seventh hole. As a result, covering the fourth hole affects the pitch more than covering any of the holes below it. Thus, at the same air pressure, the fingering 01235 produces a pitch between 0123 and 01234. Forked fingerings allow recorder players to obtain fine gradations in pitch and timbre.
A recorder's pitch is also affected by the partial covering of holes. This technique is an important tool for intonation, and is related to the fixed process of tuning a recorder, which involves the adjustment of the size and shape of the finger holes through carving and the application of wax.
One essential use of partial covering is in "leaking," or partially covering, the thumb hole to destabilize low harmonics. This allows higher harmonics to sound at lower air pressures than by over-blowing alone, as on simple whistles. The player may also leak other holes to destabilize lower harmonics in place of the thumb hole (hole 0). This technique is demonstrated in the fingering tables of Ganassi's "Fontegara" (1535), which illustrate the simultaneous leaking of holes 0, 2, and 5 to produce some high notes. For example, Ganassi's table produces the 15th (third octave tonic) as the fourth harmonic of the tonic, leaking holes 0, 2 and 5 and produces the 16th as the third harmonic of the fifth, leaking holes 0 and 2. On some Baroque recorders, the 17th can be produced as the third harmonic of the sixth, leaking hole 0 as well as hole 1, 2 or both.
Although the design of the recorder has changed over its 700-year history, notably in fingering and bore profile (see History), the technique of playing recorders of different sizes and periods is much the same. Indeed, much of what is known about the technique of playing the recorder is derived from historical treatises and manuals dating to the 16th–18th century. The following describes the commonalities of recorder technique across all time periods.
In normal playing position, the recorder is held with both hands, covering the fingerholes or depressing the keys with the pads of the fingers: four fingers on the lower hand, and the index, middle and ring fingers and thumb on the upper hand. In standard modern practice, the right hand is the lower hand, while the left hand is the upper hand, although this was not standardized before the modern revival of the recorder.
The recorder is supported by the lips, which loosely seal around the beak of the instrument, the thumb of the lower hand, and, depending on the note fingered, by the other fingers and the upper thumb. A practice documented in many historical fingering charts is the use of finger seven or eight to support the recorder when playing notes for which the coverage of this hole negligibly affects the sounding pitch (e.g. notes with many holes uncovered). Larger recorders may have a thumbrest, or a neckstrap for extra support, and may use a bocal to direct air from the player's mouth to the windway.
Recorders are typically held at an angle between vertical and horizontal, the attitude depending on the size and weight of the recorder, and personal preference.
Pitches are produced on the recorder by covering the holes while blowing into the instrument. Modern terminology refers to the holes on the front of the instrument using the numbers 1 through 7, starting with the hole closest to the beak, with the thumbhole numbered hole 0. At the most basic level, the fingering technique of the recorder involves the sequential uncovering of the holes from lowest to highest (i.e., uncovering 7, then uncovering 7 and 6, then uncovering 7, 6 and 5, etc.) producing ever higher pitches. In practice, however, the uncovering of the holes is not strictly sequential, and the half covering or uncovering of holes is an essential part of recorder technique.
A forked fingering is a fingering in which an open hole has covered holes below it: fingerings for which the uncovering of the holes is not sequential. For example, the fingering 0123 is not a forked fingering, while 0123 56 is a forked fingering because the open hole 4 has covered holes (5 and 6) below it. Forked fingerings allow for smaller adjustments in pitch than the sequential uncovering of holes alone would allow. For example, at the same air speed the fingering 0123 5 sounds higher than 01234 but lower than 0123. Many standard recorder fingerings are forked fingerings. Forked fingerings may also be used to produce microtonal variations in pitch.
Forked fingerings have a different harmonic profile from non-forked fingerings, and are generally regarded as having a weaker sound. Forked fingerings that have a different tone color or are slightly sharp or flat can provide so-called "alternate fingerings". For example, the fingering 0123 has a slightly sharper forked variant 012 4567.
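The forked-fingering rule described above (an open hole with covered holes below it) can be stated compactly in code. The following Python sketch is purely illustrative and uses the hole-numbering convention from this section (0 for the thumb hole, 1 through 7 for the front holes); the helper name `is_forked` is invented for this example, not drawn from recorder literature.

```python
# Illustrative model (not from the text): represent a fingering as the set
# of covered holes, numbered 0 (thumb) to 7, following the convention above.
def is_forked(covered):
    """Return True if some open front hole has a covered hole below it
    (a higher hole number, i.e. farther from the beak)."""
    open_holes = set(range(1, 8)) - set(covered)
    return any(c > o for o in open_holes for c in covered)

print(is_forked({0, 1, 2, 3}))           # sequential fingering 0123 -> False
print(is_forked({0, 1, 2, 3, 5, 6}))     # forked fingering 0123 56  -> True
```

Under this model, 0123 5 and 012 4567 from the text also test as forked, since each leaves an open hole with covered holes beneath it.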
Partial covering of the holes is an essential part of the playing technique of all recorders. This is variously known as "leaking," "shading," "half-holing," and in the context of the thumb hole, "pinching".
The primary function of the thumbhole is to serve as an octaving vent. When it is leaked, the first mode of vibration of the air column becomes unstable: i.e., the register changes. In most recorders, this is required for the playing of every note higher than a ninth above the lowest note. The player must adjust the position of the thumb for these notes to sound stably and in tune.
The partial opening of the thumbhole may be achieved by sliding or rolling the thumb off the hole, or by bending the thumb at the first knuckle. To partially uncover a covered hole, the player may slide the finger off the hole, bend or roll the finger away from the hole, gently lift the finger from the hole, or a combination of these. To partially cover an open hole, the reverse is possible.
Generally speaking, the partial opening of covered fingerholes raises the pitch of the sounding note while the partial closure of open fingerholes lowers the pitch.
On most "baroque" modeled modern recorders, the lower two fingers of the lower hand actually cover two holes each (called "double holes"). Whereas on the vast majority of baroque recorders and all earlier recorders these two fingers covered a single hole ("single holes"), double holes have become standard for baroque modeled modern recorders. By covering one or both of these two, smaller holes, a recorder player can play the notes a semitone above the lowest note and a minor third above the lowest note, notes that are possible on single holed recorders only through the partial covering of those holes, or the covering of the bell.
The open end of the bore facing away from the player (the "bell") may be covered to produce extra notes or effects. Because both hands are typically engaged in holding the recorder or covering the finger holes, the covering of the bell is normally achieved by bringing the end of the recorder in contact with the leg or knee, typically achieved through a combination of bending of the torso and/or raising of the knee. Alternatively, in rare cases instruments may be equipped with a key designed to cover the bell ("bell key"), operated by one of the fingers, typically the pinky finger of the upper hand, which is not normally used to cover a hole. Fingerings with a covered bell extend the recorder's chromatic playable range above and below the nominal fingered range.
The pitch and volume of the recorder sound are influenced by the speed of the air travelling through the windway, which may be controlled by varying the breath pressure and the shape of the vocal tract. The sound is also affected by the turbulence of the air entering the recorder. Generally speaking, faster air in the windway produces a higher pitch: blowing harder causes a note to go sharp, whereas blowing it more gently causes it to go flat. Knowledge of this fact, and of the recorder's individual tonal differences over its full range, helps recorder players stay in tune with other instruments by anticipating which notes will need slightly more or less air. As mentioned above at "Harmonic profile", blowing much harder can result in overblowing.
The technique of inhalation and exhalation for the recorder differs from that of many other wind instruments in that the recorder requires very little air pressure to produce a sound, unlike reed or brasswind instruments. Thus, it is often necessary for a recorder player to produce long, controlled streams of air at a very low pressure. Recorder breathing technique focuses on the controlled release of air rather than on maintaining diaphragmatic pressure.
The use of the tongue to stop and start the air is called "articulation". In this capacity, the tongue has two basic functions: to control the start of the note (the attack) and the end, or the length, of the note (legato, staccato). Articulations are roughly analogous to consonants. Practically any consonant that may be produced with the tongue, mouth, and throat may be used to articulate on the recorder. Transliterations of common articulation patterns include "du du du du" (using the tip of the tongue, "single tonguing"), "du gu du gu" (alternating between the tip and the back of the tongue, "double tonguing"), and "du g'll du g'll" (articulation with the tip and the sides of the tongue, "double tonguing"). The attack of the note is governed by such factors as the pressure buildup behind the tongue and the shape of the articulant, while the length of the note is governed by the stoppage of the air by the tongue. Each articulation pattern has a different natural pattern of attack and length, and recorder technique seeks to produce a wide variety of lengths and attacks using these articulation patterns. Patterns such as these have been used since at least the time of Ganassi (1535).
Mouth and throat shapes are roughly analogous to vowels. The shape of the vocal tract affects the velocity and turbulence of the air entering the recorder. The shape of the mouth and vocal tract is closely related to the consonant used to articulate.
The player must coordinate fingers and tongue to align articulations with finger movements. In normal play, articulated attacks should align with the proper fingering, even in legato passages or in difficult finger transitions; the fingers move in the brief silence between notes (silence d'articulation) created by the stoppage of the air by the tongue.
Both fingers and the breath can be used to control the pitch of the recorder. Coordinating the two is essential to playing the recorder in tune and with a variety of dynamics and timbres. On an elementary level, breath pressure and fingering must accord with each other to produce an in-tune pitch. As an example of a more advanced form of coordination, a gradual increase in breath pressure combined with the shading of holes, when properly coordinated, results in an increase in volume and a change in tone color without a change in pitch. The reverse is also possible: decreasing breath pressure while gradually lifting the fingers.
Note 1: See the section Types of recorder concerning recorders in C or in F.
Note 2: Individual recorders may need this hole closed (●), half closed (◐), or open (○) to play the note in tune.
● means to cover the hole. ○ means to uncover the hole. ◐ means half-cover.
The range of a modern "baroque" model recorder is usually considered two octaves and a tone. See the table above for "English" fingerings for the standard range. The numbers at the top correspond to the fingers and the holes on the recorder. The vast majority of recorders manufactured today are designed to play using these fingerings, with slight variations. Nonetheless, recorder fingerings vary widely between models and are mutable even for a single recorder: recorder players may use three or more fingerings for the same note, along with partial covering of the holes, to achieve proper intonation in coordination with the breath, or in faster passages where some fingerings are unavailable. This chart is a general guide: a definitive or complete fingering chart for the recorder would be an impossible task to compile. Rather, the chart is the basis for a much more complex fingering system, which is still being added to today.
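The chart legend above (● cover, ◐ half-cover, ○ uncover) maps directly onto the hole numbering described earlier. As a purely illustrative sketch, the following Python helper (the name `chart_row` is invented for this example) renders one row of such a chart from a set of covered and half-covered holes.

```python
# Hypothetical helper (not a standard notation library): render one row of a
# fingering chart using the legend: ● covered, ◐ half-covered, ○ open.
def chart_row(covered, half=()):
    covered, half = set(covered), set(half)
    symbols = []
    for hole in range(8):  # 0 = thumb hole, 1-7 = front holes
        if hole in half:
            symbols.append("◐")
        elif hole in covered:
            symbols.append("●")
        else:
            symbols.append("○")
    return " ".join(symbols)

# All holes covered: the recorder's lowest note.
print(chart_row(covered=range(8)))          # ● ● ● ● ● ● ● ●
# Illustrative row with a "pinched" (half-covered) thumb hole.
print(chart_row(covered={1, 2}, half={0}))  # ◐ ● ● ○ ○ ○ ○ ○
```

The second row is illustrative only; actual second-octave fingerings vary by instrument, as the surrounding text notes.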
Some fonts show miniature glyphs of complete recorder fingering charts in TrueType format. Because there are no Unicode values for complete recorder fingering charts, these fonts are custom encoded.
The earliest extant duct flutes date to the neolithic. They are found in almost every musical tradition around the world. Recorders are distinguished from other duct flutes primarily by the thumb hole, which is used as an octaving vent, and the presence of seven finger holes, although classification of early instruments has proved controversial. The performing practice of the recorder in its earliest history is not well documented, owing to the lack of surviving records from the time.
Our present knowledge of the structure of recorders in the Middle Ages is based on a small number of instruments preserved and artworks, or iconography, from the period.
Surviving instruments from the Middle Ages are heterogeneous.
The first medieval recorder discovered was a fruitwood instrument ("Dordrecht recorder") excavated in 1940 from the moat surrounding the castle "Huis te Merwede" ("House on the Merwede") near the town of Dordrecht in the Netherlands. The castle was inhabited only from 1335 to 1418. As the area was not disturbed until the modern excavation, the recorder has been dated to the period of occupation of the castle. The instrument has a cylindrical bore about in diameter and is about long with a vibrating air column of about . The block has survived, but the labium is damaged, making the instrument unplayable. The instrument has tenons on both ends, suggesting the presence of now-lost ferrules or turnings. Uncertainty regarding the nature of these fittings has hindered reconstruction of the instrument's original state.
A second, structurally different instrument ("Göttingen recorder") was discovered in 1987 in an archaeological excavation of the latrine of a medieval house in Göttingen, Germany. It has been dated to between 1246 and 1322. It is made of fruitwood in one piece, with turnings, measuring about long. It has a cylindrical bore about at the highest measurable point, narrowing to between the first and second finger holes, to between the second and third finger holes, and contracting to at the seventh hole. The bore expands to at the bottom of the instrument, which has a bulbous foot. Unusually, the finger holes taper conically outwards, the opposite of the undercutting found in Baroque recorders. The top of the instrument is damaged: only a cut side of the windway survives, and the block has been lost. A reconstruction by Hans Reiners has a strident, penetrating sound rich in overtones and a range of two octaves. With the thumb hole and the first three finger holes covered, the reconstruction produces a pitch of ca. 450 Hz.
In the 21st century, a number of other instruments and fragments dated to the medieval period have come to light. These include a 14th-century fragment of a headjoint excavated in Esslingen, Germany ("Esslingen fragment"); a birch instrument dated to the second half of the 14th century unearthed in Tartu, Estonia ("Tartu recorder"); and a fruitwood instrument dated to the 15th century, found in Elbląg, Poland ("Elbląg recorder").
Common features of the surviving instruments include: a narrow cylindrical bore (except the Göttingen recorder); a doubled seventh hole for the little finger of the lower hand to allow for right- or left-handed playing (except the Tartu recorder); a seventh hole that produces a semitone instead of a tone; and a flat or truncated head, instead of the narrow beak found on later instruments. Additionally, the Esslingen fragment has turnings similar to the Göttingen recorder. No complete instruments larger than have survived, although the Esslingen fragment may represent a larger recorder.
The widely spaced doubled seventh hole persisted in later instruments. According to Virdung (1511), the hole that was not used was plugged with wax. It was not until the Baroque period, when instruments with adjustable footjoints were developed, that widely spaced double holes became obsolete.
The classification of these instruments is complicated primarily by the fact that the seventh hole produces a semitone instead of a tone. As a result, chromatic fingerings are difficult, and require extensive half-holing. These instruments share similarities with the six-holed flageolet, which used three fingers on each hand and had no thumb hole. Anthony Rowland-Jones has suggested that the thumb hole on these early flutes was an improvement upon the flageolet to provide a stronger fingering for the note an octave above the tonic, while the seventh finger hole provided a leading tone to the tonic. As a result, he has suggested that these flutes should be described as improved flageolets, and has proposed the condition that true recorders produce a tone (rather than a semitone) when the seventh finger is lifted.
Controversy aside, there is little question that these instruments are at least precursors to later instruments that are indisputably recorders. Because there is sparse documentary evidence from the earliest history of the instrument, such questions may never be resolved. Indeed, historically there was no need for an all-inclusive definition that encompassed every form of the instrument past and present.
Recorders with a cylindrical profile are depicted in many medieval paintings; however, their appearance does not easily correspond to the surviving instruments, and may be stylized. The earliest depictions of the recorder are probably in "The Mocking of Christ" from the monastery church of St George in Staro Nagoričano near Kumanovo, Macedonia (the painting of the church began in 1315), in which a man plays a cylindrical recorder; and the center panel of the "Virgin and Child" attributed to Pedro (Pere) Serra (c. 1390), painted for the church of S. Clara, Tortosa, now in the Museu Nacional d'Art de Catalunya, Barcelona, in which a group of angels play musical instruments around the Virgin Mary, one of them playing a cylindrical recorder.
Starting in the Middle Ages, angels have frequently been depicted playing one or more recorders, often grouped around the Virgin, and in several notable paintings trios of angels play recorders. This is perhaps a sign of the Trinity, though it may simply reflect that the music of the time was often in three parts.
No music marked for the recorder survives from before 1500. Groups of recorder players, or recorder-playing angels, particularly trios, are depicted in paintings from the 15th century, indicating that the recorder was used in these configurations as well as with other instruments. Some of the earliest music played on the recorder was probably vocal repertory.
Modern recorder players have taken up the practice of playing instrumental music from the period, perhaps anachronistically, such as the monophonic estampies from the Chansonnier du Roi (13th century), Add MS 29987 (14th or 15th century), or the Codex Faenza (15th century), and have arranged keyboard music, such as the estampies from the Robertsbridge codex (14th century), or the vocal works of composers such as Guillaume de Machaut and Johannes Ciconia, for recorder ensembles.
In the 16th century, the structure, repertoire, and performing practice of the recorder is better documented than in prior epochs. The recorder was one of the most important wind instruments of the Renaissance, and many instruments dating to the 16th century survive, including some matched consorts. This period also produced the first extant books describing the recorder, including the treatises of Virdung (1511), Agricola (1529), Ganassi (1535), Cardano (c.1546), Jambe de Fer (1556), and Praetorius (1619). Nonetheless, understanding of the instrument and its practice in this period is still developing.
In the 16th century, the recorder saw important developments in its structure. As with the recorders of the Middle Ages, the etiology of these changes remains uncertain; development was regional, and multiple types of recorder existed simultaneously. Our knowledge is based on documentary sources and surviving instruments.
Far more recorders survive from the Renaissance than from the Middle Ages. Most of the surviving instruments from the period have a wide, cylindrical bore from the blockline to the uppermost fingerhole, an inverted conical portion down to around the lowest finger hole (the "choke"), then a slight flare to the bell. Externally, they have a curved shape similar to the bore, with a profile like a stretched hourglass. Their sound is warm, rich in harmonics, and somewhat introverted. Surviving consorts of this type, identified by their makers' marks, include those marked "HIER S•" or "HIE•S" found in Vienna, Sibiu and Verona, and those marked with variations on a rabbit's footprint, designated "!!" by Adrian Brown, which are dispersed among various museums. This type of recorder is described by Praetorius in "De Organographia" (1619). A surviving consort by "!!" follows the exact size configuration suggested by Praetorius: stacked fifths up from the basset in F3, and down a fifth then a fourth to bass in B2 and great bass in F2. Instruments marked "HIER S•" or "HIE•S" are in stacked fifths from great bass in F2 to soprano in E5. Many of these instruments are pitched around A = 440 Hz or A = 466 Hz, although little pitch standardization existed in the period, and pitch varied regionally and between consorts.
The range of this type is normally an octave plus a minor seventh, but as remarked by Praetorius (1619) and demonstrated in the fingering tables of Ganassi's "Fontegara" (1535), experienced players on particular instruments were capable of playing up to a fourth or even a seventh higher. Their range is more suitable for the performance of vocal music than of purely instrumental music. This type is the recorder typically referred to as the "normal" Renaissance recorder; however, this modern appellation does not fully capture the heterogeneity of instruments of the 16th century.
Another surviving Renaissance type has a narrow cylindrical bore and cylindrical profile like the medieval exemplars, but a choke at the last hole. The earliest surviving recorders of this type were made by the Rafi family, instrument makers active in Lyons in Southern France in the early 16th century. Two recorders marked "C.RAFI" were acquired by the Accademia Filarmonica, Bologna in 1546, where they remain today. A consort of recorders of similar make, marked "P.GRE/C/E," was donated to the Accademia in 1675, expanding the pair marked "C.RAFI". Other recorders by the Rafi family survive in Northern Europe, notably a pair in Brussels. It is possible that Grece worked in the Rafi workshop, or was a member of the Rafi family. The pitch of the Rafi/Grece instruments is around A = 440 Hz. They have a relatively quiet sound with good pitch stability, favoring dynamic expression.
In 1556, French author Philibert Jambe de Fer gave a set of fingerings for hybrid instruments such as the Rafi and Grece instruments that give a range of two octaves. Here, the 15th was now produced, as on most later recorders, as a variant of the 14th instead of as the fourth harmonic of the tonic, as in Ganassi's tables.
The first two treatises of the 16th century show recorders that differ from the surviving instruments dating to the century: these are Sebastian Virdung's (b. 1465?) "Musica getutscht" (1511), and Martin Agricola's (1486–1556) similar "Musica instrumentalis deudsch" (1529), published in Basel and Saxony respectively.
"Musica Getutscht", the earliest printed treatise on western musical instruments, is an extract of an earlier, now lost, manuscript treatise by Virdung, a chaplain, singer, and itinerant musician. The printed version was written in a vernacular form of Early New High German, and was aimed at wealthy urban amateur musicians: the title translates, briefly, as "Music, translated into German ... Everything there is to know about [music] – made simple." When a topic becomes too complex for Virdung to discuss briefly, he refers the reader to his lost larger work, an unhelpful practice for modern readers. While the illustrations have been called "maddeningly inaccurate" and his perspectives quirky, Virdung's treatise gives us an important source on the structure and performing practice of the recorder in northern Europe in the late 15th and early 16th centuries.
The recorders described by Virdung have cylindrical profiles with flat heads, narrow windows and long ramps, ring-like turnings on the feet, and a slight external flare at the bell (above, far left and middle left). Virdung depicts four recorders together: a "baßcontra" or "bassus" (basset) in F3 with an anchor shaped key and a perforated fontanelle, two tenors in C4 and a "discantus" (alto) in G4. According to Virdung, the configurations F–C–C–G or F–C–G–G should be used for four-part music, depending on the range of the bass part. As previously mentioned, the accuracy of these woodcuts cannot be verified as no recorders fitting this description survive. Virdung also provides the first ever fingering chart for a recorder with a range of an octave and a seventh, though he says that the bass had a range of only an octave and sixth. In his fingering chart, he numbers which fingers to lift rather than those to put down and, unlike in later charts, numbers them from bottom (1) to top (8). His only other technical instruction is that the player must blow into the instrument and "learn how to coordinate the articulations ... with the fingers".
Martin Agricola's "Musica instrumentalis Deudsch" ("A German instrumental music, in which is contained how to learn to play ... all kinds of ... instruments"), written in rhyming German verse (ostensibly to improve the understanding and retention of its contents), provides a similar account and copies most of its woodcuts directly from "Getutscht". Agricola also calls the tenor "altus," mistakenly depicting it as a little smaller than the tenor in the woodcut (above, middle right). Like Virdung, Agricola takes it for granted that recorders should be played in four-part consorts. Unlike "Getutscht", which provides a single condensed fingering chart, Agricola provides separate, slightly differing, fingering charts for each instrument, leading some to suppose that Agricola experimented on three different instruments, rather than copying the fingerings from one size to the other two. Agricola adds that graces ("Mordanten"), which make the melody "subtil", must be learned from a professional ("Pfeiffer"), and that the manner of ornamentation ("Coloratur") of the organist is best of all. A substantial 1545 revision of "Musica Instrumentalis" approvingly mentions the use of vibrato ("zitterndem Wind") for woodwind instruments, and includes an account of articulation, recommending the syllables "de" for semiminims and larger, "di ri" for semiminims and smaller, and the articulation "tell ell ell ell el le", which he calls the "flutter-tongue" "(flitter zunge)" for the smallest of note values, found in "passagi (Colorirn)".
The next treatise comes from Venice: Silvestro Ganassi dal Fontego's (1492–mid-1500s) "Opera Intitulata Fontegara" (1535), which is the first work to focus specifically on the technique of playing the recorder, and perhaps the only historical treatise ever published that approaches a description of a professional or virtuoso playing technique. Ganassi was a musician employed by the Doge and at the Basilica di San Marco at the time of the work's publication, an indication of his high level of accomplishment, and later wrote two works on playing the viol and the violone, although he does not mention being employed by the Doge after "Fontegara".
"Fontegara" can be broadly divided into two parts: the first concerns the technique of playing the recorder, the second demonstrates divisions (regole, passagi, ornaments), some of great complexity, which the player may use to ornament a melody or, literally, "divide" it into smaller notes. In all aspects, Ganassi emphasizes the importance of imitating the human voice, declaring that "the aim of the recorder player is to imitate as closely as possible all the capabilities of the human voice", maintaining that the recorder is indeed able to do this. For Ganassi, imitation of the voice has three aspects: "a certain artistic proficiency," which seems to be the ability to perceive the nature of the music, prontezza (dexterity or fluency), achieved "by varying the pressure of the breath and shading the tone by means of suitable fingering," and galanteria (elegance or grace), achieved by articulation, and by the use of ornaments, the "simplest ingredient" of them being the trill, which varies according to the expression.
Ganassi gives fingering tables for a range of an octave and a seventh, the standard range also remarked by Praetorius, then tells the reader that he has discovered, through long experimentation, more notes not known to other players due to their lack of perseverance, extending the range to two octaves and a sixth. Ganassi gives fingerings for three recorders with different makers marks, and advises the reader to experiment with different fingerings, as recorders vary in their bore. The makers mark of one of the recorders, in the form of a stylized letter "A", has been associated with the Schnitzer family of instrument makers in Germany, leading Hermann Moeck to suppose that Ganassi's recorder might have been Northern European in origin. (see also Note on "Ganassi" recorders)
Ganassi uses three basic kinds of syllables, "te che", "te re", and "le re", and also varies the vowel used with the syllable, suggesting the effect of mouth shape on the sound of the recorder. He gives many combinations of these syllables and vowels, and suggests choosing among the syllables according to their smoothness, "te che" being the least smooth and "le re" the most. He does not, however, demonstrate how the syllables should be applied to music.
Most of the treatise consists of tables of diminutions of intervals, small melodies and cadences, categorized by their meter. These several hundred divisions use quintuplets, septuplets, note values from whole notes to 32nd notes in modern notation, and demonstrate immense variety and complexity.
The frontispiece to "Fontegara" shows three recorder players playing together with two singers. Like Agricola and Virdung, Ganassi takes for granted that recorders should be played in groups of four, and come in three sizes: F3, C4 and G4. He makes a distinction between solo playing and ensemble playing, noting that what he has said is for solo players, and that when playing with others, it is most important to match them. Unfortunately, Ganassi gives only a few ornamented examples with little context for their use. Nonetheless, he offers a tantalizing glimpse of a highly developed professional culture and technique of woodwind playing that modern players can scarcely be said to have improved upon.
Gerolamo Cardano's "De Musica" was written around 1546, but not published until 1663, when it appeared along with other works by Cardan, an eminent philosopher, mathematician and physician as well as a keen amateur recorder player who had learned from a professional teacher, Leo Oglonus, as a child in Milan.
His account corroborates that of Ganassi, using the same three basic syllables and emphasizing the importance of breath control and ornamentation in recorder playing, but also documents several aspects of recorder technique otherwise undocumented until the 20th century. These include multiple techniques using the partial closing of the bell: to produce a tone or semitone below the tonic, and to change semitones into dieses (half semitones), which he says can also be produced by "repercussively bending back the tongue". He also adds that the position of the tongue, either extended or turned up towards the palate, can be used to improve, vary, and color notes. He is the first to differentiate between the amount of the breath (full, shallow, or moderate) and the force (relaxed or slow, intense, and the median between them) as well as the different amount of air required for each instrument, and describes a trill or vibrato called a "vox tremula" in which "a tremulous quality in the breath" is combined with a trilling of the fingers to vary the interval from anything between a major third and a diesis. He is also the first writer to mention the recorder in D ("discantus"), which he leaves unnamed.
Composer and singer Philibert Jambe de Fer (c. 1515 – c. 1566) was the only French author of the 16th century to write about the recorder, in his "Epitome musical". He complains of the French name for the instrument, "fleutte à neuf trouz" ("flute with nine holes"), as, in practice, one of the lowermost holes must be plugged, leaving only eight open holes. He prefers "fleute d'Italien" or the Italian "flauto". His fingering chart is notable for two reasons: first, for describing fingerings with the 15th produced as a variant on the 14th, and second, for using the third finger of the lower hand as a buttress finger, although only for three notes in the lower octave. (see also Renaissance structure)
Aurelio Virgiliano's "Il dolcimelo" (c. 1600) presents ricercars intended for or playable on the recorder, a description of other musical instruments, and a fingering chart for a recorder in G4 similar to Jambe de Fer's.
The "Syntagma musicum" (1614–20) of Michael Praetorius (1571–1621) in three volumes (a fourth was intended but never finished) is an encyclopedic survey of music and musical instruments. Volume II, "De Organographia" (1619) is of particular interest for its description of no fewer than eight sizes of recorder ("klein Flötlein" or "exilent" in G5, "discant" in C5 or D5, "alt" in G4, "tenor" in C4, "basset" in F3, "bass" in B2, and "grossbass" in F2) as well as the four-holed "gar kleine Plockflötlein".
Praetorius was the first author to explain that recorders can confuse the ear into believing that they sound an octave lower than their true pitch, a phenomenon that has more recently been explained in relation to the recorder's lack of high harmonics. He also shows the different "registers" of consort possible: 2′ (discant, alt, and tenor), 4′ (alt, tenor, and basset), and 8′ (tenor, basset, and bass) (see also Nomenclature). Additionally, he proposed cutting the recorder between the beak and the first finger hole to allow for a kind of tuning slide to raise or lower its pitch, similar to the Baroque practice of adjusting a recorder's pitch by "pulling out" the top joint.
The recorders described in Praetorius are of the "stretched hourglass" profile (see above, far right). He gives fingerings like those of Ganassi, and remarks that they normally have a range of an octave and a sixth, although exceptional players could extend that range by a fourth.
Some paintings from the 14th and 15th centuries depict musicians playing what appear to be two end-blown flutes simultaneously. In some cases, the two flutes are evidently disjoint, separate flutes of similar make, played angled away from each other, one pipe in each hand. In others, flutes of the same length have differing hand positions. In a final case, the pipes are parallel, in contact with each other, and differ in length. While the iconographic criteria for a recorder are typically a clearly recognizable labium and a double handed vertical playing technique, such criteria are not prescriptive, and it is uncertain whether any of these depictions should be considered a single instrument, or constitute a kind of recorder. The identification of the instrument depicted is further complicated by the symbolism of the aulos, a double piped instrument associated with the satyr Marsyas of Greek mythology.
An instrument consisting of two attached, parallel, end-blown flutes of differing length, dating to the 15th or 16th century, was found in poor condition near All Souls College in Oxford. The instrument has four finger holes and a thumb hole for each hand. The pipes have an inverted conical "choke" bore (see Renaissance structure). Bob Marvin has estimated that the pipes played a fifth apart, at approximately C5 and G5. The instrument is sui generis: although its pipes have thumb holes, the lack of organological precedent makes classification difficult. Marvin has used the terms "double recorder" and the categorization-agnostic "flauto doppio" (double flute) to describe the Oxford instrument.
Marvin has designed a "flauto doppio" based on the Oxford instrument, scaled to play at F4 and C5. Italian recorder maker Francesco Livirghi has designed a double recorder or "flauto doppio" with connected, angled pipes of the same length but played with different hand positions, based on iconographic sources. Its pipes play at F4 and B4. Both instruments use fingerings of the makers' design.
In the 1970s, when recorder makers began to make the first modern copies of recorders from the 16th and 17th centuries, such models were not always representative of the playing characteristics of the original instruments. Especially notable is Fred Morgan's much-copied "Ganassi" model, based loosely on an instrument in the Vienna Kunsthistorisches Museum (inventory number SAM 135), which was designed to use the fingerings for the highest notes in Ganassi's tables in Fontegara. As Morgan knew, these notes were not in standard use; indeed, Ganassi uses them in only a few of the hundreds of diminutions contained in Fontegara. Historically, such recorders did not exist as a distinct type, and the fingerings given by Ganassi were those of a skilled player particularly familiar with his instruments. When modern music is written for "Ganassi recorders", it is this type of recorder that is meant.
Recorders were probably first used to play vocal music, later adding purely instrumental forms such as dance music to their repertoire. Much of the vocal music of the 15th, 16th and 17th centuries can be played on recorder consorts, and as illustrated in treatises from Virdung to Praetorius, the choice of appropriate instruments and transpositions to play vocal music was common practice in the Renaissance. Additionally, some collections, such as those of Pierre Attaingnant and Anthony Holborne, indicate that their instrumental music was suitable for recorder consorts. This section first discusses repertoire marked for the recorder, then, briefly, other repertoire played on recorder.
In 1505 Giovanni Alvise, a Venetian wind player, offered Francesco Gonzaga of Mantua a motet for eight recorders; however, the work has not survived.
Pierre Attaingnant's (1528–1549) "Vingt & sept chansons musicales a quatre parties a la fleuste dallement...et a la fleuste a neuf trous" (1533) collects 28 (not 27, as in the title) four-part instrumental motets, nine of which he says were suitable for performance on flutes ("fleustes dallement", German flutes), two on recorders ("fleustes a neuf trous", nine-holed flutes, i.e. recorders), and twelve suitable for both. Of the twelve marked for both, seven use "chiavi naturali", or low clefs typically used for recorders, while the others use the "chiavette" clefs used in the motets marked for flutes. Hence, the seven notated in "chiavi naturali" could be considered more appropriate for recorders. "Vingt et sept chansons" is the first published music marked for a recorder consort. An earlier example is a part for Jacobus Barbireau's song "Een vrolic wesen", apparently for recorder, accompanying the recorder fingering chart in "Livre plaisant et tres utile..." (Antwerp, 1529), a partial French translation of Virdung's "Musica getutscht".
Jacques Moderne's "S'ensuyvent plusieurs basses dances tant communes que incommunes", published in the 1530s, depicts a four-part recorder consort such as those described in Virdung, Agricola, Ganassi and others; however, the dances are not marked for recorders. His "Musique de joye" (1550) contains ricercares and dances for performance on "espinetes, violons & fleustes".
In 1539–40, Henry VIII of England, also a keen amateur player (see Cultural significance), imported five brothers of the Bassano family from Venice to form a consort, expanded to six members in 1550, forming a group that maintained an exceptional focus on the recorder until at least 1630, when the recorder consort was combined with the other wind groups. Most wind bands consisted of players playing sackbutts, shawms, and other loud instruments, doubling on recorder. Some music probably intended for this group survives, including dance music by Augustine and Geronimo Bassano from the third quarter of the 16th century, and the more elaborate fantasias of Jeronimo Bassano (1580), four in five parts and one in six parts. Additionally, the Fitzwilliam wind manuscript ("GB-Cfm" 734) contains wordless motets, madrigals and dance pieces, including some by the Bassano family, probably intended for a recorder consort in six parts.
The English members of the Bassano family, having originated in Venice, were also probably familiar with the vocal style, advanced technique, and complex improvised ornamentation described in Ganassi's "Fontegara", and they were probably among the recorder players with whom Ganassi reports having worked and studied: when they were brought to England, they were regarded as some of the best wind players in Venice. While most of the music attributed to the consort uses only a range of a thirteenth, it is possible that the Bassanos were familiar with Ganassi's extended range.
Recorders were also played with other instruments, especially in England, where it was called a mixed consort or "broken consort".
Other 16th-century composers whose instrumental music can be played well on recorder consorts include
Other notable composers of the Renaissance whose music may be played on the recorder include
The recorder achieved great popularity in the 16th century, and is one of the most common instruments of the Renaissance. From the 15th century onwards, paintings show upper-class men and women playing the recorder, and Virdung's didactic treatise "Musica getutscht" (1511), the first of its kind, was aimed at the amateur (see also Documentary evidence). Famously, Henry VIII of England was an avid player of the recorder, and at his death in 1547 an inventory of his possessions included 76 recorders in consorts of various sizes and materials. Some Italian paintings from the 16th century show aristocracy of both sexes playing the recorder; however, many gentlemen found it unbecoming to play because it uses the mouth, preferring the lute and later the viol.
At the turn of the 17th century, playwright William Shakespeare famously referenced the recorder in his most substantial play, "The Tragedy of Hamlet, Prince of Denmark," creating an extended metaphor between manipulation and playing a musical instrument. Poet John Milton also referenced the recorder in his most famous work, the epic poem Paradise Lost published in 1667, in which the recently fallen angels in Hell "move / in perfect phalanx to the Dorian mood / of flutes and soft recorders," recalling both the affect of the Dorian mode as the mode of calling to action, and the use of flutes by the Spartans of ancient Greece, although the specification of the recorder is anachronistic in this context.
Several changes in the construction of recorders took place in the 17th century, resulting in the type of instrument generally referred to as "Baroque" recorders, as opposed to the earlier "Renaissance" recorders. These innovations allowed baroque recorders to possess a tone regarded as "sweeter" than that of the earlier instruments, at the expense of a reduction in volume, particularly in the lowest notes.
The evolution of the Renaissance recorder into the Baroque instrument is generally attributed to the Hotteterre family, in France. They developed the ideas of a more tapered bore, bringing the finger-holes of the lowermost hand closer together, allowing greater range, and enabling the construction of instruments in several jointed sections. The last innovation allowed more accurate shaping of each section and also offered the player minor tuning adjustments, by slightly pulling out one of the sections to lengthen the instrument.
The French innovations were taken to London by Pierre Bressan, a set of whose instruments survive in the Grosvenor Museum, Chester, as do other examples in various American, European and Japanese museums and private collections. Bressan's contemporary, Thomas Stanesby, was born in Derbyshire but became an instrument maker in London. He and his son (Thomas Stanesby junior) were the other important British-based recorder-makers of the early 18th century.
In continental Europe, the Denner family of Nuremberg were the most celebrated makers of this period.
The baroque recorder produces its most brilliant and projecting sound in the second octave, which is more facile and extended than that of earlier recorders, while the lowest notes in its range are relatively weak. Composers such as Bach, Telemann and Vivaldi exploit this property in their concertos for the instrument.
Measured from its lowest to its highest playable note, the baroque alto recorder has a range of at most two octaves and a fifth, with many instruments having a smaller range. Even the most developed instruments of the period, however, cannot produce the augmented tonic, third and fourth of the third octave. Notably, Georg Philipp Telemann's concerto TWV 51:F1 makes use of some of these notes in the third octave, posing significant technical challenges to the player, perhaps requiring the covering of the bell or other unusual techniques.
During the baroque period, the recorder was traditionally associated with pastoral scenes, miraculous events, funerals, marriages, and amorous scenes. Images of recorders can be found in literature and artwork associated with all of these. Purcell, J. S. Bach, Telemann, and Vivaldi used the recorder to suggest shepherds and imitate birds in their music.
Although the recorder achieved a greater level of standardization in the Baroque than in previous periods (indeed, it is the first period in which there was a "standard" size of recorder), ambiguous nomenclature and uncertain organological evidence have led to controversy regarding which instruments should be used in some "flute" parts from the period.
The concertino group of Bach's fourth Brandenburg Concerto in G major, BWV 1049, consists of a "violino principale" and two "fiauti d'echo", with ripieno strings. His later harpsichord transcription of this concerto, BWV 1057, lowers the key by a tone, as in all of Bach's harpsichord transcriptions, and is scored for solo harpsichord, two "fiauti à bec" and ripieno strings. The desired instrument for the "fiauti d'echo" parts in BWV 1049 has been a matter of perennial musicological and organological debate, for two primary reasons: first, the term "fiauto d'echo" is not mentioned in dictionaries or tutors of the period; and second, the first "fiauto" part uses F#6, a note which is difficult to produce on a Baroque alto recorder in F4.
The instrumentation of BWV 1057 is uncontroversial: "fiauti à bec" unambiguously specifies recorders, and both parts have been modified to fit comfortably on altos in F4, avoiding, for example, an unplayable Eb4 in the second "fiauto" that would have resulted from a simple transposition of a tone.
For the first and last movements of the concerto, two opinions predominate: first, that both recorder parts should be played on alto recorders in F4; and second, that the first part should be played on an alto recorder in G and the second part on an alto in F. Tushaar Power has argued for the alto in G4 on the basis that Bach uses the high F#6, which can be easily played on an alto in G4, but not the low F4, a note not playable on the alto in G4. He corroborates this with other alto recorder parts in Bach's cantatas. Michael Marissen reads the repertoire differently, demonstrating that in other recorder parts, Bach used both the low F4 and F#6, as well as higher notes. Marissen argues that Bach was not as consistent as Power asserts, and that Bach would have almost certainly had access to only altos in F. He corroborates this with examinations of pitch standards and notation in Bach's cantatas, in which the recorder parts are sometimes written as transposing instruments to play with organs that sounded as much as a minor third above written pitch. Marissen also reads Bach's revisions to the recorder parts in BWV 1057 as indicative of his avoidance of F#6 in BWV 1049, a sign that he only used the difficult note when necessary in designing the part for an alto recorder in F4. He posits that Bach avoided F#6 in BWV 1049, at the cost of inferior counterpoint, reinstating them as E6 in BWV 1057.
In the second movement, breaking of beaming in the "fiauto" parts, markings of "f" and "p," the fermata over the final double bar of the first movement, and the 21 bars of rest at the beginning of the third have led some musicologists to argue that Bach intended the use of "echo flutes" distinct from normal recorders in the second movement in particular. The breaking of beaming could be an indication of changes in register or tonal quality, the rests introduced to allow the players time to change instruments, and the markings of "f" and "p" further indicative of register or sound changes. Marissen has demonstrated that the "f" and "p" markings probably indicated tutti and solo sections rather than loud and soft ones.
A number of instruments other than normal recorders have been suggested for the "fiauto d'echo". One of the earliest proposed alternatives, by Thurston Dart, was the use of double flageolets, a suggestion since revealed to be founded on unsteady musicological grounds. Dart did, however, bring to light numerous newspaper references to Paisible's performance on an "echo flute" between 1713 and 1718. Another contemporary reference to the "echo flute" is in Etienne Loulié's "Elements ou principes de musique" (Amsterdam, 1696): "Les sons de deux flutes d'echo sont differents, parce que l'un est fort, & que l'autre est foible" (The sounds of two echo flutes are different, because one is strong and the other is weak). Loulié is unclear on why one would need two echo flutes to play strongly and weakly, and on why it is that echo flutes differ. Perhaps the echo flute was constructed in two halves, one playing strongly and the other weakly; on this we can only speculate.
Surviving instruments which are candidates for echo flutes include an instrument in Leipzig which consists of two recorders of different tonal characteristics joined together at the head and footjoints by brass flanges. There is also evidence of double recorders tuned in thirds, but these are not candidates for the "fiauto" parts in BWV 1049.
Vivaldi wrote three concertos for the "flautino", possibly for performance by students at the Ospedale della Pietà in Venice, where he taught and composed in the early 18th century. They feature virtuosic solo writing, and along with his concerto RV 441 and trio sonata RV 86 are his most virtuosic recorder works. Each survives in a single hastily written manuscript copy, titled "Con.to per Flautino" (Concerto for little flute), with the additional note "Gl'istrom.ti trasportati alla 4a" (The instruments transpose by a fourth) in RV 443 and "Gl'istrom.ti alla 4ta Bassa" (The instruments lower by a fourth) in RV 445. The three concertos RV 443, 444, and 445 are notated in C major, C major and A minor respectively. Also of note is the occasional use of notes outside the normal two-octave compass of the recorder: the range of the solo sections is two octaves, from notated F4 to notated F6; however, there is a single notated C4 in the first movement of RV 444, a notated E4 in a tutti section in the first movement of RV 443, and a low E4 in multiple tutti sections of RV 445.
A number of possible "flautini" have been proposed as the instrument intended for the performance of these concertos. The first suggestion was the one-keyed piccolo, or another small transverse flute; however, such instruments had fallen out of use in Venice by the generally accepted time of composition of these concertos in the 1720s, and this opinion is no longer considered well supported. Another suggestion, first proposed by Peter Thalheimer, is the "French" flageolet (see Flageolets below) in G5, which was notated in D4, appearing a fourth lower, possibly explaining the note in the margins of RV 443 and RV 445 ("Gl'istromti transportati alla 4a") and supported by Bismantova (1677, rev. 1694) and Bonanni (1722), which equate "flautino" with the flageolet. However, this suggestion has been opposed because the parts contain notes outside the typical compass of the flageolet, although these may be produced through the covering of the bell, sometimes combined with underblowing, as attested by theorists as early as Cardano (c. 1546) and as late as Bellay (c. 1800).
Two instruments are conventionally accepted today for the performance of these concertos: the sopranino recorder, notated like an alto but sounding an octave higher, and the soprano recorder, following the instruction to transpose the parts down by a fourth. Winfried Michel was the first to argue in favor of the soprano recorder, in 1983, when he proposed to take Vivaldi at his word and transpose the string parts down a fourth, playing the "flautino" part on a soprano recorder in C5 (also "fifth-flute") using the English practice of notating such flutes as transposing instruments using the fingerings of an alto recorder. Michel notes that this transposition allows for the use of the violins' and viola's lowest strings (in sections where they provide the accompaniment without bass) and the lowest two notes of the 'cello. He attributes the presence of notes not in the recorder's normal compass to Vivaldi's haste, noting that these notes do not appear in the solo sections. He has edited editions of RV 443 and RV 445 for soprano recorder, in G major and E minor respectively. Federico Maria Sardelli concurs with Michel in supposing that the margin note was intended to allow the performance of the concertos on the soprano recorder on a specific occasion; however, he concludes that they were probably written for the sopranino recorder in F5, citing the disuse of small transverse flutes in Italy by Vivaldi's time, the paucity of flageolets in Italy, the range of the parts, and the uses of the flautino in vocal arias.
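The transposition arithmetic behind Michel's proposal can be checked with simple pitch-class arithmetic: lowering by a perfect fourth is a shift of five semitones, which takes notated C major (RV 443) to G major and notated A minor (RV 445) to E minor, matching the keys of his editions. A minimal sketch, assuming twelve-tone equal spacing of pitch classes (the note table and function name are illustrative, not from any source):

```python
# Illustrative sketch: transposing a pitch class down a perfect fourth
# (5 semitones), as in Michel's editions of the Vivaldi flautino concertos.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose_down_fourth(note: str) -> str:
    """Return the pitch class a perfect fourth (5 semitones) below."""
    i = NOTES.index(note)
    return NOTES[(i - 5) % 12]

print(transpose_down_fourth("C"))  # RV 443: C major becomes G major -> "G"
print(transpose_down_fourth("A"))  # RV 445: A minor becomes E minor -> "E"
```

The same function confirms that a notated F4, the lowest note of the solo sections, would sound as C when the parts are taken down a fourth, within reach of the accompanying strings' lower registers.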
The recorder was little used in art music of the Classical and Romantic periods. Researchers have long debated why this change occurred, and to what extent the recorder remained in use in the late 18th century, and later the 19th century. A significant question in this debate is which, if any, duct flutes of this period are recorders or successors to recorders.
The best-known recorder work from the latter half of the 18th century is probably a trio sonata by C. P. E. Bach, Wq. 163, composed in 1755 as an arrangement of a trio sonata for two violins and continuo, scored for the unusual ensemble of viola, bass recorder and continuo. This work is also notable for being perhaps the only significant surviving historical solo work for bass recorder. Also of note are the works of Johann Christoph Schultze (1733–1813), who wrote two concertos for the instrument, one in G major and another in B major, written around 1740. The last occurrences of the recorder in art music are apparently by Carl Maria von Weber in "Peter Schmoll und seine Nachbarn" (1801) and "Kleiner Tusch" (1806). Hector Berlioz may have intended "La fuite en Egypte" from "L'enfance du Christ" (1853) for the instrument. Donizetti owned three recorders.
Many reasons have been proposed in support of the conventional view that the recorder declined. The first significant explanation for the decline was offered by Waitzman (1967), who proposed six reasons:
In the Baroque, the majority of professional recorder players were primarily oboists or string players. For this reason, the number of professional exponents of the recorder was smaller than that of other woodwinds.
Others attribute the decline of the recorder in part to the flute innovators of the time, such as Grenser, and Tromlitz, who extended the transverse flute's range and evened out its tonal consistency through the addition of keys, or to the supposedly greater dynamic range and volume of the flute. Similar developments occurring in many other orchestral instruments to make them louder, increase their range, and increase their tonal consistency did not simultaneously occur in the case of the recorder.
A complementary view recently advanced by Nikolaj Tarasov is that the recorder, rather than totally disappearing, evolved in similar ways to other wind instruments through the addition of keys and other devices, and remained in use throughout the 19th century, with its direct descendant's popularity overlapping with the late 19th and early 20th century recorder revival. Support for this view rests on the organological classification of some 19th century duct flutes as recorders. For more on this question, see "Other duct flutes".
Duct flutes remained popular even as the recorder waned in the 18th century. As in the instrument's earliest history, questions of the instrument's quiddity are at the forefront of modern debate. The modification and renaming of recorders in the 18th century in order to prolong their use, and the uncertainty of the extent of the recorder's use in the late 18th and early 19th centuries, have fueled these debates. Some recent researchers contend that some 19th-century duct flutes are actually recorders. This article briefly discusses the duct flutes presented as successors to the recorder: the English flageolet and the csakan, which were popular among amateurs in the second half of the 18th century and the whole of the 19th.
The word "flageolet" has been used since the 16th century to refer to small duct flutes, and the instrument is sometimes designated using general terms such as "flautino" and "flauto piccolo", complicating identification of its earliest form. It was first described by Mersenne in "Harmonie universelle" (1636) as having four finger holes on the front and two thumb holes on the back, with lowest note C6 and a compass of two octaves. As on the recorder, the upper thumb hole is used as an octaving vent. Flageolets were generally small flutes; however, their lowest note varies. They were initially popular in France, and it is from there that the flageolet first arrived in England in the seventeenth century, becoming a popular amateur instrument, as the recorder later did. Indeed, when the recorder was introduced to England, it was presented as an easy instrument for those who already played the flageolet, and the earliest English recorder tutors are notated in the flageolet tablature of the time, called "dot-way". Notably, the diarist and naval administrator Samuel Pepys (1633–1703) and his wife were both amateur players of the flageolet, and Pepys was later an amateur recorder player.
Starting in the early 1800s, a number of innovations to the flageolet were introduced, including the addition of keys to extend its range and allow it to more easily play accidentals. They also included novel solutions to the problem of condensation: most commonly, a sea sponge was placed inside the wind chamber (the conical chamber above the windway) to soak up moisture, while other solutions, such as the insertion of a thin wooden wedge into the windway, the drilling of small holes in the side of the block to drain condensation, and a complex system for draining condensation through a hollowed-out block, were also developed. Around 1800 in England, the recorder ("English flute", see Name) came to be called an "English flageolet", appropriating the name of the more fashionable instrument. From at least this time to the present, the flageolet in its first form has been called the French flageolet to differentiate it from the so-called English flageolet.
From around 1803, when the London instrument maker William Bainbridge obtained a number of patents for improvements to the English flageolet, instruments were often referred to as "improved" or "patent" flageolets with little reference to how they actually differed from their predecessors. In this period, the instrument had six finger holes and a single thumb hole, and had as many as six keys. Tarasov reports that the English flageolets of the late 18th century had six finger holes and no thumb hole, and that later instruments regained the thumb hole and a seventh finger hole (see above, right). The English flageolet never reached the level of popularity that the "French" flageolet enjoyed in the 19th century, possibly because the latter instrument was louder. Both remained popular until the beginning of the 20th century.
A significant amount of music was written for the flageolet in the 19th century, such as the etudes of Narcisse Bousquet, although much of it was directed at amateurs.
English flageolets that may qualify as recorders are of two types: early instruments, called "English flageolets", which were actually recorders, and 19th-century instruments with seven finger holes and a thumb hole. These instruments are not typically regarded as recorders; however, Tarasov has argued for their inclusion in the family.
The csakan (from Hung. "csákány", "pickaxe"), also known by the recorder's old French name "flute douce", was a duct flute in the shape of a walking stick or oboe, popular in Vienna from about 1800 to the 1840s. The csakan was played using the fingerings of a recorder in C, and was typically pitched in A or G and played as a transposing instrument. The first documented appearance of the csakan was at a concert in Budapest on February 18, 1807, in a performance by its billed inventor, Anton Heberle (1806–16). Tarasov has contested Heberle's status as the inventor of the instrument, arguing that the csakan grew out of a Hungarian war hammer of the same name, which was converted into a recorder, perhaps for playing military music. Around 1800, it was highly fashionable to make walking sticks with additional functions (e.g., umbrellas, swords, flutes, oboes, clarinets, horns), although the csakan was the most popular of these, and the only one that became a musical instrument in its own right.
The earliest instruments were shaped like a walking stick with a mouthpiece in the handle and had no keys, although they could eventually have up to thirteen keys, along with a tuning slide and a device for narrowing the thumb hole. In the 1820s a csakan "in the pleasing shape of an oboe" was introduced in a "simple" form with a single key and a "complex" form with up to twelve keys like those found on contemporaneous flutes. Well known makers of the csakan included Johann Ziegler and Stephan Koch in Vienna, and Franz Schöllnast in Pressburg. According to accounts left by Schöllnast, the csakan was primarily an amateur instrument, purchased by those who wanted something simple and inexpensive, however there were also accomplished professionals, such as Viennese court oboist Ernst Krähmer (1795–1837) who toured as far afield as Russia, playing the csakan with acclaimed virtuosity.
Around 400 works for the csakan were published in the first half of the 19th century, mainly for csakan solo, csakan duet or csakan with guitar or piano. The csakan's repertoire has not yet been fully explored. Notable composers for the instrument include Heberle and Krähmer, and Tarasov notes that piano works by Beethoven were arranged for csakan and guitar (Beethoven is reported to have owned a walking-stick csakan). Modern recorder makers such as Bernhard Mollenhauer and Martin Wenner have made csakan copies.
Similarities in fingering and design make the csakan at least a close relative of the recorder. Accounts of Krähmer's playing, which report his "diminishing and swelling the notes, up to an almost unbelievable loudness", imply a developed technique using shading and alternate fingerings, far beyond a purely amateur culture of house music. Additionally, Tarasov reports that some recorders by Baroque makers were modified around 1800 through the addition of keys, including a J. C. Denner (1655–1707) basset recorder in Budapest and an alto by Nikolaus Staub (1664–1734) with added G keys, like the D key on a baroque two-key flute. Another modification is the narrowing of the thumb hole, by way of an ivory plug on the J. C. Denner basset and an alto by Benedikt Gahn (1674–1711), to allow it to serve purely as an octaving vent, as found on many flageolets and csakans. These changes may be precursors of those found on csakans and flageolets, and suggest a continuous development of the Baroque recorder into its 19th-century relatives.
The concept of a recorder "revival" must be considered in the context of the decline of the recorder in the 18th and 19th centuries. The craft of recorder making was continued in some form by a number of families, such as the Oeggle family, which traces its lineage to the Walch family of recorder makers, and the Schlosser family of Zwota. Heinrich Oskar Schlosser (1875–1947) made instruments sold by the firm of Moeck in Celle and helped to design their Tuju series of recorders. The firm Mollenhauer, currently headed by Bernhard Mollenhauer, can trace its origins to historical instrument makers.
The recorder, if it did persist through the 19th century, did so in a manner quite unlike the success it enjoyed in previous centuries, or that it would enjoy in the century to come. Among the earliest ensembles to begin using recorders in the 20th century was the Bogenhauser Künstlerkapelle (Bogenhausen Artists' Band), which from 1890 to 1939 used antique recorders and other instruments to play music of all ages, including arrangements of classical and romantic music. Nonetheless, the recorder was considered primarily an instrument of historical interest.
The eventual success of the recorder in the modern era is often attributed to Arnold Dolmetsch. While he was responsible for broadening interest in the United Kingdom beyond the small group of early music specialists, Dolmetsch was not solely responsible for the recorder's broader revival. On the continent his efforts were preceded by those of musicians at the Brussels Conservatoire (where Dolmetsch received his training), and by the German Bogenhauser Künstlerkapelle. Also in Germany, the work of Willibald Gurlitt, Werner Danckerts and Gustav Scheck proceeded quite independently of the Dolmetsches.
Carl Dolmetsch, the son of Arnold Dolmetsch, became one of the first virtuoso recorder players in the 1920s; but more importantly he began to commission recorder works from leading composers of his day, especially for performance at the Haslemere festival which his father ran. Initially as a result of this, and later as a result of the development of a Dutch school of recorder playing led by Kees Otten, the recorder was introduced to serious musicians as a virtuoso solo instrument both in Britain and in northern Europe.
Among the influential virtuosos who figure in the revival of the recorder as a serious concert instrument in the latter part of the 20th century are Ferdinand Conrad, Kees Otten, Frans Brüggen, Roger Cotte, Hans-Martin Linde, Bernard Krainis, and David Munrow. Brüggen recorded most of the landmarks of the historical repertoire and commissioned a substantial number of new works for the recorder. Munrow's 1975 double album "The Art of the Recorder" remains an important anthology of recorder music through the ages.
Among late 20th-century and early 21st-century recorder ensembles, the trio Sour Cream (led by Frans Brüggen), Flautando Köln, the Flanders Recorder Quartet, Amsterdam Loeki Stardust Quartet and Quartet New Generation have programmed remarkable mixtures of historical and contemporary repertoire, as have soloists such as Piers Adams, Dan Laurin, Dorothee Oberlinger, Michala Petri and Maurice Steger.
In 2012, Charlotte Barbour-Condini became the first recorder player to reach the final of the biennial BBC Young Musician of the Year competition. Recorder player Sophie Westbrooke was a finalist in the 2014 competition.
The first recorders to be played in the modern period were antique instruments from previous periods. Anecdotally, Arnold Dolmetsch was motivated to make his own recorders after losing a bag containing his antique instruments. Recorders made in the early 20th century were imitative of baroque models in their exterior form, but differed significantly in their structure. Dolmetsch introduced English fingering, the now standard fingering for "baroque" model instruments, and standardized the doubled 6th and 7th holes found on a handful of antique instruments by the English makers Stanesby and Bressan. Dolmetsch instruments generally had a large rectangular windway, unlike the curved windways of all historical instruments, and played at modern pitch.
Nearly twice as many pieces have been written for the recorder since its modern revival as were written in all previous epochs. Many of these were composed by avant-garde composers of the latter half of the 20th century who used the recorder for the variety of extended techniques which are possible using its open holes and its sensitivity to articulation.
Modern composers of great stature have written for the recorder, including Paul Hindemith, Luciano Berio, Jürg Baur, Josef Tal, John Tavener, Michael Tippett, Benjamin Britten, Leonard Bernstein, Gordon Jacob, Malcolm Arnold, Steven Stucky and Edmund Rubbra.
Owing to its ubiquity as a teaching instrument and the relative ease of sound production, the recorder has occasionally been used in popular music by groups such as The Beatles; the Rolling Stones (see, for example, "Ruby Tuesday"); Yes, for example, in the song "I've Seen All Good People"; Jefferson Airplane with Grace Slick on "Surrealistic Pillow"; Led Zeppelin ("Stairway to Heaven"); Jimi Hendrix; Siouxsie and the Banshees; Judy Dyble of Fairport Convention; Dido (e.g. "Grafton Street" on "Safe Trip Home"); and Mannheim Steamroller.
The trade of recorder making was traditionally transmitted via apprenticeship. Notable historical makers include the Rafi, Schnitzer and Bassano families in the renaissance; Stanesby (Jr. and Sr.), J.C. and J. Denner, Hotteterre, Bressan, Haka, Heitz, Rippert, Rottenburgh, Steenbergen and Terton. Most of these makers also built other wind instruments such as oboes and transverse flutes. Notably, Jacob Denner is credited with the development of the clarinet from the chalumeau.
Recorder making declined with the instrument's wane in the late 18th century, essentially severing the craft's transmission to the modern age. With few exceptions, the duct flutes manufactured in the late 18th and 19th centuries were intended for amateur or educational use, and were not constructed to the high standard of earlier epochs.
Arnold Dolmetsch, the first to achieve commercial production in the 20th century, began to build recorders in 1919. While these early recorders played at a low pitch like that of the available originals, he did not strive for exactitude in reproduction, and by the 1930s the Dolmetsch family firm, then under the direction of Arnold's son Carl Dolmetsch, was mass-producing recorders at modern pitch with wide, straight windways, and began to produce bakelite recorders shortly after the Second World War. Nonetheless, the Dolmetsch models were innovative for their time and proved influential, particularly in standardizing the English fingering system now standard for modern baroque-style instruments and doubled 6th and 7th holes, which are rare on antique instruments.
In Germany, Peter Harlan began to manufacture recorders in the 1920s, primarily for educational use in the youth movement. Following Harlan's success, numerous makers such as Adler and Mollenhauer began commercial production of recorders, fueling an explosion in the instrument's popularity in Germany. These recorders had little in common with antiques, featuring large straight windways, anachronistically pitched consorts, modified fingering systems and other innovations.
In the latter half of the 20th century, historically informed performance practice was on the rise and recorder makers increasingly sought to imitate the sound and character of antiques. The German-American maker Friedrich von Huene was among the first to research recorders held in European collections and produce instruments intended to reproduce the qualities of the antiques. Von Huene and his Australian colleague Frederick Morgan sought to connect the tradition of the historical wind-makers to the modern day with the understanding that doing so creates the best instruments, and those most suited to ancient music.
Virtually all recorders manufactured today claim ascendancy to an antique model and most makers active today can trace their trade directly to one of these pioneering makers.
Today, makers maintaining individual workshops include Ammann, Blezinger, Bolton, Boudreau, Breukink, Brown, Coomber, Cranmore, de Paolis, Ehlert, Grinter (deceased), Marvin (deceased), Meyer, Musch, Netsch, Prescott, Rohmer, Takeyama, von Huene, and Wenner. The French maker Philippe Bolton created an electroacoustic recorder and is among the last to offer mounted bell-keys and double bell-keys for both tenor and alto recorders; these bell-keys readily extend the range of the instrument to more than three octaves. Carl Dolmetsch invented the bell-key system in 1957 and first used it publicly in 1958.
In the mid-20th century, German composer and music educator Carl Orff popularized the recorder for use in schools as part of Orff-Schulwerk programs in German schools. Orff's five-volume opus of educational music "Music for Children" contains many pieces for recorders, usually scored for other instruments as well.
Manufacturers have made recorders out of bakelite and other more modern plastics; they are thus easy to produce, hence inexpensive. Because of this, recorders are popular in schools, as they are one of the cheapest instruments to buy in bulk. They are also relatively easy to play at a basic level because sound production needs only breath, and pitch is basically determined by fingering. It is, however, incorrect to assume that mastery is similarly easy—like any other instrument, the recorder requires study to play well and in tune, and significant study to play at an advanced or professional level.
The recorder is a very social instrument. Many recorder players participate in large groups or in one-to-a-part chamber groups, and there is a wide variety of music for such groupings including many modern works. Groups of different sized instruments help to compensate for the limited note range of the individual instruments. Four part arrangements with a soprano, alto, tenor and bass part played on the corresponding recorders are common, although more complex arrangements with multiple parts for each instrument and parts for lower and higher instruments may also be regularly encountered. | https://en.wikipedia.org/wiki?curid=26244 |
Received Pronunciation
Received Pronunciation (RP) is the accent traditionally regarded as the standard for British English. For over a century there has been argument over such issues as the definition of RP, whether it is geographically neutral, how many speakers there are, whether sub-varieties exist, how appropriate a choice it is as a standard and how the accent has changed over time. RP is an accent, so the study of RP is concerned only with matters of pronunciation: other areas relevant to the study of language standards such as vocabulary, grammar and style are not considered.
The introduction of the term "Received Pronunciation" is usually credited to the British phonetician Daniel Jones. In the first edition of the "English Pronouncing Dictionary" (1917), he named the accent "Public School Pronunciation" ("public" being what Americans would term "private"), but for the second edition in 1926, he wrote, "In what follows I call it Received Pronunciation, for want of a better term." However, the term had actually been used much earlier by P. S. Du Ponceau in 1818. A similar term, "received standard," was coined by Henry C. K. Wyld in 1927. The early phonetician Alexander John Ellis used both terms interchangeably but with a much broader definition than Daniel Jones, having said "there is no such thing as a uniform educated pron. of English, and rp. and rs. is a variable quantity differing from individual to individual, although all its varieties are 'received', understood and mainly unnoticed".
According to "Fowler's Modern English Usage" (1965), the correct term is "'the Received Pronunciation'. The word 'received' conveys its original meaning of 'accepted' or 'approved', as in 'received wisdom'."
RP is often believed to be based on the accents of southern England, but it actually has most in common with the Early Modern English dialects of the East Midlands. This was the most populated and most prosperous area of England during the 14th and 15th centuries. By the end of the 15th century, "Standard English" was established in the City of London.
Some linguists have used the term "RP" while expressing reservations about its suitability. The Cambridge-published "English Pronouncing Dictionary" (aimed at those learning English as a foreign language) uses the phrase "BBC Pronunciation" on the basis that the name "Received Pronunciation" is "archaic" and that BBC News presenters no longer suggest high social class and privilege to their listeners. Other writers have also used the name "BBC Pronunciation".
The phonetician Jack Windsor Lewis frequently criticises the name "Received Pronunciation" in his blog: he has called it "invidious", a "ridiculously archaic, parochial and question-begging term" and noted that American scholars find the term "quite curious". He used the term "General British" (to parallel "General American") in his 1970s publication of "A Concise Pronouncing Dictionary of American and British English" and in subsequent publications. The name "General British" is adopted in the latest revision of Gimson's "Pronunciation of English". Beverley Collins and Inger Mees use the term "Non-Regional Pronunciation" for what is often otherwise called RP, and reserve the term "Received Pronunciation" for the "upper-class speech of the twentieth century". Received Pronunciation has sometimes been called "Oxford English", as it used to be the accent of most members of the University of Oxford. The "Handbook of the International Phonetic Association" uses the name "Standard Southern British". Page 4 reads:
In her book "Kipling's English History" (1974) Marghanita Laski refers to this accent as "gentry". "What the Producer and I tried to do was to have each poem spoken in the dialect that was, so far as we could tell, ringing in Kipling's ears when he wrote it. Sometimes the dialect is most appropriately, Gentry. More often, it isn't."
Faced with the difficulty of defining a single standard of RP, some researchers have tried to distinguish between different sub-varieties:
Traditionally, Received Pronunciation has been associated with high social class. It was the "everyday speech in the families of Southern English persons whose men-folk [had] been educated at the great public boarding-schools" and which conveyed no information about that speaker's region of origin before attending the school. An 1891 teacher’s handbook stated “It is the business of educated people to speak so that no-one may be able to tell in what county their childhood was passed”. Nevertheless, in the 19th century some British prime ministers still spoke with some regional features, such as William Ewart Gladstone.
Opinions differ over the proportion of British speakers who have RP as their accent. Trudgill estimated in 1974 that 3% of people in Britain were RP speakers, but this rough estimate has been questioned by J. Windsor Lewis. Upton notes higher estimates of 5% (Romaine, 2000) and 10% (Wells, 1982) but refers to all these as "guesstimates" that are not based on robust research. A recent book with the title "English after RP" discusses "the rise and fall of RP" and describes "phonetic developments between RP and contemporary Standard Southern British (SSB)".
The claim that RP is non-regional is disputed, since it is most commonly found in London and the south east of England. It is defined in the Concise Oxford English Dictionary as "the standard accent of English as spoken in the South of England", and alternative names such as “Standard Southern British” have been used.
Despite RP’s historic high social prestige in Britain, being seen as the accent of those with power, money, and influence, it may be perceived negatively by some as being associated with undeserved privilege and as a symbol of the south-east's political power in Britain. Based on a 1997 survey, Jane Stuart-Smith wrote, "RP has little status in Glasgow, and is regarded with hostility in some quarters". A 2007 survey found that residents of Scotland and Northern Ireland tend to dislike RP. It is shunned by some with left-wing political views, who may be proud of having an accent more typical of the working class.
Since the Second World War, and increasingly since the 1960s, a wider acceptance of regional English varieties has taken hold in education and public life.
In the early days of British broadcasting, RP was almost universally used by speakers of English origin. In 1926 the BBC established an Advisory Committee on Spoken English with distinguished experts, including Daniel Jones, to advise on correct pronunciation and other aspects of broadcast language. The committee proved unsuccessful and was dissolved during the Second World War.
An interesting departure from the use of RP was the BBC's use of Yorkshire-born Wilfred Pickles as a newsreader during the Second World War (to distinguish BBC broadcasts from German propaganda).
In recent years RP has played a much smaller role in broadcast speech. In fact, as Catherine Sangster points out, “there is not (and never was) an official BBC pronunciation standard”. RP is most often heard in the speech of announcers and newsreaders on BBC Radio 3 and Radio 4, and some TV channels, but non-RP accents are now more widely accepted.
It has been claimed that digital assistants such as Siri, Amazon Alexa or Google Assistant speak with an RP accent in their English versions.
Most English dictionaries published in Britain (including the Oxford English Dictionary) now give phonetically transcribed RP pronunciations for all words. Pronunciation dictionaries represent a special class of dictionary giving a wide range of possible pronunciations: British pronunciation dictionaries are all based on RP, though not necessarily using that name. Daniel Jones transcribed RP pronunciations of words and names in the English Pronouncing Dictionary. Cambridge University Press continues to publish this title, as of 1997 edited by Peter Roach. Two other pronunciation dictionaries are in common use: the "Longman Pronunciation Dictionary", compiled by John C. Wells (using the name "Received Pronunciation"), and Clive Upton's "Oxford Dictionary of Pronunciation for Current English", (now republished as "The Routledge Dictionary of Pronunciation for Current English").
Pronunciation forms an essential component of language learning and teaching; a "model accent" is necessary for learners to aim at, and to act as a basis for description in textbooks and classroom materials. RP has been the traditional choice for teachers and learners of British English. However, the choice of pronunciation model is difficult, and the adoption of RP is in many ways problematical.
Nasals and liquids (/m/, /n/, /ŋ/, /r/, /l/) may be syllabic in unstressed syllables. The consonant /r/ in "row", "arrow" in RP is generally a postalveolar approximant, which would normally be expressed with the sign ⟨ɹ⟩ in the International Phonetic Alphabet, but the sign ⟨r⟩ is nonetheless traditionally used for RP in most of the literature on the topic.
Voiceless plosives (/p/, /t/, /k/, /tʃ/) are aspirated at the beginning of a syllable, unless a completely unstressed vowel follows. (For example, the /p/ is aspirated in "impasse", with primary stress on "-passe", but not in "compass", where "-pass" has no stress.) Aspiration does not occur when /s/ precedes in the same syllable, as in "spot" or "stop". When a sonorant /l/, /r/, /w/, or /j/ follows, this aspiration is indicated by partial devoicing of the sonorant. /r/ is a fricative when devoiced.
Syllable-final /p/, /t/, /k/, and /tʃ/ may be either preceded by a glottal stop (glottal reinforcement) or, in the case of /t/, fully replaced by a glottal stop, especially before a syllabic nasal ("bitten" [ˈbɪʔn̩]). The glottal stop may be realised as creaky voice; thus, an alternative phonetic transcription of "attempt" [əˈtʰemʔt] could be [əˈtʰemm̰t].
As in other varieties of English, voiced plosives (/b/, /d/, /ɡ/, /dʒ/) are partly or even fully devoiced at utterance boundaries or adjacent to voiceless consonants. The voicing distinction between voiced and voiceless sounds is reinforced by a number of other differences, with the result that the two classes of consonants can clearly be distinguished even in the presence of devoicing of voiced sounds:
As a result, some authors prefer to use the terms "fortis" and "lenis" in place of "voiceless" and "voiced". However, the latter are traditional and in more frequent usage.
The voiced dental fricative (/ð/) is more often a weak dental plosive; the sequence /nð/ is often realised as [n̪n̪] (a long dental nasal). /l/ has a velarised allophone ([ɫ]) in the syllable rhyme. /h/ becomes voiced ([ɦ]) between voiced sounds.
Examples of short vowels: /ɪ/ in "kit", "mirror" and "rabbit", /ʊ/ in "foot" and "cook", /e/ in "dress" and "merry", /ʌ/ in "strut" and "curry", /æ/ in "trap" and "marry", /ɒ/ in "lot" and "orange", /ə/ in "ago" and "sofa".
Examples of long vowels: /iː/ in "fleece", /uː/ in "goose", /ɛː/ in "bear", /ɜː/ in "nurse" and "furry", /ɔː/ in "north", "force" and "thought", /ɑː/ in "father" and "start".
The long mid front vowel is transcribed with the traditional symbol ⟨ɛə⟩ in this article. The predominant realisation in contemporary RP is monophthongal.
RP's long high vowels /iː/ and /uː/ are slightly diphthongised, and are often narrowly transcribed in phonetic literature as diphthongs [ɪi] and [ʊu].
The terms "long" and "short" are relative to each other when applied to the vowel phonemes of RP. Vowels may be phonologically long or short (i.e. belong to the long or the short group of vowel phonemes) but their length is influenced by their context: in particular, they are shortened if a voiceless (fortis) consonant follows in the syllable, so that, for example, the vowel in 'bat' is shorter than the vowel in 'bad' . The process is known as "pre-fortis clipping". Thus phonologically short vowels in one context can be phonetically "longer" than phonologically long vowels in another context. For example, the phonologically long vowel in 'reach' (which ends with a voiceless consonant) may be "shorter" than the phonologically short vowel in the word 'ridge' (which ends with a voiced consonant). Wiik, cited in Cruttenden (2014), published durations of English vowels with a mean value of 17.2 csec. for short vowels before voiced consonants but a mean value of 16.5 csec for long vowels preceding voiceless consonants.
In natural speech, the plosives /t/ and /d/ often have no audible release utterance-finally, and voiced consonants are partly or completely devoiced (as in [b̥æd̥]); thus the perceptual distinction between pairs of words such as 'bad' and 'bat', or 'seed' and 'seat' rests mostly on vowel length (though the presence or absence of glottal reinforcement provides an additional cue).
In addition to such length distinctions, unstressed vowels are both shorter and more centralised than stressed ones. In unstressed syllables occurring before vowels and in final position, contrasts between long and short high vowels are neutralised and short [i] and [u] occur (e.g. "happy" [ˈhæpi], "throughout" [θruˈaʊt]). The neutralisation is common throughout many English dialects, though the phonetic realisation of e.g. [i] rather than [ɪ] (a phenomenon called "happy"-tensing) is not as universal.
Unstressed vowels vary in quality:
The centring diphthongs are gradually being eliminated in RP. The vowel /ɔə/ (as in "door", "boar") had largely merged with /ɔː/ by the Second World War, and the vowel /ʊə/ (as in "poor", "tour") has more recently merged with /ɔː/ as well among most speakers, although the sound /ʊə/ is still found in conservative speakers. See poor–pour merger. The remaining centring glide /ɪə/ is increasingly pronounced as a monophthong [ɪː], although without merging with any existing vowels.
The diphthong /əʊ/ is pronounced by some RP speakers in a noticeably different way when it occurs before /l/, if that consonant is syllable-final and not followed by a vowel (the context in which /l/ is pronounced as a "dark l"). The realization of /əʊ/ in this case begins with a more back, rounded and sometimes more open vowel quality; it may be transcribed as [ɔʊ] or [ɒʊ]. It is likely that the backness of the diphthong onset is the result of allophonic variation caused by the raising of the back of the tongue for the /l/. If the speaker has "l-vocalization" the /l/ is realized as a back rounded vowel, which again is likely to cause backing and rounding in a preceding vowel as coarticulation effects. This phenomenon has been discussed in several blogs by John C. Wells. In the recording included in this article the phrase 'fold his cloak' contains examples of the /əʊ/ diphthong in the two different contexts. The onset of the pre-/l/ diphthong in 'fold' is slightly more back and rounded than that in 'cloak', though the allophonic transcription does not at present indicate this.
RP also possesses the triphthongs as in "tire", as in "tower", as in "lower", as in "layer" and as in "loyal". There are different possible realisations of these items: in slow, careful speech they may be pronounced as a two-syllable triphthong with three distinct vowel qualities in succession, or as a monosyllabic triphthong. In more casual speech the middle vowel may be considerably reduced, by a process known as smoothing, and in an extreme form of this process the triphthong may even be reduced to a single vowel, though this is rare, and almost never found in the case of . In such a case the difference between , , and in "tower", "tire", and "tar" may be neutralised with all three units realised as or . This type of smoothing is known as the "tower"–"tire", "tower"–"tar" and "tire"–"tar" mergers.
There are differing opinions as to whether /æ/ in the BATH lexical set can be considered RP. The pronunciations with /ɑː/ are invariably accepted as RP. The "English Pronouncing Dictionary" does not admit /æ/ in BATH words and the "Longman Pronunciation Dictionary" lists them with a § marker of non-RP status. John Wells wrote in a blog entry on 16 March 2012 that when growing up in the north of England he used /ɑː/ in "bath" and "glass", and considers this the only acceptable phoneme in RP. Others have argued that /æ/ is too categorical in the north of England to be excluded. Clive Upton believes that /æ/ in these words must be considered within RP and has called the opposing view "south-centric". Upton's "Oxford Dictionary of Pronunciation for Current English" gives both variants for BATH words. A. F. Gupta's survey of mostly middle-class students found that /æ/ was used by almost everyone who was from clearly north of the isogloss for BATH words. She wrote, "There is no justification for the claims by Wells and Mugglestone that this is a sociolinguistic variable in the north, though it is a sociolinguistic variable on the areas on the border [the isogloss between north and south]". In a study of speech in West Yorkshire, K. M. Petyt wrote that "the amount of /ɑː/ usage is too low to correlate meaningfully with the usual factors", having found only two speakers (both having attended boarding schools in the south) who consistently used /ɑː/.
Jack Windsor Lewis has noted that the Oxford Dictionary's position has changed several times on whether to include short /æ/ within its prescribed pronunciation. The "BBC Pronouncing Dictionary of British Names" uses only /ɑː/, but its author, Graham Pointon, has stated on his blog that he finds both variants to be acceptable in place names.
Some research has concluded that many people in the North of England have a dislike of the /ɑː/ vowel in BATH words. A. F. Gupta wrote, "Many of the northerners were noticeably hostile to /ɑː/, describing it as 'comical', 'snobbish', 'pompous' or even 'for morons'." On the subject, K. M. Petyt wrote that several respondents "positively said that they did not prefer the long-vowel form or that they really detested it or even that it was incorrect". Mark Newbrook has assigned this phenomenon the name "conscious rejection", and has cited the vowel as "the main instance of conscious rejection of RP" in his research in West Wirral.
John Wells has argued that, as educated British speakers often attempt to pronounce French names in a French way, there is a case for including /ɒ̃/ (as in "bon"), and /æ̃/ and /ɜ̃ː/ (as in "vingt-et-un"), as marginal members of the RP vowel system. He also argues against including other French vowels on the grounds that very few British speakers succeed in distinguishing the vowels in "bon" and "banc", or in "rue" and "roue".
Not all reference sources use the same system of transcription. In particular:
Most of these variants are used in the transcription devised by Clive Upton for the "Shorter Oxford English Dictionary" (1993) and now used in many other Oxford University Press dictionaries.
The linguist Geoff Lindsey has argued that the system of transcription for RP has become outdated and has proposed a new system as a replacement.
Like all accents, RP has changed with time. For example, sound recordings and films from the first half of the 20th century demonstrate that it was usual for speakers of RP to pronounce the /æ/ sound, as in "land", with a vowel close to [ɛ], so that "land" would sound similar to a present-day pronunciation of "lend". RP is sometimes known as the Queen's English, but recordings show that even Queen Elizabeth II changed her pronunciation over the past 50 years, no longer using an [ɛ]-like vowel in words like "land". The change in RP may be observed in the home of "BBC English". The BBC accent of the 1950s is distinctly different from today's: a news report from the 1950s is recognisable as such, and a mock-1950s BBC voice is used for comic effect in programmes wishing to satirise 1950s social attitudes such as the Harry Enfield Show and its "Mr. Cholmondeley-Warner" sketches.
A few illustrative examples of changes in RP during the 20th century and early 21st are given below. A more comprehensive list (using the name 'General British' in place of 'RP') is given in "Gimson's Pronunciation of English".
A number of cases can be identified where changes in the pronunciation of individual words, or small groups of words, have taken place.
The "Journal of the International Phonetic Association" regularly publishes "Illustrations of the IPA" which present an outline of the phonetics of a particular language or accent. It is usual to base the description on a recording of the traditional story of the North Wind and the Sun. There is an IPA illustration of British English (Received Pronunciation).
The speaker (female) is described as having been born in 1953, and educated at Oxford University. To accompany the recording there are three transcriptions: orthographic, phonemic and allophonic.
Phonemic
Allophonic
Orthographic
The North Wind and the Sun were disputing which was the stronger, when a traveller came along wrapped in a warm cloak. They agreed that the one who first succeeded in making the traveller take his cloak off should be considered stronger than the other. Then the North Wind blew as hard as he could, but the more he blew the more closely did the traveller fold his cloak around him, and at last the North Wind gave up the attempt. Then the Sun shone out warmly, and immediately the traveller took off his cloak. And so the North Wind was obliged to confess that the Sun was the stronger of the two.
The following people have been described as RP speakers:
Sources of regular comment on RP
Audio files | https://en.wikipedia.org/wiki?curid=26247 |
Ryan Lackey
Ryan Donald Lackey (born March 17, 1979) is an entrepreneur and computer security professional. He was a co-founder of HavenCo, the world's first data haven. He also speaks at numerous conferences and trade shows, including DEF CON and the RSA Data Security Conference, on various topics in the computer security field, and has appeared on the cover of Wired Magazine and in numerous television, radio, and print articles on HavenCo and Sealand. Lackey operated BlueIraq, a VSAT communications and IT company serving the DoD and domestic markets in Iraq and Afghanistan during the US conflicts.
Lackey was born in West Chester, Pennsylvania and has also lived throughout the US and Europe, Anguilla, Sealand, Dubai, and Iraq. As a teenager, he was briefly involved with the Globewide Network Academy. Lackey attended MIT and majored in Course 18 (mathematics). While a student at MIT (he later dropped out due to financial constraints) Lackey became interested in electronic cash and distributed systems, originally for massively multiplayer online gaming. This interest led to attending several conferences (Financial Cryptography 98, various MIT presentations), participating on mailing lists such as "cypherpunks" and "dbs", and eventually implementing patented Chaumian digital cash in an underground library, HINDE, with Ian Goldberg, named after Hinde ten Berge, a Dutch cypherpunk also present at FC98. In part, he contributed to the cypherpunks movement as one of the longest Anonymous remailer operators.
In 1999 Lackey lived in the San Francisco Bay Area after a period in Anguilla before moving to the unrecognized state of Sealand off the coast of the United Kingdom and establishing HavenCo. In December 2002, he left HavenCo following a dispute with other company directors and the Sealand "Royal Family".
Eventually, BlueIraq's business model became economically unfeasible due to an escalation in anti-Western violence, primarily in the form of improvised explosive devices, and troop drawdowns. BlueIraq sought venture capital to transform itself into a large general consumer cellular telephone company. However, the 2008 financial crisis and the instability of Iraq and Afghanistan made fundraising impossible.
Lackey returned to the US and located in San Francisco where he worked for a number of start-up companies before applying to Y Combinator. He was accepted into Y Combinator's Summer 2011 round. Lackey founded CryptoSeal, a VPN as a service start-up with a small group of people well known in the computer security community, and secured funding from Ron Conway and a well known venture capital fund. In June 2014, CryptoSeal was acquired by CloudFlare. | https://en.wikipedia.org/wiki?curid=26252 |
Revised Julian calendar
The Revised Julian calendar, also known as the Milanković calendar, or, less formally, new calendar, is a calendar proposed by the Serbian scientist Milutin Milanković in 1923, which effectively discontinued the 340 years of divergence between the naming of dates sanctioned by those Eastern Orthodox churches adopting it and the Gregorian calendar that has come to predominate worldwide. This calendar was intended to replace the ecclesiastical calendar based on the Julian calendar hitherto in use by all of the Eastern Orthodox Church. From 1 March 1600 through 28 February 2800, the Revised Julian calendar aligns its dates with the Gregorian calendar, which was proclaimed in 1582 by Pope Gregory XIII for adoption by the Christian world. The calendar has been adopted by the Orthodox churches of Constantinople, Albania, Alexandria, Antioch, Bulgaria, Cyprus, Greece, and Romania.
The Revised Julian calendar has the same months and month lengths as the Julian calendar, but, in the Revised Julian calendar, years evenly divisible by 100 are not leap years, except that years with remainders of 200 or 600 when divided by 900 remain leap years, e.g. 2000 and 2400 as in the Gregorian Calendar.
A committee composed of members of the Greek government and Greek Orthodox Church was set up to look into the question of calendar reform. It reported in January 1923. It recommended a switch (for civil purposes only) to the "political calendar" devised in 1785 and advocated by Maksim Trpković. Trpković advocated this calendar in preference to the Gregorian because of its greater accuracy and also because the vernal equinox would generally fall on 21 March, the date allocated to it by the church. In the Gregorian, it generally falls on 20 March. As in the Gregorian, end-century years are generally not leap years, but years that give remainder 0 or 400 on division by 900 were to be leap years. The changeover went into effect on 17 February/1 March.
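Trpković's "political calendar" differs from Milanković's later rule only in which centurial years it retains as leap years; a sketch (Python, illustrative only):

```python
def is_leap_political(year: int) -> bool:
    """Leap year under Trpković's 1785 'political calendar' rule:
    centurial years are leap only if year mod 900 is 0 or 400."""
    if year % 4 != 0:
        return False
    if year % 100 != 0:
        return True
    return year % 900 in (0, 400)

# As under Milanković's rule, exactly two of the nine centurial years
# in each 900-year cycle stay leap (here e.g. 1800 and 2200), so both
# rules yield the same mean year length of 365 + 218/900 days.
```

The two rules shift only *which* centuries carry the extra day, which is what moves the usual date of the vernal equinox between 20 and 21 March.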
After the promulgation of the royal decree, the Ecumenical Patriarch, Patriarch Meletius IV of Constantinople, issued an encyclical on 3 February recommending the calendar's adoption by Orthodox churches. The matter came up for discussion at a synod convened in Constantinople, which deliberated in May and June. Subsequently, it was adopted by several of the autocephalous Orthodox churches. The synod was chaired by the controversial patriarch, and representatives were present from the churches of Cyprus, Greece, Romania and Serbia. There were no representatives of the other members of the original Orthodox Pentarchy (the Patriarchates of Jerusalem, Antioch, and Alexandria) or from the largest Orthodox church, the Russian Orthodox Church.
Discussion was lengthy because although Serbia officially supported the political calendar, Milanković (an astronomical delegate to the synod representing the Kingdom of Serbs, Croats and Slovenes) pressed for the adoption of his own version, in which the centennial leap years would be those giving remainder 200 or 600 when divided by 900 and the equinox would generally fall on 20 March (as in the Gregorian). Under the official proposal the equinox would sometimes fall on 22 March. This might make Easter fall outside its canonical limits due to the requirement that the Easter full moon follow the equinox. Also his scheme maximised the time during which the political calendar and the Gregorian would run in tandem.
Milanković's arguments won the day. In its decision the conference noted that "the difference between the length of the political year of the new calendar and the Gregorian is so small that only after 877 years it is observed difference of dates." The same decision provided that the coming 1 October should be called 14 October, thus dropping thirteen days. It then adopted the leap year rule of Milanković. The political calendar was preferred over the Gregorian because its mean year was within two seconds of the then current length of the "mean" tropical year. The present "vernal equinox" year, however, is about 12 seconds longer, in terms of mean solar days.
The synod also proposed the adoption of an astronomical rule for Easter: Easter was to be the Sunday after the midnight-to-midnight day at the meridian of the Church of the Holy Sepulchre in Jerusalem (35°13′47.2″ E or UT+2h20m55s for the small dome) during which the first full moon after the vernal equinox occurs. Although the instant of the full moon must occur after the instant of the vernal equinox, it may occur on the same day. If the full moon occurs on a Sunday, Easter is the following Sunday. Churches that adopted this calendar did so on varying dates. However, all Eastern Orthodox churches continue to use the Julian calendar to determine the date of Easter (except for the Finnish Orthodox Church and the Estonian Orthodox Church, which now use the Gregorian Easter).
The following are Gregorian minus Revised Julian date differences, calculated for the beginning of January and March in each century year, which is where differences arise or disappear, until AD 10000. These are exact arithmetic calculations, not depending on any astronomy. A negative difference means that the proleptic Revised Julian calendar was behind the proleptic Gregorian calendar. The Revised Julian calendar is the same as the Gregorian calendar from 1 March 1600 to 28 February 2800, but the following day would be 1 March 2800 (RJ) or 29 February 2800 (G); this difference is denoted as '+1' in the table. 2900 is a leap year in Revised Julian, but not Gregorian: 29 February 2900 (RJ) is the same as 28 February 2900 (G) and the next day will be 1 March 2900 in both calendars, hence the '0' notation.
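This pattern of agreement and divergence can be reproduced arithmetically. The sketch below (function names are mine, not from the source) simply compares the number of leap days each calendar has inserted since AD 1:

```python
def gregorian_leap(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def revised_julian_leap(year: int) -> bool:
    if year % 4 != 0:
        return False
    if year % 100 == 0:
        return year % 900 in (200, 600)
    return True

def date_difference(year: int) -> int:
    """Leap days inserted by the Gregorian but not the Revised Julian
    calendar through the end of `year`.  +1 means Revised Julian dates
    run one day ahead of Gregorian from the following 1 March; a
    negative value means the proleptic Revised Julian calendar is behind."""
    g = sum(gregorian_leap(y) for y in range(1, year + 1))
    r = sum(revised_julian_leap(y) for y in range(1, year + 1))
    return g - r

# The calendars agree from 1600 through February 2800, diverge by one day
# when Gregorian (but not Revised Julian) makes 2800 a leap year, and
# re-converge when Revised Julian (but not Gregorian) makes 2900 one.
```

The values for 2800 and 2900 match the '+1' and '0' notations described above.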
In 900 Julian years there are 900/4 = 225 leap days. The Revised Julian leap rule omits seven of nine century leap years, leaving 225 − 7 = 218 leap days per 900-year cycle. Thus the calendar mean year is 365 + 218/900 days, but this is actually a double-cycle that reduces to 365 + 109/450 ≈ 365.2422 days, or exactly 365 days 5 hours 48 minutes 48 seconds, which is exactly 24 seconds shorter than the Gregorian mean year of 365.2425 days (365 days 5 hours 49 minutes 12 seconds), so in the long term on "average" the Revised Julian calendar pulls ahead of the Gregorian calendar by one day in 3600 years.
The number of days per Revised Julian cycle = 900 × 365 + 218 = 328,718 days. Taking mod 7 leaves a remainder of 5, so like the Julian calendar, but unlike the Gregorian calendar, the Revised Julian calendar cycle does not contain a whole number of weeks. Therefore, a full repetition of the Revised Julian leap cycle with respect to the seven-day weekly cycle is seven times the cycle length = 7 × 900 = 6300 years.
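As a quick arithmetic check (my own sketch, not part of the source), the figures in the two preceding paragraphs can be verified with exact rational arithmetic:

```python
from fractions import Fraction

leap_days_per_cycle = 900 // 4 - 7               # 225 - 7 = 218
mean_year = 365 + Fraction(leap_days_per_cycle, 900)

# 365 + 218/900 days exceeds 365 days by 5 h 48 m 48 s = 20,928 s.
excess_seconds = (mean_year - 365) * 86400

# The Gregorian mean year of 365.2425 days is longer by 24 s per year,
# so one full day's divergence takes 86400 / 24 = 3600 years.
gregorian_mean_year = Fraction(3652425, 10000)
gap_seconds = (gregorian_mean_year - mean_year) * 86400
years_per_day = Fraction(86400) / gap_seconds

# 900 * 365 + 218 = 328,718 days per cycle; mod 7 leaves 5, so the
# weekday alignment repeats only after 7 * 900 = 6300 years.
days_per_cycle = 900 * 365 + leap_days_per_cycle
```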
The epoch of the original Julian calendar was on the Saturday before the Monday that was the epoch of the Gregorian calendar. In other words, Gregorian 1 January AD 1 was Julian 3 January AD 1.
However, the Revised Julian reform not only changed the leap rule but also made the epoch the same as that of the Gregorian calendar. This seems to have been carried out implicitly, and even scientific articles make no mention of it.
Nevertheless, it is impossible to implement calendrical calculations and calendar date conversion software without appreciating this detail and taking the two-day shift from the original Julian calendar into account. If the original Julian calendar epoch is mistakenly used in such calculations, there is no way to reproduce the currently accepted dating of the Revised Julian calendar, which yields no difference between Gregorian and Revised Julian dates from the 17th to the 28th centuries, or in most other centuries since the start of the Christian era (including the first two).
The following is a scatter plot of actual astronomical northward equinox moments as numerically integrated by SOLEX 11 using DE421 mode with extended (80-bit) floating point precision, high integration order (18th order), and forced solar mass loss ("forced" means taken into account at all times). SOLEX can automatically search for northern hemisphere spring equinox moments by finding when the solar declination crosses the celestial equator northward, and then it outputs that data as the Terrestrial Time day and fraction of day relative to noon on 1 January 2000 (the J2000.0 epoch). The progressive tidal slowing of the Earth's rotation rate was accounted for by subtracting "ΔT" as calculated by the Espenak-Meeus polynomial set recommended at the NASA Eclipses web site to obtain the J2000.0-relative Universal Time moments, which were then properly converted to Revised Julian dates and Jerusalem local apparent time, taking local apparent midnight as the beginning of each calendar day. The year range of the chart was limited to dates before AD 4400; by then "ΔT" is expected to accumulate to about six hours, with an uncertainty of a few hours.
The chart shows that the long-term equinox drift of the Revised Julian calendar is quite satisfactory, at least until AD 4400. The medium-term wobble spans about two days because, like the Gregorian calendar, the leap years of the Revised Julian calendar are not smoothly spread: they occur mostly at intervals of four years but there are occasional eight-year gaps (at 7 out of 9 century years). Evidently each of the authorities responsible for the Gregorian and Revised Julian calendars, respectively, accepted a modest amount of medium-term equinox wobble for the sake of a leap rule simple enough for mental arithmetic. The wobble is therefore essentially a curiosity that is of no practical or ritual concern.
The new calendar has been adopted by Orthodox churches as follows:
Adopting churches are known as New calendarists. The new calendar has not been adopted by the Orthodox churches of Jerusalem, Russia, Serbia (including the uncanonical Macedonian Orthodox Church), Georgia, Ukraine (as well as the churches loyal to Moscow), Mount Athos and the Greek Old Calendarists. Although Milanković stated that the Russian Orthodox Church adopted the new calendar in 1923, the present church continues to use the Julian calendar for both its fixed festivals and for Easter. A solution to this conundrum is to hypothesize that it was accepted only by the short-lived schismatic Renovationist Church, which had seized church buildings with the support of the Soviet government while Patriarch Tikhon was under house arrest. After his release, he declared that all Renovationist decrees were without grace, presumably including its acceptance of the new calendar.
The basic justification for the new calendar is the known errors of the Julian calendar, which will in the course of time lead to a situation in which those following the Julian calendar (in the Northern Hemisphere) will be reckoning the month of December (and the feast of Christ's Nativity) during the heat of summer, August and its feasts during the deep cold of winter, Easter during the autumn season, and the November feasts in the springtime. This would conflict with the Church's historic practice of celebrating Christ's birth on 25 December, a date chosen for a number of reasons. One of the reasons mentioned by Bennet is the time of the winter solstice, when the days begin to lengthen again as the physical sun makes its reappearance, along with the fact that Christ has traditionally been recognized by Christians as the metaphorical and spiritual sun who fulfills Malachi's prophetic words: "the sun of righteousness will shine with healing in its wings" (Malachi 4:2). The identification, based on this prophecy, of Jesus Christ as the "sun of righteousness" is found many times in writings of the early Church fathers and follows from many New Testament references linking Jesus with imagery of sun and light.
The defenders of the new calendar do not regard the Julian calendar as having any particular divine sanction (for more on this, see below); rather, they view the Julian calendar as a device of human technology, and thus subject to improvement or replacement just as many other devices of technology that were in use at the dawn of the Church have been replaced with newer forms of technology.
Supporters of the new calendar can also point to certain pastoral problems that are resolved by its adoption.
(1) Parishes observing the Julian calendar are faced with the problem that parishioners are supposed to continue fasting throughout western Christmas and New Year, seasons when their families and friends are likely to be feasting and celebrating New Year, often with parties, use of liquor, etc. This situation presents obvious temptations, which are eliminated when the new calendar is adopted.
(2) Another pastoral problem is the tendency of some local American media to focus attention each year on the 7 January (N.S.) / 25 December (O.S.) celebration of Christmas, even in localities where most Orthodox parishes follow the new calendar. So too, in all likelihood, do certain non-Orthodox churches profit from the Orthodox remaining Old Style, since the observance of Christmas among the Orthodox tends to focus attention on ethnic identifications of the feast, rather than on its Christian, dogmatic significance; which, in turn, tends to foster the impression in the public mind that for the Orthodox, the feast of Christ's Nativity is centered on the observance of the Julian date of that feast, rather than on the commemoration of Christ's birth. Such a focus appears to the defenders of the Revised Julian calendar and to many non-Orthodox as well, as a practice that is charming and quaint, but also anachronistic, unscientific and hence ultimately unreasonable and even cultish.
(3) Some Orthodox themselves may unwittingly reinforce this impression by ignorance of their own faith and by a consequential exclusive, or excessive, focus on the calendar issue: it has been observed, anecdotally, that some Russians cannot cite "any" difference in belief or practice between their faith and the faith of western Christians, "except" for the 13-day calendar difference.
Against the new calendar, the argument is made that inasmuch as the use of the Julian calendar was implicit in the decision of the First Ecumenical Council at Nicaea (325), no authority less than an Ecumenical Council may change this decision. However, the fact is that that Council made no decision or decree at all concerning the Julian calendar. Its silence constituted an implicit acceptance not of the Julian calendar, but of the civil calendar, which happened to be, at that time, the Julian calendar (the explicit decision of Nicaea being concerned, rather, with the date of Easter). By virtue of this, defenders of the new calendar argue that no decision by an Ecumenical Council was or is necessary today in order to "revise" (not abandon) the Julian calendar; and further, that by making the revision, the Church stays with the spirit of Nicaea I by keeping with the civil calendar in all its essentials—while conversely, failure to keep with the civil calendar could be seen as a departure from the spirit of Nicaea I in this respect. Lastly, it is argued that since the adoption of the new calendar evidently involves no change in or departure from the theological or the ethical teachings of Orthodox Christianity, but rather amounts to a merely disciplinary or administrative change—a clock correction of sorts—the authority to enact that change falls within the competency of contemporary, local episcopal authority. Implicit acceptance of this line of reasoning, or something very close to it, underlies the decision to adopt the new calendar by those Orthodox churches that have done so.
It follows that, in general, the defenders of the new calendar hold the view that in localities where the Church's episcopal authority has elected to adopt the new calendar, but where some have broken communion with those implementing this change, it is those who have broken communion who have in fact introduced the disunity, rather than the new calendar itself or those who have adopted it — although most would agree that attempts at various times to mandate the use of the new calendar through compulsion, have magnified the disunity.
To the objection that the new calendar has created problems by adjusting only the fixed calendar, while leaving all of the commemorations in the moveable cycle on the original Julian calendar, the obvious answer, of course, is that the 1923 Synod, which adopted the new calendar, did in fact change the moveable calendar as well, and that calendar problems introduced as a result of the adoption of the (fixed) new calendar alone, would not have existed had the corrections to the moveable calendar also been implemented.
According to the defenders of the new calendar, the argument that the 25 December (N.S.) observance of Christmas is a purely "secular" observance and is therefore an unsuitable time for Orthodox Christians to celebrate Christ's Nativity, is plainly inaccurate, since the observances of Christ's birth among western Christians (and today, among many Orthodox Christians) obviously occur overwhelmingly in places of worship and involve hymns, prayers, scripture readings, religious dramas, liturgical concerts, and the like. Defenders of the new calendar further note that, to the extent that 25 December is a secular observance in the western world, 7 January (i.e., 25 December O.S.) appears to be becoming one as well, in Orthodox countries that continue to follow the old calendar. In Russia, for example, 7 January is no longer a spiritual holiday for Orthodox Christians alone, but has now become a national (hence secular) holiday for all Russians, including non-Orthodox Christians, people of other religions, and nonbelievers. Where this will lead in the end remains to be seen.
Among other arguments by defenders of the new calendar are those made on the basis of "truth" (notwithstanding that the detractors of that calendar make the claim that the Old Style date, 25 December (O.S.), is the "true" celebration of Christ's Nativity). Arguments from truth can take two forms: (1) If a calendar is a system for reckoning time based on the motions of astronomical bodies—specifically the movements of Sun and Moon, in the case of the church calendar—and if precision or accuracy is understood as one aspect of truth, then a calendar that is more accurate and precise with respect to the motions of those bodies must be regarded as truer than one that is less precise. In this regard, some of those who champion the old calendar as "truth" (rather than for pastoral reasons, as seems to be the case with the national churches that adhere to it) may appear, to those following the new calendar, as the defenders of a fiction. (2) Some defenders of the new calendar argue that the celebration, in any way or form, of two feasts of Christ's Nativity within the same liturgical year is not possible, since according to the faith there is only one celebration of that feast in a given year. On this basis, they argue that those who prefer to observe a "secular" feast of the Nativity on 25 December and a "religious" one on 7 January, err in respect of the truth that there is but one feast of the Nativity each year.
While the new calendar has been adopted by many of the smaller national churches, a majority of Orthodox Christians continue to adhere to the traditional Julian calendar, and there has been much acrimony between the two parties over the decades since the change, leading sometimes even to violence, especially in Greece.
Critics see the change in calendar as an unwarranted innovation, influenced by Western society. They say that no sound theological reason has been given for changing the calendar, that the only reasons advanced are social. The proposal for change was introduced by Meletios Metaxakis, Ecumenical Patriarch of Constantinople, a patriarch whose canonical status has been disputed.
The argument is also made that since the use of the Julian calendar was implicit in the decision of the First Ecumenical Council at Nicaea (325), which standardized the calculation of the date of Easter, no authority less than an Ecumenical Council may change it. It is further argued that the adoption of the new calendar in some countries and not in others has broken the liturgical unity of the Eastern Orthodox churches, undoing the decision made by the council of bishops at Nicaea to decree that all local churches celebrate Easter on the same day. The emperor Constantine, writing to the bishops absent from the Council to notify them of the decision, argued, "Think, then, how unseemly it is, that on the same day some should be fasting whilst others are seated at a banquet".
Liturgical objections to the new calendar stem from the fact that it adjusts only those liturgical celebrations that occur on fixed calendar dates, leaving all of the commemorations on the moveable cycle on the original Julian calendar. This upsets the harmony and balance of the liturgical year. (This would not have been a problem if the recommendations of the 1923 synod to use an astronomical rule to reckon the date of Easter, as outlined above, had not been rejected.) This disruption is most noticeable during Great Lent. Certain feast days are designed to fall during Lent, such as the feast of the Forty Martyrs of Sebaste. The Feast of the Annunciation is also intended to fall either before Easter or during Bright Week. Sometimes, Annunciation will fall on the day of Easter itself, a very special concurrence known as "Kyrio-Pascha", with special liturgical practices appointed for such an occurrence. However, under the new calendar, "Kyrio-Pascha" becomes an impossibility. The Apostles' Fast displays the most difficult aspect of the new calendar. The fast begins on the moveable cycle and ends on the fixed date of 29 June; since the new calendar is 13 days ahead of the traditional Julian calendar, the Apostles' Fast is 13 days shorter for those who follow the new calendar, and some years it is completely abrogated. Furthermore, critics of the new calendar point to the advantage of celebrating the Nativity separately from the secular observances of Christmas and New Year, which are associated with partying and alcohol consumption.
Critics also point out that proponents of the new calendar tend to use worldly rather than spiritual justification for changing the calendar: wanting to "party with everyone else" at Christmas; concern that the gradual shift in the Julian calendar will somehow negatively affect the celebration of feasts that are linked to the seasons of the year. However, opponents counter that the seasons are reversed in the southern hemisphere, where the liturgical celebrations are no less valid. The validity of this argument is questionable, since the feasts of the Orthodox Church were not changed no matter where they were celebrated, and Orthodox services were held in the southern hemisphere with little issue centuries before the introduction of the new calendar.
Proponents also argue that the new calendar is somehow more "scientific", but opponents argue that science is not the primary concern of the Church; rather, the Church is concerned with other-worldliness, with being "in the world, but not of it", fixing the attention of the faithful on eternity. Scientifically speaking, neither the Gregorian calendar nor the new calendar is absolutely precise. This is because the solar year cannot be evenly divided into 24-hour segments. So any public calendar is imprecise; it is simply an agreed-upon designation of days.
From a spiritual perspective, Old Calendarists also point to a number of miraculous occurrences that occur on the old calendar exclusively, such as the "descent of the cloud on the mount" on the feast of the Transfiguration. After the calendar change was instituted, the followers of the old calendar in Greece apparently witnessed the appearance of a cross in the sky, visible to thousands on the feast of the Exaltation of the Holy Cross, 1925, of which eyewitness accounts were recorded.
For such special events, if the original Julian date and year is known then the option always exists to calculate what was the proleptic Revised Julian date of that event and then observe its anniversary on that day, if that could be socially and ritually accepted.
The calendrical arithmetic discussed here is adapted from Gregorian and Julian calendar arithmetic published by Dershowitz and Reingold, although those authors explicitly ignored the Revised Julian calendar. Their book, referred to hereinafter as "CC3", should be consulted for methods to handle BC dates and the traditional omission of a year zero, both of which are ignored here. They define the MOD operator as x MOD y = x − y × floor(x / y), because that expression is valid for negative and floating point operands, returning the remainder from dividing x by y while discarding the quotient. Expressions like floor(x / y) return the quotient from dividing x by y while discarding the remainder.
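A minimal illustration of that MOD definition (my own sketch; note that Python's built-in `%` operator happens to follow the same floored-division convention):

```python
import math

def MOD(x, y):
    """CC3's MOD: the remainder from dividing x by y with the quotient
    floored, valid for negative and floating-point operands."""
    return x - y * math.floor(x / y)

# MOD(-7, 3) is 2, not -1: the discarded quotient is floor(-7/3) = -3,
# and -7 - 3 * (-3) = 2.
```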
"isLeapYear" = ("year" MOD 4 = 0)
IF "isLeapYear" THEN
    IF "year" MOD 100 = 0 THEN
        "isLeapYear" = ("year" MOD 900 = 200 OR "year" MOD 900 = 600)
    END IF
END IF
Calendrical calculations are made consistent and straightforward for arithmetic operations if dates are first converted to an ordinal number of days relative to an agreed-upon epoch, in this case the Revised Julian epoch, which was the same as the Gregorian epoch. To find the difference between any two Revised Julian dates, convert both to ordinal day counts and simply subtract. To find a past or future date, convert a given date to an ordinal day count, subtract or add the desired number of days, then convert the result to a Revised Julian date.
The arithmetic given here will not "crash" if an invalid date is given. To verify that a given date is a valid Revised Julian date, convert it to an ordinal day count and then back to a Revised Julian date—if the final date differs from the given date then the given date is invalid. This method should also be used to validate any implementation of calendrical arithmetic, by iteratively checking thousands of random and sequential dates for such errors.
To convert a Revised Julian date to any other calendar, first convert it to an ordinal day count, and then all that is needed is a function to convert the ordinal days count to that calendar.
To convert a date from any other calendar to a Revised Julian date, first convert that calendar date to an ordinal day count, then convert ordinal days to the Revised Julian date.
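Since the step-by-step formulas are elided in this extract, the following is a deliberately simple, unoptimized sketch (all names are mine, not from CC3) that performs the same conversions by direct counting, with epoch day 1 being Revised Julian 1 January AD 1. It is suitable for cross-checking a formula-based implementation:

```python
MONTH_LENGTHS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def is_leap(year):
    """Revised Julian leap rule."""
    if year % 4 != 0:
        return False
    if year % 100 == 0:
        return year % 900 in (200, 600)
    return True

def fixed_from_rj(year, month, day):
    """Ordinal day count (epoch day 1 = 1 January AD 1) by direct counting."""
    days = sum(366 if is_leap(y) else 365 for y in range(1, year))
    for m in range(1, month):
        days += MONTH_LENGTHS[m - 1] + (1 if m == 2 and is_leap(year) else 0)
    return days + day

def rj_from_fixed(fixed):
    """Inverse conversion: peel off whole years, then whole months."""
    year = 1
    while fixed > (366 if is_leap(year) else 365):
        fixed -= 366 if is_leap(year) else 365
        year += 1
    month = 1
    while fixed > MONTH_LENGTHS[month - 1] + (1 if month == 2 and is_leap(year) else 0):
        fixed -= MONTH_LENGTHS[month - 1] + (1 if month == 2 and is_leap(year) else 0)
        month += 1
    return year, month, fixed

# Round-tripping a date validates it, as described above; subtracting two
# ordinal counts gives the number of days between two Revised Julian dates.
```

For example, `fixed_from_rj(2000, 1, 1)` returns 730120, consistent with the J2000 ordinal of 730120.5 quoted later in this section, and the day after 28 February 2800 converts back to 1 March 2800, since 2800 is not a Revised Julian leap year.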
The following constant defines midnight at the start of Revised Julian date Monday, 1 January AD 1, as the beginning of the first ordinal day ("RJepoch" = 1). This moment was Julian day number 1721425.5.
CC3 outlines functions for Gregorian and Julian calendar conversions, as well as many other calendars, always calculating in terms of the ordinal day number, which they call the "fixed date" or "rata die" (RD), assigning the number 1 to the Gregorian calendar epoch. The arithmetic herein, by using the same ordinal day numbering epoch, is fully compatible with all CC3 functions for calendrical calculations and date inter-conversions.
One can assign a different integer to the Revised Julian epoch, for the purpose of numbering ordinal days relative to some other epoch, but anyone who does so must take the epoch difference into account when using any CC3 calendar functions and when converting an ordinal day number to a weekday number.
Optionally the ordinal day number can include a fractional component to represent the time as the elapsed fraction of a day. The ordinal day number of the J2000 moment (noon on 1 January 2000) was 730120.5.
Convert a "year", "month", and "day" to the corresponding fixed day number:
If "month" is after February then subtract 1 day for a leap year or subtract 2 days for a common year:
Finally subtract a day for each prior century year (most of which are non-leap) and then add back in the number of prior century leap years:
Convert an ordinal day number to the corresponding Revised Julian "year", "month", and "day", starting by removing any fractional time-of-day portion:
Finally, calculate the day number within the month by subtracting the Fixed days count for the start of the month from the originally given Fixed days count, and then add one day:
Convert the ordinal number of days since the Revised Julian epoch to a weekday number (Sunday=1 through Saturday = 7):
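A sketch of that weekday conversion (the constant and function names are mine), with the epoch subtraction written out explicitly rather than folded into the +1:

```python
RJ_EPOCH = 1  # ordinal day number of Monday, 1 January AD 1

def day_of_week(fixed_day):
    """Weekday number, Sunday = 1 through Saturday = 7."""
    return (fixed_day - RJ_EPOCH + 1) % 7 + 1

# The epoch day itself gives (1 - 1 + 1) % 7 + 1 = 2, i.e. Monday,
# and the formula still works if RJ_EPOCH is changed to another value.
```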
Don't be tempted to omit subtracting the "RJepoch" just because it is offset by adding +1. As written, this expression is robust even if you assign a value other than one to the epoch. | https://en.wikipedia.org/wiki?curid=26253 |
Reform of the date of Easter
A reform of the date of Easter has been proposed several times because the current system for determining the date of Easter is seen as presenting two significant problems:
There have been controversies about the "correct" date of Easter since antiquity, leading to schisms and excommunications or even executions due to heresy, but most Christian churches today agree on certain points. Easter should therefore be celebrated:
There is less agreement whether Easter also should occur:
The disagreements have been particularly about the determination of moon phases and the equinox, some still preferring astronomical observation from a certain location (usually Jerusalem, Alexandria, Rome or local), most others following nominal approximations of these in either the Hebrew, Julian or Gregorian calendar using different lookup tables and cycles in their algorithms. Deviations may also result from different definitions of the start of the "day", i.e. dusk, sunset, midnight, dawn or sunrise, and the decision whether the respective starts of astronomical spring, Paschal full moon and Easter Sunday may occur in a single day as long as they are observed in that order.
It has been proposed that the first problem could be resolved by making Easter occur on a date fixed relative to the western Gregorian calendar every year, or alternatively on a Sunday within a fixed range of seven or eight dates. Tying Easter to one fixed date would serve to underline the belief that it commemorates an actual historical event, but without an accompanying calendar reform that changes the pattern of the days of the week (itself a subject of religious controversy) or adopts a leap week, it would also break the tradition of Easter always being on a Sunday, established since the 2nd century and by now deeply embedded in the liturgical practice and theological understanding of almost all Christian denominations.
The Second Vatican Council agreed in 1963 to accept a fixed Sunday in the Gregorian calendar as the date for Easter, provided that other Christian churches agreed on it as well. It also agreed in principle to adopt a civil calendar reform, as long as no days ever fell outside the cycle of seven days per week.
The Pepuzites, a 5th-century sect, celebrated Easter on the Sunday following April 6 (in the Julian calendar). This is equivalent to the Sunday closest to April 9. The April 6 date was apparently arrived at because it was equivalent to the 14th of the month of Artemisios in an earlier calendar used in the area, hence, the 14th of the first month of spring.
The two most widespread proposals for fixing the date of Easter would set it on either the second Sunday in April (8 to 14, week 14 or 15), or the Sunday after the second Saturday in April (9 to 15). They only differ in years with dominical letter G or AG where 1 April is a Sunday. In both schemes, account has been taken of the fact that—in spite of the many difficulties in establishing the dates of the historical events involved—many scholars attribute a high degree of probability to Friday 7 April 30, as the date of the crucifixion of Jesus, which would make 9 April the date of the Resurrection. Another date which is supported by many scholars is 3 April 33, | https://en.wikipedia.org/wiki?curid=26254 |
Robert Lowth
Robert Lowth (; 27 November 1710 – 3 November 1787) was a Bishop of the Church of England, Oxford Professor of Poetry and the author of one of the most influential textbooks of English grammar.
Lowth was born in Hampshire, Great Britain, the son of Dr William Lowth, a clergyman and Biblical commentator. He was educated at Winchester College and became a scholar of New College, Oxford in 1729. Lowth obtained his BA in 1733 and his Master of Arts degree in 1737. In 1735, while still at Oxford, Lowth took orders in the Anglican Church and was appointed vicar of Ovington, Hampshire, a position he retained until 1741, when he was appointed Oxford Professor of Poetry.
Bishop Lowth made a translation of the book of Isaiah, first published in 1778. The Seventh-day Adventist theologian E. J. Waggoner said in 1899 that Lowth's translation of Isaiah was "without doubt, as a whole, the best English translation of the prophecy of Isaiah".
In 1750 he was appointed Archdeacon of Winchester. In 1752 he resigned the professorship at Oxford and married Mary Jackson. Shortly afterwards, in 1753, Lowth was appointed rector of East Woodhay. In 1754 he was awarded a Doctorate in Divinity by Oxford University, for his treatise on Hebrew poetry entitled "Praelectiones Academicae de Sacra Poesi Hebraeorum" ("On the Sacred Poetry of the Hebrews"). This derives from a series of lectures and was originally published in Latin. An English translation was published by George Gregory in 1787 as ""Lectures on the Sacred Poetry of the Hebrews"". This and subsequent editions include the life of Bishop Lowth as a preface. A further edition was issued in 1815, and republished in North America in 1829 with some additional notes. Apart from those notes, however, the 1829 edition is less useful to a modern reader, because its editor chose to revert to citing in Latin many of the scriptural passages that Lowth uses as examples, as well as some of the annotations by Johann David Michaelis and others.
Lowth was appointed a fellow of the Royal Societies of London and Göttingen in 1765. He was consecrated bishop of St David's in 1766; however, before the end of the year he was transferred to the see of Oxford. He remained Bishop of Oxford until 1777 when he was appointed Bishop of London as well as dean of the chapel royal and privy councillor. In 1783 he was offered the chance to become Archbishop of Canterbury, but declined due to failing health.
Lowth was good friends with the Scottish Enlightenment figure David Hume, as noted by the prominent Scottish bookseller Andrew Millar. Millar commented that "Hume and he are very great, tho' one orthodox and ye other Hedretox".
Lowth wrote a Latin epitaph, "Cara, Vale" ("Dear one, farewell!") on the death of his daughter Maria. Much admired in the late 18th and early 19th centuries, it was set to music by the English composer John Wall Callcott.
Lowth died in 1787, and was buried in the churchyard of All Saints Church, Fulham.
Lowth seems to have been the first modern Bible scholar to notice or draw attention to the poetic structure of the Psalms and much of the prophetic literature of the Old Testament. In Lecture 19 he sets out the classic statement of parallelism, which remains the most fundamental category for understanding Hebrew poetry. He identifies three forms of parallelism, the synonymous, antithetic and synthetic (i.e., balance only in the manner of expression without either synonymy or antithesis). This idea has been influential in Old Testament Studies to the present day.
Lowth is also remembered for his publication in 1762 of "A Short Introduction to English Grammar". Prompted by the absence of simple and pedagogical grammar textbooks in his day, Lowth set out to remedy the situation. Lowth's grammar is the source of many of the prescriptive shibboleths that are studied in schools, and established him as the first of a long line of usage commentators who judge the English language in addition to describing it. An example of both is one of his footnotes: ""Whose" is by some authors made the possessive case of "which", and applied to things as well as persons; I think, improperly."
His most famous contribution to the study of grammar may have been his tentative suggestion that sentences ending with a preposition—such as "what did you ask for?"—are inappropriate in formal writing. (This is known as preposition stranding.) In what may have been intentional self-reference, Lowth used that very construction in discussing it. "This is an Idiom which our language is strongly inclined to; it prevails in common conversation, and suits very well with the familiar style in writing; but the placing of the Preposition before the Relative is more graceful, as well as more perspicuous; and agrees much better with the solemn and elevated Style."2 Others had previously expressed this opinion; the earliest known is John Dryden in 1672.
Lowth's method included criticising "false syntax"; his examples of false syntax were culled from Shakespeare, the King James Bible, John Donne, John Milton, Jonathan Swift, Alexander Pope, and other famous writers. His understanding of grammar, like that of all linguists of his period, was influenced by the study of Latin, though he was aware that this was problematic and condemned "forcing the English under the rules of a foreign Language"1. Thus Lowth condemns Addison's sentence "Who should I meet the other night, but my old friend?" on the grounds that the thing acted upon should be in the "Objective Case" (corresponding, as he says earlier, to an oblique case in Latin), rather than taking this example and others as evidence from noted writers that "who" can refer to direct objects.
Lowth's dogmatic assertions appealed to those who wished for certainty and authority in their language. Lowth's grammar was not written for children; however, within a decade after it appeared, versions of it adapted for the use of schools had appeared, and Lowth's stylistic opinions acquired the force of law in the schoolroom. The textbook remained in standard usage throughout educational institutions until the early 20th century.
Lowth has been regarded as the first imagery critic of Shakespeare's plays; he highlighted the importance of imagery in interpreting the motives and actions of characters, the dramatic movement of the plot, and the narrative structure.3
1"A Short Introduction to English Grammar", p. 107, condemning Richard Bentley's "corrections" of some of Milton's constructions.
2"Ibid"., pp. 127–128.
3Sailendra Kumar Sen, "Robert Lowth: the first imagery critic of Shakespeare", "Notes & Queries" (OUP), Vol. 30 (1983), pp. 55–58. | https://en.wikipedia.org/wiki?curid=26255
Robert Askin
Sir Robert William Askin, GCMG (4 April 1907 – 9 September 1981), was an Australian politician and the 32nd Premier of New South Wales from 1965 to 1975, the first representing the Liberal Party. He was born in 1907 as Robin William Askin, but always disliked his first name and changed it by deed poll in 1971. Before being knighted in 1972, however, he was generally known as Bob Askin. Born in Sydney in 1907, Askin was educated at Sydney Technical High School. After serving as a bank officer and as a Sergeant in the Second World War, Askin joined the Liberal Party and was elected to the seat of Collaroy at the 1950 election.
Askin quickly rose through party ranks, eventually becoming Deputy Leader following Walter Howarth's resignation in July 1954. When long-serving party leader Vernon Treatt announced his resignation in August 1954, Askin put his name forward to replace him. The vote became deadlocked between Askin and Pat Morton, and Askin asked his former commanding officer, Murray Robson, to take the leadership instead. Robson did not live up to expectations and was deposed in September 1955 by Morton, who then became Leader. Askin remained as Deputy until, after Morton led the party to a second electoral defeat in 1959, Morton was deposed and Askin was elected to succeed him. At the May 1965 election, Askin presented the Liberal Party as a viable alternative government. He won a narrow victory, ending a 24-year Labor hold on government.
Askin's time in office was marked by a significant increase in public works programs, strong opposition to an increase in Commonwealth powers, laissez-faire economic policies and wide-ranging reforms in laws and regulations such as the Law Reform Commission, the introduction of consumer laws, legal aid, breath-testing of drivers, the liberalisation of liquor laws and the restoration of Postal voting in NSW elections. More controversial changes included the 1967 abolition of Sydney City Council and increased rates of development in Sydney, often at the expense of architectural heritage and historic buildings. This culminated in the 'Green ban' movement of the 1970s led by the Union movement to conserve the heritage of Sydney.
At the end of his term, after winning another three elections, Askin was the longest-serving Premier of New South Wales; his record has since been overtaken by Neville Wran and Bob Carr. Askin remains the longest-serving Leader of the New South Wales Liberal Party. Since his death in 1981, however, Askin's legacy has been tarnished by persistent allegations that he was involved in organised crime and official corruption.
Robin William Askin was born in Sydney, New South Wales on 4 April 1907, at the Crown Street Women's Hospital, the eldest of three sons of Ellen Laura Halliday (née Rowe) and William James Askin, an Adelaide-born sailor and worker for New South Wales Railways. His parents later married on 29 September 1916. Askin spent his early years in Stuart Town before his family moved to Glebe, a working-class inner-city suburb of Sydney. After primary education at Glebe Public School, Askin was awarded a bursary to study at Sydney Technical High School, where he sat in the same class as the future aviator Charles Kingsford Smith. At school he gained good marks, with a particular interest in Mathematics and History, and enjoyed swimming and Rugby League. He completed his Intermediate Certificate in 1921.
At the age of 15, after a short time in the electrical trade, in 1922 Askin joined the Government Savings Bank of New South Wales as a Clerk. However, when the Savings Bank closed due to the Great Depression in 1931, he joined the Rural Bank of New South Wales. Between 1925 and 1929 Askin served part-time as a Lieutenant in the 55th Battalion, Citizens Military Forces. On 5 February 1937 Askin married Mollie Isabelle Underhill, a typist at the bank, at Gilbert Park Methodist Church, Manly. They lived in Manly for the rest of their lives. He began his interest in politics by assisting in Percy Spender's successful campaign for Askin's local seat of Warringah as an Independent candidate at the 1937 Federal election. In 1940 Askin was appointed manager of the Bank service department, which focused on public relations. He served as vice-president from 1939 to 1940 and President from 1940 to 1941 of the Rural Bank branch of the United Bank Officers' Association.
Askin enlisted as a Private in the Second Australian Imperial Force on 30 March 1942. An instructor with the 14th Infantry Training Battalion at Dubbo, he was appointed Acting Corporal, then reverted to Private. In November 1942 he joined the 2/31st Infantry Battalion in New Guinea, where he served for two months. He was in New Guinea for another six months from July 1943. Landing at Balikpapan, Borneo, in July 1945, Askin was promoted to Sergeant under Lieutenant Colonel Murray Robson. When hostilities ceased, he unsuccessfully attempted to set up an import business in Bandjermasin. Returning to Australia in February 1946, he was demobilised on 22 March.
Upon demobilisation, Askin returned to work at the Rural Bank, managing its travel department. However, his interest in politics arose again when he assisted his former commanding officer, Lieutenant Colonel Robson, in retaining his seat of Vaucluse at the 1947 state election for the newly formed Liberal Party, which Askin then joined. Rapidly rising through the party ranks, Askin soon became President of the Liberals' Manly branch and supported Bill Wentworth's successful bid for the new seat of Mackellar at the 1949 election.
Askin gained preselection for and won the newly created seat of Collaroy, located in the Northern Beaches, at the 17 June 1950 election, gaining 63.69% of the vote. Vernon Treatt, Leader of the Liberal Party since 1946, led the Liberal/Country Coalition at the election, which resulted in a hung parliament: Treatt's Coalition gained 12 seats with a swing of 6.7%, for a total of 46 seats. With the Labor Party also holding 46 seats, the balance of power lay with the two re-elected Independent Labor members, James Geraghty and John Seiffert, who had been expelled from the party for disloyalty during the previous parliament. Under a legalistic interpretation of the ALP rules, Seiffert was readmitted to the party and, together with the support of Geraghty, Premier James McGirr and Labor were able to stay in power. As the new local member for a constituency covering most of the Northern Beaches from North Manly to Pittwater, Askin protested against the lack of government development and services in the area, such as sewerage, education, and transport.
Labor's near-defeat weakened McGirr's position and he was replaced as premier by Joseph Cahill in April 1952. Cahill had won popular support as a vigorous and impressive minister who had resolved problems with New South Wales' electricity supply and in his first 10 months as premier had reinvigorated the party. He appeared decisive and brought order to the government's chaotic public works program. In addition, he attacked the increasingly unpopular federal Coalition government of Robert Menzies. All this contributed to Treatt's Coalition being defeated at the 14 February 1953 election, with a total loss of ten seats and a swing against them of 7.2%. Askin retained his seat with 63.35%.
With confidence in his leadership demolished, Treatt's Liberal Party descended into factional in-fighting, culminating in the resignation of Deputy Leader Walter Howarth in July 1954, publicly announced on 4 July, citing his feeling that Treatt doubted his loyalty. He was replaced by the Party Whip, Askin. The resignation split the party and sparked a leadership challenge from Pat Morton. At the party meeting on 6 July, Treatt narrowly defeated Morton by 12 votes to 10. With party support eroded, Treatt did not remain leader for long. On Friday 6 August 1954, Treatt announced that he would resign as leader. At the following party meeting, after a deadlocked vote between Askin and Morton, Askin asked his friend Murray Robson to nominate, and Robson was elected to succeed Treatt.
Like other senior members of the party, after having no conservative government since Alexander Mair in 1941, Robson had no experience in government, had little interest in policy and alienated many party members by trying to forge a closer alliance with Michael Bruxner's Country Party. Over a year after Robson assumed the leadership, at a party meeting on 20 September 1955, senior party member Ken McCaw moved that the leadership be declared vacant, citing that Robson's leadership lacked the qualities necessary for winning the next election. The motion was carried 15 votes to 5. Morton was then elected unopposed as leader, with Askin remaining as Deputy Leader.
Morton then led the party to defeat at the election on 3 March 1956. The Coalition gained six seats, reducing the government's majority from twenty to six. Askin retained Collaroy with 70.14%. Morton again led the opposition to the ballot at the 21 March 1959 election, which resulted in an overall gain of three seats but the loss of two seats to Labor. After counting was finalised the Cahill Government was left with an overall majority of four seats. Askin retained his seat with 71.09%.
Morton's refusal to give up his many business interests while leader led many to accuse him of being a 'part-time leader', and together with his second election loss this eroded confidence in his leadership. On 14 July 1959, three Liberal MLAs called on Morton to resign, stating that the party needed a full-time leader and that Morton no longer commanded the majority support of his colleagues. Morton refused and instead called an emergency meeting on 17 July to confirm his leadership. By this time, Askin had emerged as one of the main opponents of his longtime friend and former commander. However, he and the other major challenger to Morton's leadership, Eric Willis, declared that they would only take the leadership if they were given an absolute majority of 28 votes. At the party meeting, a spill motion to remove Morton as leader was carried by two votes. Willis then surprised many by deciding not to put his name forward for nomination, leaving Askin to take the leadership unopposed. Willis was eventually elected as Deputy Leader. Upon election, Askin declared that "One of my main tasks will be to sell our [Liberal Party] ideas and principles to the working man."
When Premier Cahill died on 22 October 1959, he was replaced by Askin's friend and parliamentary contemporary, Robert "Bob" Heffron, which tended to calm Askin's aggression and opposition towards the government. At the March 1962 election, Labor had been in power for 21 years and Heffron had been Premier for two and a half years. Heffron was 72 at the time of the election, and his age and the longevity of the government were made issues by Askin's opposition, which described the government as being composed of "tired old men". The standing of Heffron's government suffered when the electors rejected its proposal to abolish the New South Wales Legislative Council at a referendum in April 1961, the first time Labor had lost a state electoral poll in 20 years. Askin's successful opposition campaign centred on warning of a Labor-dominated single house subject to "Communist and Trades Hall influence".
Labor's policies for the election included the establishment of a Department of Industrial Development to reduce unemployment, free school travel, aid to home buyers and commencing the construction of the Sydney–Newcastle Freeway as a toll-road. By contrast, Askin put forward a wide-ranging program of reform and addressed contentious issues including the introduction of State Aid for private schools, making rent control fairer and the legalisation of off-course betting on horse races. Askin accused the state government of allowing the transport infrastructure of the state to decline and promised to build the Newcastle freeway without a toll, to construct the Eastern Suburbs Railway and to plan for a second crossing of Sydney Harbour. Askin also made promises for more resources in mental health and district hospitals.
Despite these promises, Askin and the new Country Party Leader, Charles Cutler, lost the election to Heffron, mainly due to the adverse reaction of voters to the November 1960 "horror budget" and credit squeeze of the federal Coalition government under Menzies. The Coalition lost five seats, despite a small swing of 0.16% towards it and the support of prominent media businessman Frank Packer, who helped project the image of Askin and the Liberals as a viable alternative government. Askin retained his seat with 72.53%.
The 1965 campaign against the Labor Government, led since April 1964 by Jack Renshaw and widely perceived to be tired and devoid of ideas, was notable as one of Australia's first "presidential-style" campaigns, with Askin the major focus of campaigning under the main theme "With Askin You'll Get Action". He received vigorous support from the newspapers and TV stations owned by Packer. At the May 1965 election, the Liberal/Country Coalition gained 49.8% of the vote to the ALP's 43.3%. While the Liberals took only two seats from Labor, Askin won the support of the two independent members, Douglas Darby (Manly) and Harold Coates (Hartley), giving him the numbers to end Labor's 24-year run in power. He officially took office on 1 May, with Charles Cutler of the Country Party as Deputy Premier.
The Askin Government was sworn in by the Governor of New South Wales, Sir Eric Woodward, on 13 May at Government House. It was the first government headed by the Liberal Party since the main non-Labor party in the state adopted the Liberal banner, and Askin was one of only three Liberal leaders to win power from Labor. Askin, who served as his own Treasurer, involved himself heavily in the business of Government, while also maintaining a range of social engagements and regular outings to the racetrack or Rugby League games. One of the privileges of office was access to a Ministerial car and personal driver, which was particularly important for Askin, who did not drive. On one occasion, when Askin was supposed to drive a new Holden from the factory assembly line during a visit, he arranged for his driver, Russ Ferguson, to be hidden on the car floor working the controls while Askin held the wheel.
Askin's government was marked by strong opposition to an increase in Commonwealth powers, a tough stance on "law and order" issues, laissez-faire economic policies, and aggressive support for industrial and commercial development. At his first Cabinet meeting, Askin restored direct air services between Sydney and Dubbo, and required Jørn Utzon, the Danish architect then working on the Sydney Opera House, to provide a final price and completion date for the Opera House, which had gone past the original estimates for both. His Public Works Minister Davis Hughes began to assert control over the project and demanded that costs be reined in. This brought him into direct conflict with Utzon and in February 1966, after a bitter standoff and the suspension of progress payments by Hughes, Utzon resigned, sparking a major public outcry. Two weeks after the first Government meeting, the Askin Government abolished the tow-away system for Sydney and Newcastle. In 1966 the University of New South Wales awarded him an honorary Doctor of Letters (D.Litt.).
Despite a hostile Legislative Council, an extended drought and various industrial disputes, Askin and his Government passed several reforms. Among them were the removal of trading-hours restrictions on small businesses, the abolition of juries for motor accident damage cases, and the extension of liquor trading hours, which brought an end to the "Six o'clock swill". The Government also moved into legal and local government reforms, attacking pollution and restoring the previously abolished postal voting rights in state elections. Askin also addressed the demands of the New England New State Movement by holding a referendum in 1967, which was defeated by a large margin.
Many of his government's reforms were due to his Minister for Justice, John Maddison, and Attorney-General Sir Kenneth McCaw, who initiated the establishment of the Law Reform Commission of New South Wales, the introduction of consumer laws, an ombudsman, legal aid, health labels on cigarette packs, breath-testing of drivers, limits on vehicle emissions, the liberalisation of liquor laws, and compensation for victims of violent crime. There was also a new National Parks and Wildlife Service to assist environment conservation and protection. Despite these positive reforms, Askin's government maintained a brutal prison and corrective regime that was to culminate in the Bathurst Gaol riots in 1970 and 1974.
Askin, along with his Minister for Local Government, Pat Morton, oversaw the rapid escalation of building development in inner-city Sydney and the central business district, which followed in the wake of his controversial 1967 abolition of Sydney City Council and a redistribution of municipal electoral boundaries that was aimed at reducing the power of the rival Labor Party. On its abolition, Morton commented that it was "essential for Sydney's progress" and replaced the City Council with a Commission, headed by another former Liberal leader, Vernon Treatt.
The Sydney metropolitan area at the time was marked by increasing strains on state infrastructure, and the Askin Government's pro-development stance was widely seen as an attempt to alleviate these problems. Despite this, the newly established State Planning Authority was continually criticised for not being fully accountable to the public, particularly as the pro-business Sydney Commissioners worked side by side with the planning authority to push development in the Sydney CBD to its highest levels ever, embodied by the construction of the MLC Centre and the demolition of the Theatre Royal, Sydney and the Australia Hotel. Other controversial schemes proposed by his government were a massive freeway system that was planned to be driven through the hearts of historic inner-city suburbs including Glebe and Newtown, and an equally ambitious scheme of 'slum clearance' that would have brought about the wholesale destruction of the historic areas of Woolloomooloo and The Rocks. This eventually culminated in the 1970s Green ban movement, led by union leader Jack Mundey, to protect the architectural heritage of Sydney.
At the 24 February 1968 election, Askin increased his previously tenuous majority, scoring a six-seat swing against Labor's Renshaw and an overall majority of 12 over the Labor Party and the two Independents. Askin retained his seat with 70.97%. It was the first time since the UAP/Country Coalition won three consecutive elections from 1932 to 1938 that a non-Labor government in New South Wales had been reelected.
In mid-1968 Askin famously became embroiled in a media controversy over the reporting of remarks he made at a United States Chamber of Commerce lunch in Sydney in July 1968 (also the day Opposition Leader Renshaw resigned, to be replaced by Pat Hills), in which he spoke of the October 1966 state visit by United States President Lyndon B. Johnson. Askin had joined Prime Minister Harold Holt, President Johnson and the American Ambassador, Ed Clark, in a drive through the Sydney CBD. As Johnson's motorcade drove into Liverpool Street, several anti-Vietnam War protesters, including Graeme Dunstan, threw themselves in front of the car carrying the official party. As Askin later recalled, a police officer had informed him that some communists were obstructing the route, and Askin claimed he had instructed the officer to drag them off. As the car moved on, he then said to Johnson "half-jocularly": "what I ought to have told him was to ride over them", to which Johnson replied "a man after my own heart". At the subsequent luncheon, Askin instead reported that he had made the remark to the police officer, which a journalist attending the event later reported as "Run over the bastards."
As Treasurer, Askin focused on the state budget and on Commonwealth–State financial relations. His attitude towards the Commonwealth was shaped by his first premiers' conference in 1965, when Prime Minister Menzies negotiated with the Victorian premier Henry Bolte to achieve an extra grant of funds for Victoria at the expense of the other states and closed the conference before the other Premiers could object. At subsequent premiers' conferences he opposed the 'centralising' tendencies of Canberra and became a strong advocate of the rights of the states.
With John Gorton becoming Prime Minister after Holt's death, Askin came into conflict with the Commonwealth Government over Gorton's determination to maintain federal command over taxation and in June 1968 declared that he could veto any form of state taxation. In late 1969, Askin, with Bolte, organised an 'emergency' premiers' conference, without Gorton, to publicise the disadvantages of the States, a move that was partly responsible for the party deposition of Gorton in 1971.
Askin had an even greater dislike for Gorton's successor, William McMahon, and received financial support from McMahon only when Askin threatened to release a NSW "horror budget" that could damage federal Liberal voting intentions. However, when McMahon lost the 1972 election to Labor Leader Gough Whitlam, relations between Sydney and Canberra deteriorated further. Whitlam's centralising economic policies and his decision to end legal appeals to the Privy Council of the United Kingdom drew criticism from Askin.
At the 13 February 1971 state election, the Coalition suffered a swing of four seats, but still managed a narrow win against Labor and new leader Pat Hills, taking 49 seats–a bare majority of one–in the expanded 96-seat Legislative Assembly.
Throughout his time as Premier, he was assisted by Charles Cutler as Deputy Premier and Leader of the Country Party. Cutler served as Acting Premier at times when Askin was suffering from illness, having suffered two heart attacks in 1969 and 1973. In 1972 the Eastern Orthodox Church of Antioch presented Askin with the Order of St Peter and St Paul for his services to ethnic minorities.
In 1971 Askin changed his name from "Robin" to "Robert" by deed poll. On 1 January 1972, he was appointed a Knight Commander of the Order of St Michael and St George (KCMG). Later that year, taking advantage of unease at the increasingly erratic Labor government of Gough Whitlam and the growing economic problems seen to be caused by it, Askin called an early election for 1973. A setback arose in the northern Sydney seat of Gordon, when the Liberal member and Education Minister, Harry Jago, failed to nominate his candidacy, thereby losing the seat to the Democratic Labor Party before the election took place. Nevertheless, the Coalition went on to a record fourth win against the ALP, led by Pat Hills, increasing the Liberal/Country majority by four seats and making Askin the only major party leader to win four consecutive elections as Premier until Neville Wran of the ALP. Askin contested the election in Pittwater, which replaced his former seat of Collaroy. In 1973 he was appointed an Officer of the Lebanese National Order of the Cedar.
His last term in office was marked by tension between the NSW and Victorian Governments and a view that Askin was getting out of touch with the voters. Late in 1974, Askin announced his resignation, and his last intervention was to support his Minister for Lands, Thomas Lewis, in his bid to be Askin's successor instead of the Deputy Leader and Minister for Education, Sir Eric Willis. It was reported that Lewis had offered to upgrade Askin's knighthood from Knight Commander (KCMG) to Knight Grand Cross (GCMG) of the Order of St Michael and St George, while Willis was uncommitted. Askin retired from politics in January 1975 and was succeeded by Lewis as Premier. On 14 June 1975 he was elevated to Knight Grand Cross, for his service as Premier. His resignation began a turbulent year for the government. Lewis was ousted in a party room coup by Willis in 1976, but Willis only lasted four months before losing the 1976 election to Labor, ending the longest unbroken run for a non-Labor government since World War I.
Askin's health declined still further after 1975, and he died of heart failure on 9 September 1981 in St Vincent's Hospital, Sydney. The next day, the "Sydney Morning Herald" editorialised that he was "one of the ablest, most industrious and colourful political leaders of Australia's post-war era".
His state funeral, held on 14 September, was attended by over 1,000 mourners including Prime Minister Malcolm Fraser, Premier Neville Wran, Mervyn Wood, Justice Lionel Murphy and former NSW Labor Premier and former Governor-General Sir William McKell.
There have been persistent allegations that Askin, allegedly assisted by then Police Commissioner Norman Allan, oversaw the creation of a lucrative network of corruption and bribery involving politicians, public servants, police and Sydney's nascent organised crime syndicates.
When questioned about his wealth, Askin always attributed it to the salary from his high public office, his frugal lifestyle, good investments and canny punting. After his death the Australian Taxation Office audited his estate, and although it made no finding of criminality, it determined that a substantial part of it came from undisclosed income derived from sources other than shares or gambling.
With Askin's death in 1981, investigative journalists were freed from the threat of legal action under Australia's defamation laws. Stories about his reputed corruption were published almost immediately. Most notable of these was an article that appeared in "The National Times" co-written by David Marr and David Hickie. Headlined "Askin: friend of organised crime", it was published on the day of Askin's funeral. This was followed by David Hickie's book "The Prince and The Premier", which detailed Askin's long involvement in illegal bookmaking and allegations that he had received substantial and long-running payoffs from organised crime figures.
In 2007, the centenary of Askin's birth went largely unnoticed with the Liberal Party distancing itself from him.
The allegations of corruption against Askin were revived in 2008 when Alan Saffron, the son of the late Sydney crime boss Abe Saffron, published a biography of his father in which he alleged that Saffron had paid bribes to major public officials including Askin, former police commissioner Norman Allan, and other leading figures whom he claimed he could not name because they were still alive. Alan Saffron alleged that his father made payments of between A$5,000 and A$10,000 per week to both men over many years, that Askin and Allan both visited Saffron's office on several occasions, that Allan also visited the Saffron family home, and that Abe Saffron paid for an all-expenses overseas trip for Allan and a young female 'friend'. He also alleged that, later in Askin's premiership, Abe Saffron became the "bagman" for Sydney's illegal liquor and prostitution rackets and most illegal gambling activities, collecting payoffs that were then passed to Askin, Allan and others, in return for which his father was completely protected. | https://en.wikipedia.org/wiki?curid=26259
Redshift
In physics, redshift is a phenomenon where electromagnetic radiation (such as light) from an object undergoes an increase in wavelength. Whether or not the radiation is visible, "redshift" means an increase in wavelength, equivalent to a decrease in wave frequency and photon energy, in accordance with, respectively, the wave and quantum theories of light.
Neither the emitted nor perceived light is necessarily red; instead, the term refers to the human perception of longer wavelengths as red, which is at the section of the visible spectrum with the longest wavelengths. Examples of redshifting are a gamma ray perceived as an X-ray, or initially visible light perceived as radio waves. The opposite of a redshift is a blueshift, where wavelengths shorten and energy increases. However, redshift is a more common term and sometimes blueshift is referred to as negative redshift.
There are three main causes of redshifts in astronomy and cosmology: the (special relativistic) Doppler effect, in which the source moves away from the observer; gravitational redshift, in which light loses energy escaping a gravitational field; and cosmological redshift, in which the expansion of space itself stretches the wavelength of light in transit.
Knowledge of redshifts and blueshifts has been used to develop several terrestrial technologies such as Doppler radar and radar guns. Redshifts are also seen in the spectroscopic observations of astronomical objects. The redshift value is represented by the letter "z".
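As an illustration of the quantity "z" (a minimal sketch; the example wavelengths below are assumptions, not figures from the article), redshift is defined from the emitted and observed wavelengths as z = (λ_observed − λ_emitted)/λ_emitted:

```python
# Minimal sketch: computing redshift z from observed vs. emitted wavelength.
# The wavelength values below are illustrative, not from the article.

def redshift(lambda_observed: float, lambda_emitted: float) -> float:
    """Return z = (lambda_obs - lambda_emit) / lambda_emit.

    z > 0 is a redshift (wavelength increased), z < 0 a blueshift.
    Units cancel, so any consistent wavelength unit works.
    """
    return (lambda_observed - lambda_emitted) / lambda_emitted

# Hydrogen-alpha line emitted at 656.3 nm, hypothetically observed at 721.9 nm:
z = redshift(721.9, 656.3)
print(round(z, 3))  # ≈ 0.1
```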
A special relativistic redshift formula (and its classical approximation) can be used to calculate the redshift of a nearby object when spacetime is flat. However, in many contexts, such as black holes and Big Bang cosmology, redshifts must be calculated using general relativity. Special relativistic, gravitational, and cosmological redshifts can be understood under the umbrella of frame transformation laws. There exist other physical processes that can lead to a shift in the frequency of electromagnetic radiation, including scattering and optical effects; however, the resulting changes are distinguishable from true redshift and are not generally referred to as such (see section on physical optics and radiative transfer).
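The special relativistic formula and its classical approximation mentioned above can be sketched as follows (a hedged illustration for purely line-of-sight motion in flat spacetime; the chosen test velocity is an assumption): special relativity gives 1 + z = sqrt((1 + v/c)/(1 − v/c)), which reduces to the classical z ≈ v/c when v ≪ c.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def z_relativistic(v: float) -> float:
    """Special-relativistic longitudinal Doppler redshift for recession speed v (m/s)."""
    beta = v / C
    return math.sqrt((1 + beta) / (1 - beta)) - 1

def z_classical(v: float) -> float:
    """Classical low-speed approximation z ≈ v/c."""
    return v / C

# At 1% of c the two formulas differ by only about half a percent:
v = 0.01 * C
print(z_relativistic(v), z_classical(v))
```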
The history of the subject began with the development in the 19th century of wave mechanics and the exploration of phenomena associated with the Doppler effect. The effect is named after Christian Doppler, who offered the first known physical explanation for the phenomenon in 1842. The hypothesis was tested and confirmed for sound waves by the Dutch scientist Christophorus Buys Ballot in 1845. Doppler correctly predicted that the phenomenon should apply to all waves, and in particular suggested that the varying colors of stars could be attributed to their motion with respect to the Earth. Before this was verified, however, it was found that stellar colors were primarily due to a star's temperature, not motion. Only later was Doppler vindicated by verified redshift observations.
The first Doppler redshift was described by French physicist Hippolyte Fizeau in 1848, who pointed to the shift in spectral lines seen in stars as being due to the Doppler effect. The effect is sometimes called the "Doppler–Fizeau effect". In 1868, British astronomer William Huggins was the first to determine the velocity of a star moving away from the Earth by this method. In 1871, optical redshift was confirmed when the phenomenon was observed in Fraunhofer lines using solar rotation, about 0.1 Å in the red.
In 1887, Vogel and Scheiner discovered the "annual Doppler effect", the yearly change in the Doppler shift of stars located near the ecliptic due to the orbital velocity of the Earth. In 1901, Aristarkh Belopolsky verified optical redshift in the laboratory using a system of rotating mirrors.
The earliest occurrence of the term "red-shift" in print (in this hyphenated form) appears to be by American astronomer Walter S. Adams in 1908, in which he mentions "Two methods of investigating that nature of the nebular red-shift".
Beginning with observations in 1912, Vesto Slipher discovered that most spiral galaxies, then mostly thought to be spiral nebulae, had considerable redshifts. Slipher first reports on his measurement in the inaugural volume of the "Lowell Observatory Bulletin". Three years later, he wrote a review in the journal "Popular Astronomy". In it he states that "the early discovery that the great Andromeda spiral had the quite exceptional velocity of –300 km(/s) showed the means then available, capable of investigating not only the spectra of the spirals but their velocities as well." Slipher reported the velocities for 15 spiral nebulae spread across the entire celestial sphere, all but three having observable "positive" (that is recessional) velocities. Subsequently, Edwin Hubble discovered an approximate relationship between the redshifts of such "nebulae" and the distances to them with the formulation of his eponymous Hubble's law. These observations corroborated Alexander Friedmann's 1922 work, in which he derived the Friedmann–Lemaître equations.
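Hubble's law states that a galaxy's recession velocity is proportional to its distance, v = H0 · d. A minimal sketch, assuming the commonly quoted modern value H0 ≈ 70 km/s/Mpc (an assumption for illustration, not a figure from the article):

```python
# Sketch of Hubble's law v = H0 * d, using an assumed H0 of 70 km/s/Mpc
# (a commonly quoted modern value; not a figure from the article).

H0 = 70.0  # Hubble constant, km/s per megaparsec (assumed)

def recession_velocity(distance_mpc: float) -> float:
    """Recession velocity in km/s for a galaxy at the given distance in Mpc."""
    return H0 * distance_mpc

def distance_from_velocity(v_km_s: float) -> float:
    """Invert Hubble's law: distance in Mpc implied by a recession velocity."""
    return v_km_s / H0

# A galaxy receding at 7000 km/s would lie roughly 100 Mpc away:
print(distance_from_velocity(7000.0))  # 100.0
```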
Rob Reiner
Robert Norman Reiner (born March 6, 1947) is an American actor, comedian, and filmmaker. As an actor, Reiner first came to national prominence with the role of Michael Stivic on the CBS sitcom "All in the Family" (1971–1979), a performance that earned him two Primetime Emmy Awards.
As a director, Reiner was recognized by the Directors Guild of America Awards with nominations for the coming of age drama "Stand by Me" (1986), the romantic comedy "When Harry Met Sally..." (1989), and the military courtroom drama "A Few Good Men" (1992), the last of which also earned him a nomination for the Academy Award for Best Picture. He has also received four nominations for the Golden Globe Award for Best Director.
Reiner's other major directorial film credits include the heavy metal mockumentary "This Is Spinal Tap" (1984), the romantic comedy fantasy adventure "The Princess Bride" (1987), the psychological horror-thriller "Misery" (1990), the romantic comedy-drama "The American President" (1995), the buddy comedy-drama "The Bucket List" (2007), and the biographical political drama "LBJ" (2016).
Reiner also appeared in a number of his own films and various others, including "Throw Momma from the Train" (1987), "Sleepless in Seattle" (1993), "Bullets Over Broadway" (1994), "The First Wives Club" (1996), "Primary Colors" (1998), "EDtv" (1999), and "The Wolf of Wall Street" (2013).
Reiner was born into a Jewish family in the Bronx, New York, on March 6, 1947. He is the son of Estelle Reiner (née Lebost; 1914–2008), an actress, and Carl Reiner (1922–2020), a renowned comedian, actor, writer, producer and director.
As a child, Reiner lived at 48 Bonnie Meadow Road in New Rochelle, New York; the home of the fictional Petrie family in "The Dick Van Dyke Show", created by Rob's father, was 148 Bonnie Meadow Lane. He studied at the UCLA Film School.
In the late 1960s, Reiner acted in bit roles in several television shows including "Batman", "The Andy Griffith Show", "Room 222", "Gomer Pyle, U.S.M.C." and "The Beverly Hillbillies". He began his career writing for the "Smothers Brothers Comedy Hour" in 1968 and 1969, with Steve Martin as his writing partner as the two youngest writers on the show.
Two years later, Reiner became famous playing Michael Stivic, Archie Bunker's liberal son-in-law, on Norman Lear's 1970s situation comedy "All in the Family", which was the most-watched television program in the United States for five seasons (1971–1976). The character's nickname, "Meathead" (given to him by his cantankerous father-in-law Archie), became closely associated with him, even after he had left the role and gone on to build a career as a director. Reiner has stated, "I could win the Nobel Prize and they'd write 'Meathead wins the Nobel Prize'." For his performance, Reiner won two Emmy Awards in addition to three other nominations and five Golden Globe nominations. After an extended absence, Reiner returned to television acting with a recurring role on "New Girl" (2012–2018).
In 1972, Reiner, Phil Mishkin, and Gerry Isenberg created the situation comedy "The Super" for ABC. Starring Richard S. Castellano, the show depicted the life of the harried Italian American superintendent of a New York City apartment building and ran for 10 episodes in the summer of 1972. Reiner and Mishkin co-wrote the premiere episode.
Beginning in the 1980s, Reiner became known as a director of several successful Hollywood films that spanned many different genres. Some of his earlier films include cult classics such as the rock-band mockumentary "This Is Spinal Tap" (1984) and the comedic fantasy film "The Princess Bride" (1987), as well as his period piece coming of age tale "Stand by Me" (1986). He often collaborates with film editor Robert Leighton, whom he also shares with fellow director-actor Christopher Guest as their go-to editor.
Reiner has gone on to direct other critically and commercially successful films with his own company, Castle Rock Entertainment. These include the romantic comedy "When Harry Met Sally..." (1989), which has been critically ranked among the all-time best of its genre, the tense thriller "Misery" (1990), for which Kathy Bates won the Academy Award for Best Actress, and his most commercially successful work, the military courtroom drama "A Few Good Men" (1992), which was nominated for the Academy Award for Best Picture. Subsequent films directed by Reiner include the political romance "The American President" (1995), the courtroom drama "Ghosts of Mississippi" (1996), and the uplifting comedy "The Bucket List" (2007).
Reiner has continued to act in supporting roles in a number of films and television shows, including "Throw Momma from the Train" (1987), "Sleepless in Seattle" (1993), "Bullets Over Broadway" (1994), "The First Wives Club" (1996), "Primary Colors" (1998), "EDtv" (1999), "New Girl" (2012–2018), and "The Wolf of Wall Street" (2013). He has also parodied himself with cameos in works such as "" (2003) and "30 Rock" (2010).
Reiner has devoted considerable time and energy to liberal activism. His lobbying as an anti-smoking advocate, in particular, prompted his likeness to be used in a satirical role in a "South Park" episode titled "Butt Out".
Reiner is a co-founder of the American Foundation for Equal Rights, which initiated the court challenge against California Proposition 8 which banned same-sex marriage in the state.
In 1998, Reiner chaired the campaign to pass Prop 10, the California Children and Families Initiative, which created First 5 California, a program of early childhood development services, funded by a tax on tobacco products. He served as the first chairman of First 5 California, from 1999 to 2006. Reiner came under criticism for campaigning for a ballot measure (Prop 82) to fund state-run preschools while still chair of the First Five Commission, causing him to resign from his position on March 29, 2006. An audit was conducted, and it concluded that the state commission did not violate state law and that it had clear legal authority to conduct its public advertising campaigns related to preschool. Prop 82 failed to win approval, garnering only 39.1% support.
Reiner is a member of the Social Responsibility Task Force, an organization advocating moderation where social issues (such as violence and tobacco use) and the entertainment industry meet. He is also active in environmental issues, and he successfully led the effort to establish California's Ahmanson Ranch as a state park and wildlife refuge rather than as a commercial real estate development. He introduced Spinal Tap at the London Live Earth concert in July 2007.
Reiner was mentioned as a possible candidate to run against California Governor Arnold Schwarzenegger in 2006 but decided not to run for personal reasons. He campaigned extensively for Democratic presidential nominee Al Gore in the 2000 presidential election, and he campaigned in Iowa for Democratic presidential candidate Howard Dean just before the 2004 Iowa caucuses. He endorsed Hillary Clinton for president for the 2008 election. In 2015, he donated US$10,000 to Correct the Record, a political action committee which supported Hillary Clinton's 2016 presidential campaign. Since the 2016 election, he has continued to campaign against Donald Trump, calling him racist, sexist, anti-gay, and anti-Semitic, and comparing him to the Nazi police at Auschwitz. Reiner said that Harvey Weinstein is a "bad guy" but Trump is "also an abuser".
Reiner serves on the Advisory Board of the Committee to Investigate Russia.
Reiner endorsed Joe Biden for president for the 2020 election. In March 2020, Reiner demanded that President Trump be removed from office due to what he considers Trump's poor handling of the COVID-19 pandemic.
Rob Reiner married actress/director Penny Marshall in 1971. Marshall's daughter, actress Tracy Reiner ("A League of Their Own"), was from a previous marriage to Michael Henry. Reiner and Marshall divorced in 1981.
Reiner was introduced to his future (and current) wife, photographer Michele Singer, while directing "When Harry Met Sally". The meeting not only resulted in his deciding to change the ending of that film, but he also married Singer in 1989. They have three children, Jake (born 1991), Nick (born 1993), and Romy (born 1998). In 1997, Reiner and Singer founded the "I Am Your Child Foundation," and in 2004, they founded the "Parents' Action for Children," a non-profit organization with a dual purpose: a) to raise awareness of the importance of a child's early years by producing and distributing celebrity-hosted educational videos for parents, and b) to advance public policy through parental education and advocacy.
Reiner has stated that his childhood home was not observantly Jewish, although he did have a Bar Mitzvah ceremony; Reiner's father Carl has acknowledged that he himself became an atheist as the Holocaust progressed. Rob identified himself as having no religious affiliation on the January 13, 2012, episode of "Real Time with Bill Maher" and as an atheist. Reiner later told Huffington Post contributor Debra Oliver that while he rejected organized religion, he was sympathetic to the ideas of Buddhism.
In addition to his four children, Reiner has five grandchildren, through his adopted daughter Tracy.
Television
Film
Robin Wright
Robin Gayle Wright (born April 8, 1966) is an American actress and director. She is the recipient of eight Primetime Emmy Award nominations and has earned a Golden Globe Award and a Satellite Award for her work in television.
Wright first gained attention for her role in the NBC Daytime soap opera "Santa Barbara", as Kelly Capwell from 1984 to 1988. She then made the transition to film, starring in the romantic comedy fantasy adventure film "The Princess Bride" (1987). This role led Wright to further success in the film industry, with starring roles in films such as "Forrest Gump" (1994), the romantic drama "Message in a Bottle" (1999), the superhero drama-thriller "Unbreakable" (2000), the historical drama "The Conspirator" (2010), the biographical sports drama "Moneyball" (2011), the mystery thriller "The Girl with the Dragon Tattoo" (2011), the biographical drama "Everest" (2015), the superhero film "Wonder Woman" (2017), and the neo-noir science fiction film "Blade Runner 2049" (2017).
Wright starred as Claire Underwood in the Netflix political drama web television series "House of Cards", for which she won the Golden Globe Award for Best Actress – Television Series Drama in 2013, making her the first actress to win a Golden Globe for a web television series. Wright has also received six Primetime Emmy Award nominations in the Outstanding Lead Actress category for "House of Cards" (one for each season), and two nominations in the Outstanding Drama Series category in 2016 and 2017 as a producer on the show.
Wright is also one of the highest paid actresses in the United States, earning US$420,000 per episode for her role in "House of Cards" in 2016.
Wright was born in Dallas, Texas, to Gayle Gaston, a cosmetics saleswoman, and Freddie Wright, a pharmaceutical company employee. She was raised in San Diego, California, and attended La Jolla High School in La Jolla, which Gregory Peck and Raquel Welch had also attended, and Taft High School in Woodland Hills, Los Angeles, which Lisa Kudrow and Ice Cube also attended.
Wright began her career as a model, when she was 14. At the age of 18, she played Kelly Capwell in the NBC Daytime soap opera "Santa Barbara", for which she received several Daytime Emmy Award nominations.
She transitioned into feature film work as Princess Buttercup in the cult film "The Princess Bride" (1987). She gained critical acclaim in her role as Jenny Curran in "Forrest Gump" (1994), receiving Golden Globe Award and Screen Actors Guild nominations for Best Supporting Actress.
In 1996 she starred in the lead role of the film adaptation of Daniel Defoe's "Moll Flanders" (1996), for which she received a Satellite Award Nomination for Best Actress in a Drama. She was nominated for a Screen Actors Guild Award for Best Actress for her role in "She's So Lovely" (1997), a film in which she co-starred with her then-husband Sean Penn. Wright received her third Screen Actors Guild Award nomination for her role in the television film "Empire Falls" (2005).
Since 2013, Wright has appeared in the Netflix political drama web television series "House of Cards" in the role of Claire Underwood, the ruthless wife of political mastermind Frank Underwood. On January 12, 2014, she won a Golden Globe for the role, becoming the first actress to win the award for an online-only web television series; she was nominated for the same award the following year. She also received nominations for the Primetime Emmy Award in 2013 and 2014 for the same role. Following Season 4 in 2016, Wright stated that she felt Claire Underwood was the equal of Frank Underwood and demanded pay equal to that of her co-star Kevin Spacey; Netflix acquiesced. In 2017, for her performance in the fifth season, Wright received her fifth consecutive Primetime Emmy nomination for Outstanding Lead Actress in a Drama Series. For the years 2014, 2016, and 2017, Wright received Best Actress in a Drama Series nominations at the Critics' Choice Television Awards, hers being the show's only nomination in December 2017.
In October 2017, Wright was set as the show's new lead following sexual misconduct allegations against Spacey, which resulted in his being fired from the sixth and final season. Her final performance as Underwood was acclaimed, described as a "commanding performance [that] is more than enough to keep [the final season] standing strong", and earned her final nominations for the role at the Screen Actors Guild and Primetime Emmy Awards in 2019. For the latter, she became one of seven women to be nominated in the category six or more times for the same show (the first in 10 years since Mariska Hargitay for " ").
In 2017, Wright played General Antiope in "Wonder Woman" (2017), alongside Gal Gadot and Chris Pine. She appears in the "Blade Runner" sequel "Blade Runner 2049" alongside Ryan Gosling, Harrison Ford, and Jared Leto.
In April 2019, it was announced that Wright would be helming her feature film directorial debut in the film "Land". Wright would also be starring as its lead, Edee Mathis, a lawyer in grief who retreats to the Shoshone National Forest in Wyoming. Sales for the film would start at Cannes the following month. Filming began by October that year and was picked up by distributor Focus Features.
From 1986 to 1988, Wright was married to actor Dane Witherspoon, whom she met in 1984 on the set of the soap opera "Santa Barbara".
In 1989, Wright became involved with actor Sean Penn following his divorce from Madonna. Wright was offered the role of Maid Marian in the film "", but turned it down because she was pregnant. Their daughter, Dylan Frances, was born in April 1991. She backed out of the role of Abby McDeere in "The Firm" (1993) due to her pregnancy with her second child, and their son, Hopper Jack, was born in August 1993.
After breaking up and getting back together, Wright and Penn married in 1996. Their on-and-off relationship seemingly ended in divorce plans, announced in December 2007, but the divorce petition was withdrawn four months later at the couple's request. In February 2009, Wright and Penn attended the 81st Academy Awards together, at which Penn won Best Actor. Penn filed for legal separation in April 2009, but withdrew the petition in May. On August 12, 2009, Wright filed for divorce, declaring she had no plans to reconcile. The divorce was finalized on July 22, 2010.
In February 2012, Wright began dating actor Ben Foster, and their engagement was announced in January 2014. The couple called off their engagement in November 2014, but reunited in January 2015. On August 29, 2015, they announced they were ending their second engagement. In 2017, Wright began dating Clement Giraudet, a Saint Laurent executive, and they secretly wed in August 2018 in La Roche-sur-le-Buis, France.
Wright is the Honorary Spokesperson for the Dallas, Texas-based non-profit The Gordie Foundation.
In 2014, Wright co-partnered with two California-based companies; Pour Les Femmes and The SunnyLion. The SunnyLion donates a portion of its profits to the Raise Hope For Congo movement.
Wright is an activist for human rights in the Democratic Republic of the Congo. She is the narrator and executive producer of the documentary "When Elephants Fight" which highlights how multinational mining corporations and politicians in the Democratic Republic of Congo threaten human rights, and perpetuate conflict in the region. Wright is a supporter of "Stand With Congo", the human rights campaign behind the film. In 2016, she spoke publicly in support of the campaign at a film screening at the TriBeCa Film Institute in New York City, in media interviews, with journalists, and across her social media accounts.
Ryuichi Sakamoto
Sakamoto began his career while at university in the 1970s as a session musician, producer, and arranger. His first major success came in 1978 as co-founder of YMO. He concurrently pursued a solo career, releasing the experimental electronic fusion album "Thousand Knives" in 1978. Two years later, he released the album "B-2 Unit". It included the track "Riot in Lagos", which was significant in the development of electro and hip hop music. He went on to produce more solo records, and collaborate with many international artists, David Sylvian, Carsten Nicolai, Youssou N'Dour, and Fennesz among them. Sakamoto composed music for the opening ceremony of the 1992 Barcelona Olympics, and his composition "Energy Flow" (1999) was the first instrumental number-one single in Japan's Oricon charts history.
As a film-score composer, Sakamoto has won an Oscar, a BAFTA, a Grammy, and 2 Golden Globe Awards. "Merry Christmas, Mr. Lawrence" (1983) marked his debut as both an actor and a film-score composer; its main theme was adapted into the single "Forbidden Colours" which became an international hit. His most successful work as a film composer was "The Last Emperor" (1987), after which he continued earning accolades composing for films such as "The Sheltering Sky" (1990), "Little Buddha" (1993), and "The Revenant" (2015). On occasion, Sakamoto has also worked as a composer and a scenario writer on anime and video games. In 2009, he was awarded the Ordre des Arts et des Lettres from the Ministry of Culture of France for his contributions to music.
Sakamoto entered the Tokyo National University of Fine Arts and Music in 1970, earning a B.A. in music composition and an M.A. with special emphasis on both electronic and ethnic music. He studied ethnomusicology there with the intention of becoming a researcher in the field, due to his interest in various world music traditions, particularly the Japanese (especially Okinawan), Indian and African musical traditions. He was also trained in classical music and began experimenting with the electronic music equipment available at the university, including synthesizers such as the Buchla, Moog, and ARP. One of Sakamoto's classical influences was Claude Debussy, whom he described as his "hero" and of whom he stated, "Asian music heavily influenced Debussy, and Debussy heavily influenced me. So, the music goes around the world and comes full circle."
In 1975, Sakamoto collaborated with percussionist Tsuchitori Toshiyuki to release "Disappointment-Hateruma". After working as a session musician with Haruomi Hosono and Yukihiro Takahashi in 1977, the trio formed the internationally successful electronic music band Yellow Magic Orchestra (YMO) in 1978. Known for their seminal influence on electronic music, the group helped pioneer electronic genres such as electropop/technopop, synthpop, cyberpunk music, ambient house, and electronica. The group's work has had a lasting influence across genres, ranging from hip hop and techno to acid house and general melodic music. Sakamoto was the songwriter and composer for a number of the band's hit songs—including "Yellow Magic (Tong Poo)" (1978), "Technopolis" (1979), "Nice Age" (1980), "Ongaku" (1983) and "You've Got to Help Yourself" (1983)—while playing keyboards for many of their other songs, including international hits such as "Computer Game/Firecracker" (1978) and "Rydeen" (1979). He also sang on several songs, such as "Kimi ni Mune Kyun" (1983). Sakamoto's composition "Technopolis" (1979) was credited as a contribution to the development of techno music, while the internationally successful "Behind the Mask" (1978)—a synthpop song in which he sang vocals through a vocoder—was later covered by a number of international artists, including Michael Jackson and Eric Clapton.
Sakamoto released his first solo album "Thousand Knives of Ryūichi Sakamoto" in mid-1978 with the help of Hideki Matsutake—Hosono also contributed to the song "Thousand Knives". The album experimented with different styles, such as "Thousand Knives" and "The End of Asia"—in which electronic music was fused with traditional Japanese music—while "Grasshoppers" is a more minimalistic piano song. The album was recorded from April to July 1978 with a variety of electronic musical instruments, including various synthesizers, such as the KORG PS-3100, a polyphonic synthesizer; the Oberheim Eight-Voice; the Moog III-C; the Polymoog, the Minimoog; the Micromoog; the Korg VC-10, which is a vocoder; the KORG SQ-10, which is an analog sequencer; the Syn-Drums, an electronic drum kit; and the microprocessor-based Roland MC-8 Microcomposer, which is a music sequencer that was programmed by Matsutake and played by Sakamoto. A version of the song "Thousand Knives" was released on the Yellow Magic Orchestra's 1981 album "BGM". This version was one of the earliest uses of the Roland TR-808 drum machine, for YMO's live performance of "1000 Knives" in 1980 and their "BGM" album release in 1981.
In 1980, Sakamoto released the solo album "B-2 Unit", which has been referred to as his "edgiest" record and is known for the electronic song "Riot in Lagos", which is considered an early example of electro music (electro-funk), as Sakamoto anticipated the beats and sounds of electro. Early electro and hip hop artists, such as Afrika Bambaata and Kurtis Mantronik, were influenced by the album—especially "Riot in Lagos"—with Mantronik citing the work as a major influence on his electro hip hop group Mantronix. "Riot in Lagos" was later included in Playgroup's compilation album "Kings of Electro" (2007), alongside other significant electro compositions, such as Hashim's "Al-Naafyish" (1983).
According to "Dusted Magazine", Sakamoto's use of squelching bounce sounds and mechanical beats was later incorporated in early electro and hip hop music productions, such as “Message II (Survival)” (1982), by Melle Mel and Duke Bootee; “Magic’s Wand” (1982), by Whodini and Thomas Dolby; Twilight 22's “Electric Kingdom” (1983); and Kurt Mantronik's "" (1985). The 1980 release of "Riot in Lagos" was listed by "The Guardian" in 2011 as one of the 50 key events in the history of dance music.
Among other tracks on "B-2 Unit", "Differencia" has, according to "Fact", "relentless tumbling beats and a stabbing bass synth that foreshadows jungle by nearly a decade". Some tracks on the album also foreshadow genres such as IDM, broken beat, and industrial techno, and the work of producers such as Actress and Oneohtrix Point Never. For several tracks on the album, Sakamoto worked with UK reggae producer Dennis Bovell, incorporating elements of afrobeat and dub music.
Also in 1980, Sakamoto released the single "War Head/Lexington Queen", an experimental synthpop and electro record, and began a long-standing collaboration with David Sylvian, when he co-wrote and performed on the Japan track "Taking Islands In Africa". In the following year, Sakamoto collaborated with Talking Heads and King Crimson guitarist Adrian Belew and Robin Scott for an album titled "Left-Handed Dream". Following Japan's dissolution, Sakamoto worked on another collaboration with Sylvian, a single entitled "Bamboo Houses/Bamboo Music" in 1982. Sakamoto's 1982 collaboration with Kiyoshiro Imawano, "Ikenai Rouge Magic", topped the Oricon singles chart.
In 1983, Sakamoto starred alongside David Bowie in director Nagisa Oshima's "Merry Christmas Mr. Lawrence". In addition to acting in the film, Sakamoto also composed the film's musical score and again collaborated with Sylvian on the film's main theme ("Forbidden Colours") – which became a minor hit. In a 2016 interview, Sakamoto reflected on his time acting in the film, claiming that he "hung out" with Bowie every evening for a month while filming on location. He remembered Bowie as "straightforward" and "nice", while also lamenting the fact that he never mustered the courage to ask for Bowie's help while scoring the film's soundtrack as he believed Bowie was too "concentrated on acting".
Sakamoto released a number of solo albums during the 1980s. While primarily focused on the piano and synthesizer, this series of albums included collaborations with artists such as Sylvian, David Byrne, Thomas Dolby, Nam June Paik and Iggy Pop. Sakamoto would alternate between exploring a variety of musical styles, ideas and genres—captured most notably in his 1983 album "Illustrated Musical Encyclopedia"—and focusing on a specific subject or theme, such as the Italian Futurism movement.
As his solo career began to extend outside Japan in the late 1980s, Sakamoto's explorations, influences and collaborators also developed further. "Beauty" (1989) features a track list that combines pop with traditional Japanese and Okinawan songs, as well as guest appearances by Jill Jones, Robert Wyatt, Brian Wilson and Robbie Robertson. "Heartbeat" (1991) and "Sweet Revenge" (1994) features Sakamoto's collaborations with a global range of artists such as Roddy Frame, Dee Dee Brave, Marco Prince, Arto Lindsay, Youssou N'Dour, David Sylvian and Ingrid Chavez.
In 1995, Sakamoto released "Smoochy", described by the "Sound On Sound" website as Sakamoto's "excursion into the land of easy-listening and Latin", followed by the "1996" album, which featured a number of previously released pieces arranged for solo piano, violin and cello. During December 1996, Sakamoto composed an hour-long orchestral work entitled "Untitled 01", released as the album "Discord" (1998). The Sony Classical release of "Discord" was sold in a jewel case covered by a blue foil slipcase, and the CD also contained a data video track. In 1998 the Ninja Tune record label released the "Prayer/Salvation Remixes", for which prominent electronica artists such as Ashley Beedle and Andrea Parker remixed sections from the "Prayer" and "Salvation" parts of "Discord". Sakamoto collaborated primarily with guitarist David Torn and DJ Spooky—artist Laurie Anderson provides spoken word on the composition—and the recording was condensed from nine live performances of the work, recorded during a Japanese tour. "Discord" was divided into four parts: "Grief", "Anger", "Prayer" and "Salvation"; Sakamoto explained in 1998 that he was "not religious, but maybe spiritual" and that "The Prayer is to anybody or anything you want to name."
In 1998, Italian ethnomusicologist Massimo Milano published "Ryuichi Sakamoto. Conversazioni" through the Padova, Arcana imprint. All three editions of the book were published in the Italian language. Sakamoto's next album, "BTTB" (1998)—an acronym for "Back to the Basics"—was a fairly opaque reaction to the prior year's multilayered, lushly orchestrated "Discord". The album comprised a series of original pieces on solo piano, including "Energy Flow" (a major hit in Japan) and a frenetic, four-hand arrangement of the Yellow Magic Orchestra classic "Tong Poo". On the "BTTB" U.S. tour, he opened the show performing a brief avant-garde DJ set under the stage name DJ Lovegroove.
Sakamoto's long-awaited "opera" "LIFE" was released in 1999, with visual direction by Shiro Takatani, artistic director of Dumb Type. It premiered with seven sold-out performances in Tokyo and Osaka. This ambitious multi-genre multi-media project featured contributions by over 100 performers, including Pina Bausch, Bernardo Bertolucci, Josep Carreras, the Dalai Lama and Salman Rushdie.
Sakamoto teamed with cellist Jacques Morelenbaum (a member of his "1996" trio), and Morelenbaum's wife, Paula, on a pair of albums celebrating the work of bossa nova pioneer Antonio Carlos Jobim. They recorded their first album, "Casa" (2001), mostly in Jobim's home studio in Rio de Janeiro, with Sakamoto performing on the late Jobim's grand piano. The album was well received, having been included in "The New York Times"' list of the top albums of 2002. A live album, "Live in Tokyo", and a second album, "A Day in New York", soon followed. Sakamoto and the Morelenbaums would also collaborate on N.M.L. No More Landmine, an international effort to raise awareness for the removal of landmines. The trio would release the single "Zero Landmine", which also featured David Sylvian, Brian Eno, Kraftwerk, Cyndi Lauper, and Haruomi Hosono & Yukihiro Takahashi, the other two founding members of Yellow Magic Orchestra, amongst nearly one hundred other performers.
Sakamoto collaborated with Alva Noto (an alias of Carsten Nicolai) to release "Vrioon", an album of Sakamoto's piano clusters treated by Nicolai's unique style of digital manipulation, involving the creation of "micro-loops" and minimal percussion. The two produced this work by passing the pieces back and forth until both were satisfied with the result. This debut, released on the German label Raster-Noton, was voted record of the year 2004 in the electronica category by the British magazine "The Wire". They then released "Insen" (2005)—while produced in a similar manner to "Vrioon", this album is somewhat more restrained and minimalist. They continued to collaborate, releasing two more albums: "utp_" (2008) and "Summvs" (2011).
In 2005, Finnish mobile phone manufacturer Nokia hired Sakamoto to compose ring and alert tones for their high-end phone, the Nokia 8800. In 2006, Nokia offered the ringtones for free on their website. Around this time, a reunion with YMO cofounders Hosono and Takahashi caused a stir in the Japanese press. They released a single "Rescue" in 2007 and a DVD "HAS/YMO" in 2008. In July 2009, Sakamoto was honored as Officier of Ordre des Arts et des Lettres at the French embassy in Tokyo.
Throughout the latter part of the 2000s, Sakamoto collaborated on several projects with visual artist Shiro Takatani, including the installations "LIFE - fluid, invisible, inaudible..." (2007–2013), commissioned by YCAM, Yamaguchi, and "collapsed" and "silence spins", shown at the Museum of Contemporary Art Tokyo in 2012 and at the 2013 Sharjah Biennial (U.A.E.), as well as "LIFE-WELL" in 2013 and a special version for Park Hyatt Tokyo's 20th anniversary in 2014. He also wrote music for the joint performance "LIFE-WELL" featuring the Noh/Kyogen actor Mansai Nomura, and for Shiro Takatani's performance "ST/LL" in 2015.
In 2013, Sakamoto was a jury member at the 70th Venice International Film Festival. The jury viewed 20 films and was chaired by filmmaker Bernardo Bertolucci.
In 2014, Sakamoto became the first Guest Artistic Director of The Sapporo International Art Festival 2014 (SIAF2014).
On July 10, 2014, Sakamoto released a statement indicating that he had been diagnosed with oropharyngeal cancer in late June of the same year. He announced a break from his work while he sought treatment and recovery. On August 3, 2015, Sakamoto posted on his website that he was "in great shape ... I am thinking about returning to work" and announced that he would be providing music for Yoji Yamada's "Haha to Kuraseba" ("Living with My Mother"). In 2015, Sakamoto also composed the score for Alejandro González Iñárritu's film "The Revenant", for which he received a Golden Globe nomination.
In January 2017 it was announced that Sakamoto would release a solo album in April 2017 through Milan Records; the new album, titled "async", was released on March 29, 2017 to critical acclaim. In February 2018, he was selected to be on the jury for the main competition section of the 68th Berlin International Film Festival.
On June 14, 2018, a documentary about the life and work of Sakamoto, entitled "Coda", was released. The film follows Sakamoto as he recovers from cancer and resumes creating music, protests nuclear power plants following the Fukushima Daiichi Nuclear Disaster, and creates field recordings in a variety of locales. Directed by Stephen Nomura Schible, the documentary was met with critical praise.
Sakamoto's production credits represent a prolific career in this role. In 1983, he produced Mari Iijima's debut album "Rosé", the same year that the Yellow Magic Orchestra was disbanded. Sakamoto subsequently worked with artists such as Thomas Dolby; Aztec Camera, on the "Dreamland" (1993) album; and Imai Miki, co-producing her 1994 album "A Place In The Sun". In 1996, Sakamoto produced "Mind Circus", the first single from actress Miki Nakatani, leading to a collaborative period spanning nine singles and seven albums through 2001.
Roddy Frame, who worked with Sakamoto as a member of Aztec Camera, explained in a 1993 interview preceding the release of "Dreamland" that he had had to wait a lengthy period of time before he was able to work with Sakamoto, who wrote two soundtracks, a solo album and the music for the opening ceremony at the Barcelona Olympics, prior to working with Frame over four weeks in a New York studio. Frame said that he was impressed by the work of YMO and the "Merry Christmas Mr Lawrence" soundtrack, explaining: "That's where you realise that the atmosphere around his compositions is actually in the writing - it's got nothing to do with synthesisers." Frame's decision to approach Sakamoto was finalized after he saw Sakamoto perform at the Japan Festival held in London, United Kingdom. Of his experience recording with Sakamoto, Frame said:
Sakamoto began working in films, as a composer and actor, in Nagisa Oshima's "Merry Christmas Mr. Lawrence" (1983), for which he composed the score, title theme, and the duet "Forbidden Colours" with David Sylvian. Sakamoto later composed the score for Bernardo Bertolucci's "The Last Emperor" (1987), which earned him the Academy Award alongside fellow composers David Byrne and Cong Su. In that same year, he composed the score to the cult-classic anime film "". Sakamoto also went on to compose the score for the opening ceremony of the 1992 Summer Olympics in Barcelona, Spain, telecast live to an audience of over a billion viewers.
Other films scored by Sakamoto include Pedro Almodóvar's "Tacones lejanos" ("High Heels") (1991), Bertolucci's "Little Buddha" (1993), Oliver Stone's "Wild Palms" (1993), John Maybury's "" (1998), Brian De Palma's "Snake Eyes" (1998) and "Femme Fatale" (2002), Oshima's "Gohatto" (1999), and Jun Ichikawa's "Tony Takitani" (2005); Ichikawa also directed the Mitsui ReHouse commercials that ran from 1997 to 1999, starring Chizuru Ikewaki and Mao Inoue.
Several tracks from Sakamoto's earlier solo albums have also appeared in film soundtracks. In particular, variations of "Chinsagu No Hana" (from "Beauty") and "Bibo No Aozora" (from "1996") provide the poignant closing pieces for Sue Brooks's "Japanese Story" (2003) and Alejandro González Iñárritu's "Babel" (2006), respectively. In 2015, Sakamoto teamed up with Iñárritu to score his film, "The Revenant" starring Leonardo DiCaprio and Tom Hardy.
Sakamoto has also acted in several films: perhaps his most notable performance was as the conflicted Captain Yonoi in "Merry Christmas Mr Lawrence", alongside Takeshi Kitano and British rock singer David Bowie. He also played roles in "The Last Emperor" (as Masahiko Amakasu) and Madonna's "Rain" music video.
Sakamoto's first of three marriages occurred in 1972, but ended in divorce two years later—Sakamoto has a daughter from this relationship. Sakamoto then married popular Japanese pianist and singer Akiko Yano in 1982, following several musical collaborations with her, including touring work with the Yellow Magic Orchestra. Sakamoto's second marriage ended in August 2006, 14 years after a mutual decision to live separately—Yano and Sakamoto raised one daughter, J-pop singer Miu Sakamoto. He has lived with his manager and wife Norika Sora since around 1990 and has two children with her.
Beginning in June 2014, Sakamoto took a year-long hiatus after he was diagnosed with oropharyngeal cancer. In 2015, he returned, stating: "Right now I'm good. I feel better. Much, much better. I feel energy inside, but you never know. The cancer might come back in three years, five years, maybe 10 years. Also the radiation makes your immune system really low. It means I'm very welcoming [of] another cancer in my body."
Sakamoto is a member of the anti-nuclear organization Stop Rokkasho and has demanded the closing of the Hamaoka Nuclear Power Plant. In 2012, he organized the No Nukes 2012 concert, which featured performances by 18 groups, including Yellow Magic Orchestra and Kraftwerk. Sakamoto is also known as a critic of copyright law, arguing in 2009 that it is antiquated in the information age. He argued that in "the last 100 years, only a few organizations have dominated the music world and ripped off both fans and creators" and that "with the internet we are going back to having tribal attitudes towards music."
In 2015 Sakamoto also supported opposition to the relocation of Marine Corps Air Station Futenma to Oura Bay in Henoko, with a new, Okinawan version of his 2004 single "Undercooled", sales of which partially contributed to the "Henoko Fund", aimed at stopping the relocation of the base on Okinawa.
In 2006 Sakamoto, in collaboration with Japan's largest independent music company, Avex Group, founded Commmons, a record label seeking to change the manner in which music is produced. Sakamoto has explained that Commmons is not his label, but is a platform for all aspiring artists to join as equal collaborators, to share the benefits of the music industry. On the initiative's "About" page, the label is described as a project that "aims to find new possibilities for music, while making meaningful contribution to culture and society." The name "Commmons" is spelt with three "m"s because the third "m" stands for music.
Sakamoto has won a number of awards for his work as a film composer, beginning with his score for "Merry Christmas, Mr. Lawrence" (1983) winning him the BAFTA Award for Best Film Music. His greatest award success was for scoring "The Last Emperor" (1987), which won him the Academy Award for Best Original Score, Golden Globe Award for Best Original Score, and Grammy Award for Best Score Soundtrack Album for a Motion Picture, Television or Other Visual Media, as well as a BAFTA nomination.
His score for "The Sheltering Sky" (1990) later won him his second Golden Globe Award, and his score for "Little Buddha" (1993) received another Grammy Award nomination. In 1997, his collaboration with Toshio Iwai, "Music Plays Images X Images Play Music", was awarded the Golden Nica, the grand prize of the Prix Ars Electronica competition. He also contributed to the Academy Award winning soundtrack for "Babel" (2006) with several pieces of music, including the "Bibo no Aozora" closing theme. In 2009, he was awarded the Ordre des Arts et des Lettres from France's Ministry of Culture for his musical contributions. His score for "The Revenant" (2015) was nominated for the Golden Globe and BAFTA, and won Best Musical Score from the Dallas–Fort Worth Film Critics Association.
The music video for "Risky", written and directed by Meiert Avis, also won the first ever MTV "Breakthrough Video Award". The groundbreaking video explores transhumanist philosopher FM-2030's (Persian: فریدون اسفندیاری) ideas of "Nostalgia for the Future", in the form of an imagined love affair between a robot and one of Man Ray's models in Paris in the late 1930s. Additional inspiration was drawn from Jean Baudrillard, Edvard Munch's 1894 painting "Puberty", and Roland Barthes's "Death of the Author". The surrealist black-and-white video uses stop motion, light painting, and other retro in-camera effects techniques. Meiert Avis shot Sakamoto while at work on the score for "The Last Emperor" in London. Sakamoto also appears in the video painting words and messages to an open shutter camera. Iggy Pop, who performs the vocals on "Risky", chose not to appear in the video, allowing his performance space to be occupied by the surrealist-era robot.
Sakamoto won the Golden Pine Award (Lifetime Achievement) at the 2013 International Samobor Film Music Festival, along with Clint Eastwood and Gerald Fried.
Solo studio albums
Roger the Dodger
Roger the Dodger, whose real name is Roger Dawson, is a fictional character featured regularly in the UK comic "The Beano". His strip consists solely of Roger's basic remit of avoiding chores and homework, which usually involves him concocting complex and ultimately disastrous plans, the undoing of which results in him being punished (usually by his long-suffering father). To perform these tasks he enlists the help of his many 'dodge books'.
He first appeared in issue 561, dated 18 April 1953. His appearance is vaguely similar to that of Dennis the Menace from the same magazine; he wears a black-and-red chequered jumper, black trousers and takes better care of his hair than his equally mischievous counterpart. He also used to have a white tie, but it seems to have disappeared. Originally drawn by Ken Reid, Gordon Bell took over in 1959, but Roger dodged his way out of the Beano in 1960. He returned, drawn by Bob McGrath, in April 1961. Ken Reid was re-commissioned to draw the strip in 1962, and Robert Nixon when Reid left D. C. Thomson & Co. in 1964. When Nixon left in 1973, Tom Lavery began drawing the strip, who was then followed by Frank McDiarmid in 1976.
Ten years later, after Euan Kerr took over as Beano editor, Nixon returned, drawing in a noticeably different style from the one before. Roger's strip was given a second page in 1986. Between 1986 and 1992, a spin-off strip appeared at the end called "Roger the Dodger's Dodge Clinic". Readers would write in with problems, and Roger would try to find a dodge for it (which would usually go wrong). Winning suggestions would win a transistor radio and special scroll. Roger is often shown in other Beano characters' stories offering "help", which he took to a new level in Beano issue 2648 from April 1993. This issue marked Roger's 40th birthday, and to celebrate he made appearances in every strip in the comic.
Nixon continued drawing it until his death in October 2002, though due to the strips being drawn months in advance, his strips continued appearing in the Beano until the end of January 2003, when artist Barrie Appleby took over. He drew the strip until 2011, when he stopped to concentrate on Dennis and Gnasher, though Trevor Metcalfe drew a few strips in 2003 and 2004, and there have also been some Robert Nixon reprints during 2005 and 2006. Since Appleby stopped drawing Roger, the comic has run reprints of Robert Nixon strips from the 1980s. Along with the Nixon reprints, Roger's Dodge Diary was introduced on the second half of Roger's pages, where Beano readers can send in their own dodges. In each one, Roger says a good thing, a bad thing and the results of the dodge. When the Beano was revamped on 8 August 2012, Appleby started drawing Roger again and Roger's parents were made younger. In the 75th birthday issue released on 24 July 2013, Jamie Smart took over as artist. On 9 April 2014, Wayne Thompson replaced Jamie Smart as Roger's artist, until Barrie Appleby returned to draw the strip temporarily, before Wayne Thompson surprisingly returned. In 2017, writing duties for the strip were taken over by Danny Pearson.
Roger is currently the second longest-running character in the Beano, behind only Dennis the Menace. However, if the strip's absence in 1960 is taken into account, he would be the third longest-running, behind Minnie the Minx.
Roger is a crafty-looking ten-year-old boy who sports a red-and-black chequered jersey with a white shirt collar poking out. He was also usually attired in a white tie, though the Barrie Appleby strips dropped this around 2005. Upon his debut, Roger sported the usual schoolboy shorts, but he began wearing trousers during the 1970s.
Roger, unlike most Beano characters, does not go out of his way to cause chaos and mayhem. Instead, he chooses to watch from the sidelines, dodging responsibilities and punishments.
18 April 1953: Roger The Dodger made his debut in issue 561, drawn by Ken Reid.
1959: Gordon Bell becomes the artist.
1960: Roger's first series ends.
April 1961: Roger returns to the Beano, drawn by Bob McGrath.
1962: Reid is artist again.
1964: Robert Nixon takes over.
1973: Tom Lavery takes over.
1976: Frank McDiarmid takes over.
1986: Nixon returns to draw Roger again; the strip moves to two pages and Roger The Dodger's Dodge Clinic is introduced.
1992: Roger The Dodger's Dodge Clinic ends.
April 1993: Roger's 40th anniversary is celebrated.
January 2003: Barrie Appleby takes over, after Nixon's death.
2011: Appleby stops drawing Roger to focus on Dennis and Gnasher, and Roger's Dodge Diary is introduced alongside Nixon reprints.
2012: Appleby resumes drawing Roger after Nigel Parkinson takes over Dennis and Gnasher.
July 2013: Jamie Smart takes over as artist.
April 2014: Wayne Thompson takes over as artist.
July 2014: Barrie Appleby returns as artist.
In 1994, Roger was set to feature in The Beano Videostars; he appears on the VHS cover, but not in the video itself (likely because his chequered jersey made him too hard to animate). He made a non-speaking appearance in an advert for the Beano comic with Dennis the Menace and Gnasher, Minnie the Minx, "Billy Whizz", Teacher from "The Bash Street Kids" and "Biffo the Bear". Roger the Dodger has never had a voice actor.
Robert Nozick
Robert Nozick (; November 16, 1938 – January 23, 2002) was an American philosopher. He held the Joseph Pellegrino University Professorship at Harvard University, and was president of the American Philosophical Association. He is best known for his books "Philosophical Explanations" (1981), which included his counterfactual theory of knowledge, and "Anarchy, State, and Utopia" (1974), a libertarian answer to John Rawls' "A Theory of Justice" (1971), in which Nozick also presented his own theory of utopia as one in which people can freely choose the rules of the society they enter into. His other work involved ethics, decision theory, philosophy of mind, metaphysics and epistemology. His final work before his death, "Invariances" (2001), introduced his theory of evolutionary cosmology, by which he argues that invariances, and hence objectivity itself, emerged through evolution across possible worlds.
Nozick was born in Brooklyn to a family of Jewish descent. His mother was born Sophie Cohen, and his father was a Jew from a Russian shtetl who had been born with the name Cohen and who ran a small business.
Nozick attended the public schools in Brooklyn. He was then educated at Columbia University (A.B. 1959, "summa cum laude"), where he studied with Sidney Morgenbesser, and later at Princeton University (Ph.D. 1963) under Carl Hempel, and at Oxford University as a Fulbright Scholar (1963–1964). At one point he joined the youth branch of Norman Thomas's Socialist Party. In addition, at Columbia he founded the local chapter of the Student League for Industrial Democracy, which in 1962 changed its name to Students for a Democratic Society.
That same year, after receiving his bachelor of arts degree in 1959, he married Barbara Fierer. They had two children, Emily and David. The Nozicks eventually divorced and he remarried, to the poet Gjertrud Schnackenberg. Nozick died in 2002 after a prolonged struggle with stomach cancer.
Richard III of England
Richard III (2 October 1452 – 22 August 1485) was King of England and Lord of Ireland from 1483 until his death in 1485. He was the last king of the House of York and the last of the Plantagenet dynasty. His defeat and death at the Battle of Bosworth Field, the last decisive battle of the Wars of the Roses, marked the end of the Middle Ages in England. He is the protagonist of "Richard III", one of William Shakespeare's history plays.
When his brother Edward IV died in April 1483, Richard was named Lord Protector of the realm for Edward's eldest son and successor, the 12-year-old Edward V. Arrangements were made for Edward's coronation on 22 June 1483. Before the king could be crowned, the marriage of his parents was declared bigamous and therefore invalid. Now officially illegitimate, their children were barred from inheriting the throne. On 25 June, an assembly of lords and commoners endorsed a declaration to this effect and proclaimed Richard as the rightful king. He was crowned on 6 July 1483. The young princes, Edward and his younger brother Richard, Duke of York, were not seen in public after August and accusations circulated that they had been murdered on Richard's orders.
There were two major rebellions against Richard during his reign. In October 1483, an unsuccessful revolt was led by staunch allies of Edward IV and Richard's former ally, Henry Stafford, 2nd Duke of Buckingham. Then in August 1485, Henry Tudor and his uncle, Jasper Tudor, landed in southern Wales with a contingent of French troops and marched through Pembrokeshire, recruiting soldiers. Henry's forces defeated Richard's army near the Leicestershire town of Market Bosworth. Richard was slain, making him the last English king to die in battle. Henry Tudor then ascended the throne as Henry VII.
Richard's corpse was taken to the nearby town of Leicester and buried without pomp. His original tomb monument is believed to have been removed during the English Reformation, and his remains were lost, as they were believed to have been thrown into the River Soar. In 2012, an archaeological excavation was commissioned by the Richard III Society on the site previously occupied by Greyfriars Priory Church. The University of Leicester identified the skeleton found in the excavation as that of Richard III as a result of radiocarbon dating, comparison with contemporary reports of his appearance, and comparison of his mitochondrial DNA with that of two matrilineal descendants of his eldest sister, Anne of York, Duchess of Exeter. He was reburied in Leicester Cathedral on 26 March 2015.
Richard was born on 2 October 1452 at Fotheringhay Castle in Northamptonshire, the eleventh of the twelve children of Richard, Duke of York, and Cecily Neville, and the youngest to survive infancy. His childhood coincided with the beginning of what has traditionally been labelled the 'Wars of the Roses', a period of political instability and periodic open civil war in England during the second half of the fifteenth century, between the Yorkists, who supported Richard's father (a potential claimant to the throne of King Henry VI from birth), and opposed the regime of Henry VI and his wife, Margaret of Anjou, and the Lancastrians, who were loyal to the crown. In 1459, his father and the Yorkists were forced to flee England, whereupon Richard and his older brother George were placed in the custody of their aunt the Duchess of Buckingham, and possibly of the Archbishop of Canterbury.
When their father and elder brother Edmund, Earl of Rutland, were killed at the Battle of Wakefield on 30 December 1460, Richard and George were sent by their mother to the Low Countries. They returned to England following the defeat of the Lancastrians at the Battle of Towton. They participated in the coronation of Richard's eldest brother as King Edward IV on 28 June 1461, when Richard was named Duke of Gloucester and made both a Knight of the Garter and a Knight of the Bath. Edward appointed him the sole Commissioner of Array for the Western Counties in 1464 when he was 11. By the age of 17, he had an independent command.
Richard spent several years during his childhood at Middleham Castle in Wensleydale, Yorkshire, under the tutelage of his cousin the Earl of Warwick, later known as 'the Kingmaker' because of his role in the Wars of the Roses. Warwick supervised Richard's training as a knight; in the autumn of 1465 Edward IV granted Warwick £1000 for the expenses of his younger brother's tutelage. With some interruptions, Richard stayed at Middleham either from late 1461 until early 1465, when he was 12, or from 1465 until his coming of age in 1468, when he turned 16. While at Warwick's estate, it is likely that he met both Francis Lovell, who would be his firm supporter later in his life, and Warwick's younger daughter, his future wife Anne Neville.
It is possible that even at this early stage Warwick was considering the king's brothers as strategic matches for his daughters, Isabel and Anne: young aristocrats were often sent to be raised in the households of their intended future partners, as had been the case for the young dukes' father, Richard of York. As the relationship between the king and Warwick became strained, Edward IV opposed the match. During Warwick's lifetime, George was the only royal brother to marry one of his daughters, the eldest, Isabel, on 12 July 1469, without the king's permission. George joined his father-in-law's revolt against the king, while Richard remained loyal to Edward, even though rumour coupled Richard's name with Anne Neville until August 1469.
Richard and Edward were forced to flee to Burgundy in October 1470 after Warwick defected to the side of the former Lancastrian queen Margaret of Anjou. In 1468, Richard's sister Margaret had married Charles the Bold, the Duke of Burgundy, and the brothers could expect a welcome there. Edward was restored to the throne in the spring of 1471, following the battles of Barnet and Tewkesbury, in both of which the eighteen-year-old Richard played a crucial role.
During his adolescence, and for reasons unknown, Richard developed a sideways curvature of the spine. In 2014, after the discovery of Richard's remains, the osteoarchaeologist Dr. Jo Appleby, of Leicester University's School of Archaeology and Ancient History, imaged the spinal column and reconstructed a model using 3D printing, and concluded that though the spinal scoliosis looked dramatic, it probably did not cause any major physical deformity that could not be disguised by clothing.
Following a decisive Yorkist victory over the Lancastrians at the Battle of Tewkesbury, Richard married Anne Neville, the younger daughter of the Earl of Warwick, on 12 July 1472. Anne had previously been wedded, by the end of 1470, to Edward of Westminster, only son of Henry VI, to seal her father's allegiance to the Lancastrian party. Edward died at the Battle of Tewkesbury on 4 May 1471, while Warwick had died at the Battle of Barnet on 14 April 1471. Richard's marriage plans brought him into conflict with his brother George. John Paston's letter of 17 February 1472 makes it clear that George was not happy about the marriage but grudgingly accepted it on the basis that "he may well have my Lady his sister-in-law, but they shall part no livelihood". The reason was the inheritance Anne shared with her elder sister Isabel, whom George had married in 1469. It was not only the earldom that was at stake; Richard Neville had inherited it as a result of his marriage to Anne Beauchamp, who was still alive (and outlived both her daughters) and was technically the owner of the substantial Beauchamp estates, her own father having left no male heirs.
The Croyland Chronicle records that Richard agreed to a prenuptial contract in the following terms: "the marriage of the Duke of Gloucester with Anne before-named was to take place, and he was to have such and so much of the earl's lands as should be agreed upon between them through the mediation of arbitrators; while all the rest were to remain in the possession of the Duke of Clarence".
The date of Paston's letter suggests the marriage was still being negotiated in February 1472. In order to win his brother George's final consent to the marriage, Richard renounced most of Warwick's land and property including the earldoms of Warwick (which the Kingmaker had held in his wife's right) and Salisbury and surrendered to Clarence the office of Great Chamberlain of England. Richard retained Neville's forfeit estates he had already been granted in the summer of 1471: Penrith, Sheriff Hutton and Middleham, where he later established his marital household.
The requisite papal dispensation was obtained dated 22 April 1472. Michael Hicks has suggested that the terms of the dispensation deliberately understated the degrees of consanguinity between the couple, and the marriage was therefore illegal on the ground of first-degree consanguinity following George's marriage to Anne's sister Isabel. First-degree consanguinity applied in the case of Henry VIII and his brother's widow Catherine of Aragon. In their case, the papal dispensation was obtained after Catherine declared the first marriage had not been consummated. In Richard's case, there would have been first-degree consanguinity if Richard had sought to marry Isabel (in case of widowhood) after she had married his brother George, but no such consanguinity applied for Anne and Richard. Richard's marriage to Anne was never declared null, and it was public to everyone including secular and canon lawyers for 13 years.
In June 1473, Richard persuaded his mother-in-law to leave the sanctuary and come to live under his protection at Middleham. Later in the year, under the terms of the 1473 Act of Resumption, George lost some of the property he held under royal grant and made no secret of his displeasure. John Paston's letter of November 1473 says that the king planned to put both his younger brothers in their place by acting as "a stifler atween them".
Early in 1474, Parliament assembled and King Edward attempted to reconcile his brothers by stating that both men, and their wives, would enjoy the Warwick inheritance just as if the Countess of Warwick "was naturally dead". The doubts cast by Clarence on the validity of Richard and Anne's marriage were addressed by a clause protecting their rights in the event they were divorced (i.e. of their marriage being declared null and void by the Church) and then legally remarried to each other, and also protected Richard's rights while waiting for such a valid second marriage with Anne. The following year, Richard was rewarded with all the Neville lands in the north of England, at the expense of Anne's cousin, George Neville. From this point, Clarence seems to have fallen steadily out of King Edward's favour, his discontent coming to a head in 1477 when, following Isabel's death, he was denied the opportunity to marry Mary of Burgundy, the stepdaughter of his sister Margaret, even though Margaret approved the proposed match. There is no evidence of Richard's involvement in George's subsequent conviction and execution on a charge of treason.
Richard was granted the Duchy of Gloucester on 1 November 1461, and on 12 August the next year was awarded large estates in northern England, including the lordships of Richmond in Yorkshire, and Pembroke in Wales. He gained the forfeited lands of the Lancastrian John de Vere, Earl of Oxford, in East Anglia. In 1462, on his birthday, he was made Constable of Gloucester and Corfe Castles and Admiral of England, Ireland and Aquitaine and appointed Governor of the North, becoming the richest and most powerful noble in England. On 17 October 1469, he was made Constable of England. In November, he replaced William Hastings, 1st Baron Hastings, as Chief Justice of North Wales. The following year, he was appointed Chief Steward and Chamberlain of Wales. On 18 May 1471, Richard was named Great Chamberlain and Lord High Admiral of England. Other positions followed: High Sheriff of Cumberland for life, Lieutenant of the North and Commander-in-Chief against the Scots and hereditary Warden of the West March. Two months later, on 14 July, he gained the Lordships of the strongholds Sheriff Hutton and Middleham in Yorkshire and Penrith in Cumberland, which had belonged to Warwick the Kingmaker. It is possible that the grant of Middleham accorded with Richard's personal wishes.
During the latter part of Edward IV's reign, Richard demonstrated his loyalty to the king, in contrast to their brother George, who had allied himself with Warwick when the earl rebelled towards the end of the 1460s. Following Warwick's 1470 rebellion, before which he had made peace with Margaret of Anjou and promised the restoration of Henry VI to the English throne, Richard, William, Lord Hastings, and Anthony Woodville, Earl Rivers, escaped capture at Doncaster by Warwick's brother, Lord Montague. On 2 October they sailed from King's Lynn in two ships; Edward landed at Marsdiep and Richard at Zeeland. It was said that, having left England in such haste as to possess almost nothing, Edward was forced to pay their passage with his fur cloak; certainly, Richard borrowed three pounds from Zeeland's town bailiff. They were attainted by Warwick's only Parliament on 26 November. They resided in Bruges with Louis de Gruthuse, who had been the Burgundian Ambassador to Edward's court, but it was not until Louis XI of France declared war on Burgundy that Charles, Duke of Burgundy, assisted their return, providing, along with the Hanseatic merchants, £20,000, 36 ships and 1200 men. They departed Flushing for England on 11 March 1471. Warwick's arrest of local sympathisers prevented them from landing in Yorkist East Anglia and on 14 March, after being separated in a storm, their ships ran ashore at Holderness. The town of Hull refused Edward entry, but he gained admission to York by using the same claim as Henry of Bolingbroke had before deposing Richard II in 1399; that is, that he was merely reclaiming the Dukedom of York rather than the crown. It was in Edward's attempt to regain his throne that Gloucester began to demonstrate his skill as a military commander.
Once Edward had regained the support of Clarence, he mounted a swift and decisive campaign to regain the Crown through combat; it is believed that Richard was his principal lieutenant as some of the king's earliest support came from members of Richard's affinity, including Sir James Harrington and Sir William Parr, who brought 600 men-at-arms to them at Doncaster. He may have led the vanguard at the Battle of Barnet, in his first command, on 14 April 1471, where he outflanked the Duke of Exeter's wing, although the degree to which his command was fundamental may have been exaggerated. That his personal household sustained losses indicates he was in the thick of the fighting. A contemporary source is clear about his holding the vanguard for Edward at Tewkesbury, deployed against the Lancastrian vanguard under the Duke of Somerset on 4 May 1471, and his role two days later, as Constable of England, sitting alongside John Howard as Earl Marshal, in the trial and sentencing of leading Lancastrians captured after the battle.
At least in part resentful of the French king's previous support of his Lancastrian opponents, and possibly in support of his brother-in-law Charles the Bold, Duke of Burgundy, Edward asked Parliament in October 1472 to fund a military campaign, and eventually landed in Calais on 4 July 1475. Richard's was the largest private contingent of his army. Although well known to have publicly been against the eventual treaty signed with Louis XI at Picquigny (and absent from the negotiations, in which one of his rank would have been expected to take a leading role), he acted as Edward's witness when the king instructed his delegates to the French court, and received 'some very fine presents' from Louis on a visit to the French king at Amiens. In refusing other gifts, which included 'pensions' in the guise of 'tribute', he was joined only by Cardinal Bourchier. He supposedly disapproved of Edward's policy of personally benefitting—politically and financially—from a campaign paid for out of a parliamentary grant, and hence out of public funds. Any military prowess was therefore not to be revealed further until the last years of Edward's reign.
Richard controlled the north of England until Edward IV's death. There, and especially in the city of York, he was highly regarded; although it has been questioned whether this view was reciprocated by Richard. Edward IV set up the Council of the North as an administrative body in 1472 to improve government control and economic prosperity and benefit the whole of Northern England. Kendall and later historians have suggested that this was with the intention of making Richard the "Lord of the North"; Peter Booth, however, has argued that "instead of allowing his brother the Duke of Gloucester "carte blanche", [Edward] restricted his influence by using his own agent, Sir William Parr." Richard served as its first Lord President from 1472 until his accession to the throne. On his accession, he made his nephew John de la Pole, 1st Earl of Lincoln, president and formally institutionalised it as an offshoot of the royal Council; all its letters and judgements were issued on behalf of the king and in his name. The council had a budget of 2000 marks per annum (approximately £1320) and had issued "Regulations" by July of that year: councillors to act impartially and declare vested interests, and to meet at least every three months. Its main focus of operations was Yorkshire and the north-east, and its primary responsibilities were land disputes, keeping of the king's peace, and punishing lawbreakers.
Richard's increasing role in the north from the mid-1470s to some extent explains his withdrawal from the royal court. He had been Warden of the West March on the Scottish border since 10 September 1470, and again from May 1471; he used Penrith as a base while 'taking effectual measures' against the Scots, and 'enjoyed the revenues of the estates' of the Forest of Cumberland while doing so. At the same time the duke was appointed sheriff of Cumberland for five consecutive years, being described as 'of Penrith Castle' in 1478. By 1480, war with Scotland was looming; on 12 May that year he was appointed Lieutenant-General of the North (a position created for the occasion) as fears of a Scottish invasion grew. Louis XI of France had attempted to negotiate a military alliance with Scotland (in the tradition of the "Auld Alliance"), with the aim of attacking England, according to a contemporary French chronicler. Richard had the authority to summon the Border Levies and issue Commissions of Array to repel the Border raids. Together with the Earl of Northumberland, he launched counter-raids, and when the king and council formally declared war in November 1480, he was granted £10,000 for wages. The king failed to arrive to lead the English army and the result was intermittent skirmishing until early 1482. Richard witnessed the treaty with Alexander, Duke of Albany, brother of the Scottish king James III. Northumberland, Stanley, Dorset, Sir Edward Woodville, and Richard with approximately 20,000 men took the town of Berwick almost immediately. The castle held until 24 August 1482, when Richard recaptured Berwick-upon-Tweed from the Kingdom of Scotland. Although it is debatable whether the English victory was due more to internal Scottish divisions than to any outstanding military prowess by Richard, it was the last time that the Royal Burgh of Berwick changed hands between the two realms.
On the death of Edward IV on 9 April 1483, his 12-year-old son, Edward V, succeeded him. Richard was named Lord Protector of the Realm and at William Hastings' urging, Richard assumed his role and left his base in Yorkshire for London. On 29 April, as previously agreed, Richard and his cousin, the Duke of Buckingham, met Queen Elizabeth's brother, Anthony Woodville, 2nd Earl Rivers, at Northampton. At the queen's request, Earl Rivers was escorting the young king to London with an armed escort of 2000 men, while Richard and Buckingham's joint escort was 600 men.
The young king himself had been sent further south to Stony Stratford. At first convivial, Richard had Earl Rivers, his nephew Richard Grey and his associate, Thomas Vaughan, arrested. They were taken to Pontefract Castle, where they were executed on 25 June on the charge of treason against the Lord Protector after appearing before a tribunal led by Henry Percy, 4th Earl of Northumberland. Earl Rivers had appointed Richard as executor of his will.
After having Earl Rivers arrested, Richard and Buckingham moved to Stony Stratford, where Richard informed the young king of a plot aimed at denying him his role as protector and whose perpetrators had been dealt with. He proceeded to escort the king to London. They entered the city on 4 May, displaying the carriages of weapons Earl Rivers had taken with his 2000-man army. Richard first accommodated Edward in the Bishop's apartments; then, on Buckingham's suggestion, the king was moved to the royal apartments of the Tower of London, where kings customarily awaited their coronation.
Within the year 1483, Richard had moved himself to the grandeur of Crosbyes Place (Crosby Hall), then in Bishopsgate in the City of London. Robert Fabyan, in his "The new chronicles of England and of France", writes that "the Duke caused the King" (Edward V) "to be removed unto the Tower and his broder with hym, and the Duke lodged himselfe in Crosbyes Place in Bisshoppesgate Strete." "Holinshed's Chronicles" of England, Scotland, and Ireland recounts that "little by little all folke withdrew from the Tower, and drew unto Crosbies in Bishops gates Street, where the Protector kept his houshold. The Protector had the resort; the King in maner desolate."
On hearing the news of her brother's 30 April arrest, the dowager queen fled to sanctuary in Westminster Abbey. Joining her were her son by her first marriage, Thomas Grey, 1st Marquess of Dorset; her five daughters; and her youngest son, Richard, Duke of York.
On 10/11 June, Richard wrote to Ralph, Lord Neville, the City of York and others asking for their support against "the Queen, her blood adherents and affinity," whom he suspected of plotting his murder. At a council meeting on 13 June at the Tower of London, Richard accused Hastings and others of having conspired against him with the Woodvilles, and accused Jane Shore, lover to both Hastings and Thomas Grey, of acting as a go-between. According to Thomas More, Hastings was taken out of the council chambers and summarily executed in the courtyard, while others, like Lord Thomas Stanley and John Morton, Bishop of Ely, were arrested. Hastings was not attainted and Richard sealed an indenture that placed Hastings' widow, Katherine, directly under his own protection. Bishop Morton was released into the custody of Buckingham.
On 16 June, the dowager queen agreed to hand over the Duke of York to the Archbishop of Canterbury so that he might attend his brother Edward's coronation, still planned for 22 June.
A clergyman is said to have informed Richard that Edward IV's marriage to Elizabeth Woodville was invalid because of Edward's earlier union with Eleanor Butler, making Edward V and his siblings illegitimate. The informant, whose identity is known only through the memoirs of the French diplomat Philippe de Commines, was Robert Stillington, the Bishop of Bath and Wells. On 22 June, a sermon was preached outside Old St. Paul's Cathedral declaring Edward IV's children bastards and Richard the rightful king. Shortly after, the citizens of London, both nobles and commons, convened and drew up a petition asking Richard to assume the throne. He accepted on 26 June and was crowned at Westminster Abbey on 6 July. His title to the throne was confirmed by Parliament in January 1484 by the document "Titulus Regius".
The princes, who were still lodged in the royal residence of the Tower of London at the time of Richard's coronation, disappeared from sight after the summer of 1483. Although after his death Richard III was accused of having Edward and his brother killed, notably by More and in Shakespeare's play, the facts surrounding their disappearance remain unknown. Other culprits have been suggested, including Buckingham and even Henry VII, although Richard remains a suspect.
After the coronation ceremony, Richard and Anne set out on a royal progress to meet their subjects. During this journey through the country, the king and queen endowed King's College and Queens' College at Cambridge University, and made grants to the church. Still feeling a strong bond with his northern estates, Richard later planned the establishment of a large chantry chapel in York Minster with over 100 priests. Richard also founded the College of Arms.
In 1483, a conspiracy arose among a number of disaffected gentry, many of whom had been supporters of Edward IV and the "whole Yorkist establishment". The conspiracy was nominally led by Richard's former ally and first cousin once removed Henry Stafford, 2nd Duke of Buckingham, although it had begun as a Woodville-Beaufort conspiracy (being "well underway" by the time of the duke's involvement). Indeed, Davies has suggested that it was "only the subsequent parliamentary attainder that placed Buckingham at the centre of events", in order to blame a single disaffected magnate motivated by greed, rather than "the embarrassing truth" that those opposing Richard were actually "overwhelmingly Edwardian loyalists". It is possible that they planned to depose Richard III and place Edward V back on the throne, and that when rumours arose that Edward and his brother were dead, Buckingham proposed that Henry Tudor should return from exile, take the throne and marry Elizabeth of York, elder sister of the Tower Princes. However, it has also been pointed out that as this narrative stems from Richard's own parliament of 1484, it should probably be treated "with caution". For his part, Buckingham raised a substantial force from his estates in Wales and the Marches. Henry, in exile in Brittany, enjoyed the support of the Breton treasurer Pierre Landais, who hoped Buckingham's victory would cement an alliance between Brittany and England.
Some of Henry Tudor's ships ran into a storm and were forced to return to Brittany or Normandy, while Henry himself anchored off Plymouth for a week before learning of Buckingham's failure. Buckingham's army was troubled by the same storm and deserted when Richard's forces came against them. Buckingham tried to escape in disguise, but was either turned in by a retainer for the bounty Richard had put on his head, or was discovered in hiding with him. He was convicted of treason and beheaded in Salisbury, near the Bull's Head Inn, on 2 November. His widow, Catherine Woodville, later married Jasper Tudor, the uncle of Henry Tudor, who was in the process of organising another rebellion.
Richard made overtures to Landais, offering military support for Landais's weak regime under Duke Francis II of Brittany in exchange for Henry. Henry fled to Paris, where he secured support from the French regent Anne of Beaujeu, who supplied troops for an invasion in 1485. The French government, recalling Richard's effective disowning of the Treaty of Picquigny and refusal to accept the accompanying French pension, would not have welcomed the accession of one known to be unfriendly to France.
On 22 August 1485, Richard met the outnumbered forces of Henry Tudor at the Battle of Bosworth Field. Richard rode a white courser. The size of Richard's army has been estimated at 8,000 and Henry's at 5,000, but exact numbers are not known. All that can be said is that the Royal army 'substantially' outnumbered Tudor's. The traditional view of the king's famous cries of "Treason!" before falling was that during the battle Richard was abandoned by Lord Stanley (made Earl of Derby in October), Sir William Stanley, and Henry Percy, 4th Earl of Northumberland. However, the role of Northumberland is unclear; his position was with the reserve—behind the king's line—and he could not easily have moved forward without a general royal advance, which did not take place. Indeed, the physical confines behind the crest of Ambion Hill, combined with a difficulty of communications, probably physically hampered any attempt he made to join the fray. Despite appearing "a pillar of the Ricardian regime", and his previous loyalty to Edward IV, Lord Stanley's wife, Lady Margaret Beaufort, was Henry Tudor's mother, and Stanley's inaction, combined with his brother's entering the battle on Tudor's behalf was fundamental to Richard's defeat. The death of John Howard, Duke of Norfolk, his close companion, may have had a demoralising effect on Richard and his men. Either way, Richard led a cavalry charge deep into the enemy ranks in an attempt to end the battle quickly by striking at Henry Tudor himself.
Accounts note that King Richard fought bravely and ably during this manoeuvre, unhorsing Sir John Cheyne, a well-known jousting champion, killing Henry's standard bearer Sir William Brandon and coming within a sword's length of Henry Tudor before being surrounded by Sir William Stanley's men and killed. The Burgundian chronicler Jean Molinet says that a Welshman struck the death-blow with a halberd while Richard's horse was stuck in the marshy ground. It was said that the blows were so violent that the king's helmet was driven into his skull. The contemporary Welsh poet Guto'r Glyn implies a leading Welsh Lancastrian Rhys ap Thomas, or one of his men, killed the king, writing that he "killed the boar, shaved his head". The identification in 2013 of King Richard's body shows that the skeleton had 11 wounds, eight of them to the skull, clearly inflicted in battle and suggesting he had lost his helmet. Professor Guy Rutty, from the University of Leicester, said: "The most likely injuries to have caused the king's death are the two to the inferior aspect of the skull—a large sharp force trauma possibly from a sword or staff weapon, such as a halberd or bill, and a penetrating injury from the tip of an edged weapon." The skull showed that a blade had hacked away part of the rear of the skull. Richard III was the last English king to be killed in battle.
Polydore Vergil, Henry Tudor's official historian, recorded that "King Richard, alone, was killed fighting manfully in the thickest press of his enemies". Richard's naked body was then carried back to Leicester tied to a horse, and early sources strongly suggest that it was displayed in the collegiate Church of the Annunciation of Our Lady of the Newarke, prior to being buried at Greyfriars Church in Leicester. In 1495, Henry VII paid for a marble and alabaster monument. According to a discredited tradition, during the Dissolution of the Monasteries, his body was thrown into the River Soar, although other evidence suggests that a memorial stone was visible in 1612, in a garden built on the site of Greyfriars. The exact location was then lost, owing to more than 400 years of subsequent development, until archaeological investigations in 2012 (see the Discovery of remains section) revealed the site of the garden and Greyfriars church. There was a memorial ledger stone in the choir of the cathedral, since replaced by the tomb of the king, and a stone plaque on Bow Bridge where tradition had falsely suggested that his remains had been thrown into the river.
According to another tradition, Richard consulted a seer in Leicester before the battle who foretold that "where your spur should strike on the ride into battle, your head shall be broken on the return". On the ride into battle, his spur struck the bridge stone of Bow Bridge in the city; legend states that as his corpse was carried from the battle over the back of a horse, his head struck the same stone and was broken open.
Henry Tudor succeeded Richard to become Henry VII and sought to cement the succession by marrying the Yorkist heiress Elizabeth of York, Edward IV's daughter and Richard III's niece.
Richard and Anne had one son, Edward, who was born between 1474 and 1476. He was created Earl of Salisbury on 15 February 1478, and Prince of Wales on 24 August 1483, and died in March 1484, less than two months after he had been formally declared heir apparent. After his son's death, Richard named his nephew Edward, Earl of Warwick as his heir. After his wife's death, he named another nephew, John de la Pole, Earl of Lincoln, the son of his sister Elizabeth, as his successor, and commenced negotiations with John II of Portugal to marry John's sister, Joanna, a pious young woman who had already turned down several suitors because of her preference for the religious life.
Richard had two acknowledged illegitimate children, John of Gloucester and Katherine Plantagenet. Also known as 'John of Pontefract', John of Gloucester was appointed Captain of Calais in 1485. Katherine married William Herbert, 2nd Earl of Pembroke in 1484. Neither the birth dates nor the names of the mothers of the two children are known. Katherine was old enough to be wedded in 1484, when the age of consent was twelve, and John was knighted in September 1483 in York Minster, and so most historians agree that they were both fathered when Richard was a teenager. There is no evidence of infidelity on Richard's part after his marriage to Anne Neville in 1472, when he was around 20. This has led the historian A. L. Rowse to suggest that Richard "had no interest in sex".
Michael Hicks and Josephine Wilkinson have suggested that Katherine's mother may have been Katherine Haute, on the basis of the grant of an annual payment of 100 shillings made to her in 1477. The Haute family was related to the Woodvilles through the marriage of Elizabeth Woodville's aunt, Joan Woodville, to William Haute. One of their children was Richard Haute, Controller of the Prince's Household. Their daughter, Alice, married Sir John Fogge; they were ancestors to queen consort Catherine Parr, sixth wife of King Henry VIII. They also suggest that John's mother may have been Alice Burgh. Richard visited Pontefract from 1471, in April and October 1473, and in early March 1474, for a week. On 1 March 1474, he granted Alice Burgh £20 a year for life "for certain special causes and considerations". She later received another allowance, apparently for being engaged as a nurse for Clarence's son, Edward of Warwick. Richard continued her annuity when he became king. John Ashdown-Hill has suggested that John was conceived during Richard's first solo expedition to the eastern counties in the summer of 1467 at the invitation of John Howard and that the boy was born in 1468 and named after his friend and supporter. Richard himself noted John was still a minor (not being yet 21) when he issued the royal patent appointing him Captain of Calais on 11 March 1485, possibly on his seventeenth birthday.
Both of Richard's illegitimate children survived him, but they seem to have died without issue and their fate after Richard's demise at Bosworth is not certain. John received a £20 annuity from Henry VII, but there are no mentions of him in contemporary records after 1487 (the year of the Battle of Stoke Field). He may have been executed in 1499, though no record of this exists beyond an assertion by George Buck over a century later. Katherine apparently died before her cousin Elizabeth of York's coronation on 25 November 1487, since her husband Sir William Herbert is described as a widower by that time. Katherine's burial place was located in the London parish church of St James Garlickhithe, between Skinner's Lane and Upper Thames Street. The mysterious Richard Plantagenet, who was first mentioned in Francis Peck's "Desiderata Curiosa" (a two-volume miscellany published 1732–1735) was said to be a possible illegitimate child of Richard III and is sometimes referred to as "Richard the Master-Builder" or "Richard of Eastwell", but it has also been suggested he could have been Richard, Duke of York, one of the missing Princes in the Tower. He died in 1550.
Richard's Council of the North, described as his "one major institutional innovation", derived from his ducal council following his own viceregal appointment by Edward IV; when Richard himself became king, he maintained the same conciliar structure in his absence. It officially became part of the royal council machinery under the presidency of John de la Pole, Earl of Lincoln in April 1484, based at Sandal Castle in Wakefield. It is considered to have greatly improved conditions for northern England, as it was, in theory at least, intended to keep the peace and punish lawbreakers, as well as resolving land disputes. Bringing regional governance directly under the control of central government, it has been described as the king's "most enduring monument", surviving unchanged until 1641.
In December 1483, Richard instituted what later became known as the Court of Requests, a court to which poor people who could not afford legal representation could apply for their grievances to be heard. He also improved bail in January 1484, to protect suspected felons from imprisonment before trial and to protect their property from seizure during that time. He founded the College of Arms in 1484, banned restrictions on the printing and sale of books, and ordered the translation of the written Laws and Statutes from the traditional French into English. He ended the arbitrary benevolence (a device by which Edward IV raised funds), made it punishable to conceal from a buyer of land that a part of the property had already been disposed of to somebody else, required that land sales be published, laid down property qualifications for jurors, restricted the abusive Courts of Piepowders, regulated cloth sales, instituted certain forms of trade protectionism, prohibited the sale of wine and oil in fraudulent measure, and prohibited fraudulent collection of clergy dues, among other measures. Churchill implies he improved the law of trusts.
Richard's death at Bosworth resulted in the end of the Plantagenet dynasty, which had ruled England since the succession of Henry II in 1154. The last legitimate male Plantagenet, Richard's nephew, Edward, Earl of Warwick (son of Richard III's brother Clarence), was executed by Henry VII in 1499. The only extant direct male line of Plantagenets is the House of Beaufort, headed today by Henry Somerset, 12th Duke of Beaufort, but the Beaufort line was barred from the succession by Henry IV.
There are numerous contemporary, or near-contemporary, sources of information about the reign of Richard III. These include the "Croyland Chronicle", Commines' "Mémoires", the report of Dominic Mancini, the Paston Letters, the Chronicles of Robert Fabyan and numerous court and official records, including a few letters by Richard himself. However, the debate about Richard's true character and motives continues, both because of the subjectivity of many of the written sources, reflecting the generally partisan nature of writers of this period, and because of the fact that none was written by men with an intimate knowledge of Richard, even if they had met him in person.
During Richard's reign, the historian John Rous praised him as a "good lord" who punished "oppressors of the commons", adding that he had "a great heart". In 1483 the Italian observer Mancini reported that Richard enjoyed a good reputation and that both "his private life and public activities powerfully attracted the esteem of strangers". His bond to the City of York, in particular, was such that on hearing of Richard's demise at the battle of Bosworth the City Council officially deplored the King's death, at the risk of facing the victor's wrath.
During his lifetime he was the subject of some attacks. Even in the North in 1482 a man was prosecuted for offences against the Duke of Gloucester, saying he did 'nothing but grin at' the city of York. In 1484 such attacks took the form of hostile placards, the only surviving one being William Collingbourne's lampoon of July 1484, "The Cat, the Rat, and Lovell the Dog, all rule England under a Hog", which was pinned to the door of St. Paul's Cathedral and referred to the King himself (the Hog) and his most trusted councillors William Catesby, Richard Ratcliffe and Francis, Viscount Lovell. On 30 March 1485 Richard felt forced to summon the Lords and London City Councillors to publicly deny the rumours that he had poisoned Queen Anne and that he had planned a marriage to his niece Elizabeth, at the same time ordering the Sheriff of London to imprison anyone spreading such slanders. The same orders were issued throughout the realm, including York, where the royal pronouncement recorded in the City Records is dated 5 April 1485 and carries specific instructions to suppress seditious talk and to remove and destroy evidently hostile placards unread.
As for Richard's physical appearance, most contemporary descriptions indicate that, aside from having one shoulder higher than the other (with the chronicler Rous unable to remember correctly which one, so slight was the difference), Richard had no other noticeable bodily deformity. John Stow talked to old men who, remembering him, said "that he was of bodily shape comely enough, only of low stature" and a German traveller, Nicolas von Poppelau, who spent ten days in Richard's household in May 1484, describes him as "three fingers taller than himself...much more lean, with delicate arms and legs and also a great heart." Six years after Richard's death, in 1491, a schoolmaster named William Burton, on hearing a defence of Richard, launched into a diatribe, accusing the dead King of being 'a hypocrite and a crookback...who was deservedly buried in a ditch like a dog.'
After Richard's death, his Tudor successors furthered this negative image because it helped to legitimise Henry VII's seizure of the throne. The Richard III Society contends that this means that 'a lot of what people thought they knew about Richard III was pretty much propaganda and myth building.' The Tudor characterisation culminated in the famous fictional portrayal of him in Shakespeare's play "Richard III" as a physically deformed, Machiavellian villain, ruthlessly committing numerous murders in order to claw his way to power; Shakespeare's intention perhaps being to use Richard III as a vehicle for creating his own Marlowesque protagonist. Rous himself, in his "History of the Kings of England", written during Henry VII's reign, initiated the process. He reversed his earlier position, and now portrayed Richard as a freakish individual who was born with teeth and shoulder-length hair after having been in his mother's womb for two years. His body was stunted and distorted, with one shoulder higher than the other, and he was "slight in body and weak in strength". Rous also attributes the murder of Henry VI to Richard, and claims that he poisoned his own wife. Jeremy Potter, a former Chair of the Richard III Society, claims that "At the bar of history Richard III continues to be guilty because it is impossible to prove him innocent. The Tudors ride high in popular esteem."
Polydore Vergil and Thomas More expanded on this portrayal, emphasising Richard's outward physical deformities as a sign of his inwardly twisted mind. More describes him as "little of stature, ill-featured of limbs, crook-backed ... hard-favoured of visage". Vergil also says he was "deformed of body ... one shoulder higher than the right". Both emphasise that Richard was devious and flattering, while planning the downfall of both his enemies and supposed friends. Richard's good qualities were his cleverness and bravery. All these characteristics are repeated by Shakespeare, who portrays him as having a hunch, a limp and a withered arm. With regard to the "hunch", the second quarto edition of "Richard III" (1598) used the term "hunched-backed" but in the First Folio edition (1623) it became "bunch-backed".
Richard's reputation as a promoter of legal fairness persisted, however. William Camden in his "Remains Concerning Britain" (1605) states that Richard, "albeit he lived wickedly, yet made good laws". Francis Bacon also states that he was "a good lawmaker for the ease and solace of the common people". In 1525, Cardinal Wolsey upbraided the aldermen and Mayor of London for relying on a statute of Richard to avoid paying an extorted tax (benevolence) but received the reply 'although he did evil, yet in his time were many good acts made.'
Despite this, the image of Richard as a ruthless power-grabber remained dominant in the 18th and 19th centuries. The 18th century philosopher and historian David Hume described him as a man who used dissimulation to conceal "his fierce and savage nature" and who had "abandoned all principles of honour and humanity". Hume acknowledged that some historians have argued "that he was well qualified for government, had he legally obtained it; and that he committed no crimes but such as were necessary to procure him possession of the crown", but he dismissed this view on the grounds that Richard's exercise of arbitrary power encouraged instability. The most important late 19th-century biographer of the king was James Gairdner, who also wrote the entry on Richard in the "Dictionary of National Biography". Gairdner stated that he had begun to study Richard with a neutral viewpoint, but became convinced that Shakespeare and More were essentially correct in their view of the king, despite some exaggerations.
Richard was not without his defenders, the first of whom was George Buck, a descendant of one of the king's supporters, who completed a historical account of Richard's life in 1619. Buck attacked the "improbable imputations and strange and spiteful scandals" related by Tudor writers, including Richard's alleged deformities and murders. He located lost archival material, including the "Titulus Regius", but also claimed to have seen a letter written by Elizabeth of York, according to which Elizabeth sought to marry the king. Though the book was published in 1646, Elizabeth's supposed letter was never produced. Documents which later emerged from the Portuguese Royal archives show that after Queen Anne's death, Richard's ambassadors were sent on a formal errand to negotiate a double marriage between Richard and the Portuguese King's sister Joana, of Lancastrian descent, and between Elizabeth of York and Joana's cousin Duke Manuel (later King of Portugal).
Significant among Richard's defenders was Horace Walpole. In "Historic Doubts on the Life and Reign of King Richard the Third" (1768), Walpole disputed all the alleged murders and argued that Richard may have acted in good faith. He also argued that any physical abnormality was probably no more than a minor distortion of the shoulders. However, he retracted his views in 1793 after the Terror, stating he now believed that Richard could have committed the crimes he was charged with, although Pollard observes that this retraction is frequently overlooked by later admirers of Richard. Other defenders of Richard include the noted explorer Clements Markham, whose "Richard III: His Life and Character" (1906) replied to the work of Gairdner. He argued that Henry VII killed the princes and that the bulk of evidence against Richard was nothing more than Tudor propaganda. An intermediate view was provided by Alfred Legge in "The Unpopular King" (1885). Legge argued that Richard's "greatness of soul" was eventually "warped and dwarfed" by the ingratitude of others.
Some twentieth-century historians have been less inclined to moral judgement, seeing Richard's actions as a product of the unstable times. In the words of Charles Ross, "the later fifteenth century in England is now seen as a ruthless and violent age as concerns the upper ranks of society, full of private feuds, intimidation, land-hunger, and litigiousness, and consideration of Richard's life and career against this background has tended to remove him from the lonely pinnacle of Villainy Incarnate on which Shakespeare had placed him. Like most men, he was conditioned by the standards of his age." The Richard III Society, founded in 1924 as "The Fellowship of the White Boar", is the oldest of several groups dedicated to improving his reputation. Other contemporary historians still describe him as a "power-hungry and ruthless politician" who was most probably "ultimately responsible for the murder of his nephews."
Apart from Shakespeare, Richard appears in many other works of literature. Two other plays of the Elizabethan era predated Shakespeare's work. The Latin-language drama "Richardus Tertius" (first known performance in 1580) by Thomas Legge is believed to be the first history play written in England. The anonymous play "The True Tragedy of Richard III" (c. 1590), performed in the same decade as Shakespeare's work, was probably an influence on Shakespeare. Neither of the two plays places any emphasis on Richard's physical appearance, though the "True Tragedy" briefly mentions that he is "A man ill shaped, crooked backed, lame armed" adding that he is "valiantly minded, but tyrannous in authority". Both portray him as a man motivated by personal ambition, who uses everyone around him to get his way. Ben Jonson is also known to have written a play "Richard Crookback" in 1602, but it was never published and nothing is known about its portrayal of the king.
Marjorie Bowen's 1929 novel "Dickon" set the trend for pro-Ricardian literature. Particularly influential was "The Daughter of Time" (1951) by Josephine Tey, in which a modern detective concludes that Richard III is innocent in the death of the Princes. Other novelists such as Valerie Anand in the novel "Crown of Roses" (1989) have also offered alternative versions to the theory that he murdered them. Sharon Kay Penman, in her historical novel "The Sunne in Splendour", attributes the death of the Princes to the Duke of Buckingham. In the mystery novel "The Murders of Richard III" by Elizabeth Peters (1974) the central plot revolves around the debate as to whether Richard III was guilty of these and other crimes. A sympathetic portrayal of Richard III is given in "The Founding" (1980), the first volume in "The Morland Dynasty" series by Cynthia Harrod-Eagles.
One film adaptation of Shakespeare's play "Richard III" is the 1955 version directed and produced by Laurence Olivier, who also played the lead role. Also notable are the 1995 film version starring Ian McKellen, set in a fictional 1930s fascist England, and "Looking for Richard", a 1996 documentary film directed by Al Pacino, who plays the title character as well as himself. The play has been adapted for television on several occasions.
On 24 August 2012, the University of Leicester and Leicester City Council, in association with the Richard III Society, announced that they had joined forces to begin a search for the remains of King Richard. The search for Richard III was led by Philippa Langley of the Society's "Looking For Richard" Project with the archaeological work led by University of Leicester Archaeological Services (ULAS). Experts set out to locate the lost site of the former Greyfriars Church (demolished during Henry VIII's dissolution of the monasteries), and to discover whether his remains were still interred there. By comparing fixed points between maps in a historical sequence, the search located the Church of the Grey Friars, where Richard's body had been hastily buried without pomp in 1485, its foundations identifiable beneath a modern-day city centre car park.
On 5 September 2012, the excavators announced that they had identified Greyfriars church and two days later that they had identified the location of Robert Herrick's garden, where the memorial to Richard III stood in the early 17th century. A human skeleton was found beneath the Church's choir.
Improbably, the excavators found the remains in the first location in which they dug at the car park. Coincidentally, they lay almost directly under a roughly painted "R" on the tarmac. This had existed since the early 2000s to signify a reserved parking space.
On 12 September, it was announced that the skeleton discovered during the search might be that of Richard III. Several reasons were given: the body was of an adult male; it was buried beneath the choir of the church; and there was severe scoliosis of the spine, possibly making one shoulder higher than the other (to what extent depended on the severity of the condition). Additionally, there was an object that appeared to be an arrowhead embedded in the spine; and there were perimortem injuries to the skull. These included a relatively shallow orifice, which is most likely to have been caused by a rondel dagger, and a scooping depression to the skull, inflicted by a bladed weapon, most probably a sword. Additionally, the bottom of the skull presented a gaping hole, where a halberd had cut away and entered it.
Forensic pathologist Dr Stuart Hamilton stated that this injury would have left the individual's brain visible, and most certainly would have been the cause of death. Dr Jo Appleby, the osteo-archaeologist who excavated the skeleton, concurred and described the latter as "a mortal battlefield wound in the back of the skull". The base of the skull also presented another fatal wound in which a bladed weapon had been thrust into it, leaving behind a jagged hole. Closer examination of the interior of the skull revealed a mark opposite this wound, showing that the blade penetrated to a considerable depth.
In total, the skeleton presented ten wounds: four minor injuries on the top of the skull, one dagger blow on the cheekbone, one cut on the lower jaw, two fatal injuries on the base of the skull, one cut on a rib bone, and one final wound on the pelvis, most probably inflicted after death. It is generally accepted that postmortem, Richard's naked body was tied to the back of a horse, with his arms slung over one side and his legs and buttocks over the other. This presented a tempting target for onlookers, and the angle of the blow on the pelvis suggests that one of them stabbed Richard's right buttock with substantial force, as the cut extends from the back all the way to the front of the pelvic bone and was most probably an act of humiliation. It is also possible that Richard suffered other injuries which left no trace on the skeleton.
British historian John Ashdown-Hill had used genealogical research in 2004 to trace matrilineal descendants of Anne of York, Richard's elder sister. A British-born woman who emigrated to Canada after the Second World War, Joy Ibsen, was found to be a 16th-generation great-niece of the king in the same direct maternal line. Joy Ibsen's mitochondrial DNA was tested and belongs to mitochondrial DNA haplogroup J, which by deduction should also be the mitochondrial DNA haplogroup of Richard III. Joy Ibsen died in 2008. Her son Michael Ibsen gave a mouth-swab sample to the research team on 24 August 2012. His mitochondrial DNA, passed down the direct maternal line, was compared to samples from the human remains found at the excavation site and used to identify King Richard.
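The deduction from Joy Ibsen's haplogroup to Richard's rests on strict maternal inheritance of mitochondrial DNA: barring mutation, each person carries his or her mother's haplogroup unchanged. A toy sketch of that reasoning, with the typed ancestor and the bare haplogroup label assumed for illustration:

```python
# Mitochondrial DNA follows the unbroken maternal line, so two relatives who
# share a common maternal-line ancestor share her mtDNA haplogroup.
# The pedigree and haplogroup assignment below are illustrative only.

def maternal_haplogroup(person: str, mother_of: dict[str, str],
                        known: dict[str, str]) -> str:
    """Walk up the maternal line until an ancestor with a typed haplogroup is found."""
    while person not in known:
        person = mother_of[person]
    return known[person]

mother_of = {"Richard III": "Cecily Neville", "Anne of York": "Cecily Neville"}
known = {"Cecily Neville": "J"}  # haplogroup J, as reported for this line

print(maternal_haplogroup("Richard III", mother_of, known))   # J
print(maternal_haplogroup("Anne of York", mother_of, known))  # J
```

The same logic extends down Anne of York's female line to Joy Ibsen: as long as every link in the chain is mother-to-child, the haplogroup is preserved.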
On 4 February 2013, the University of Leicester confirmed that the skeleton was beyond reasonable doubt that of King Richard III. This conclusion was based on mitochondrial DNA evidence, soil analysis, and dental tests (there were some molars missing as a result of caries), as well as physical characteristics of the skeleton which are highly consistent with contemporary accounts of Richard's appearance. The team announced that the "arrowhead" discovered with the body was a Roman-era nail, probably disturbed when the body was first interred. However, there were numerous perimortem wounds on the body, and part of the skull had been sliced off with a bladed weapon; this would have caused rapid death. The team concluded that it is unlikely that the king was wearing a helmet in his last moments. Soil taken from the remains was found to contain microscopic roundworm eggs. Several eggs were found in samples taken from the pelvis, where the king's intestines were, but not from the skull and only very small numbers were identified in soil surrounding the grave. The findings suggest that the higher concentration of eggs in the pelvic area probably arose from a roundworm infection the King suffered in his life, rather than from human waste dumped in the area at a later date, researchers said. The Mayor of Leicester announced that the king's skeleton would be re-interred at Leicester Cathedral in early 2014, but a judicial review of that decision delayed the reinterment for a year. A museum to Richard III was opened in July 2014 in the Victorian school buildings next to the Greyfriars grave site.
The proposal to have King Richard buried in Leicester attracted some controversy. Those who challenged the decision included fifteen "collateral [non-direct] descendants of Richard III", represented by the Plantagenet Alliance, who believed that the body should be reburied in York, as they claim the king wished. In August 2013, they filed a court case in order to contest Leicester's claim to re-inter the body within its cathedral, and propose the body be buried in York instead. However, Michael Ibsen, who gave the DNA sample that identified the king, gave his support to Leicester's claim to re-inter the body in their cathedral. On 20 August, a judge ruled that the opponents had the legal standing to contest his burial in Leicester Cathedral, despite a clause in the contract which had authorized the excavations requiring his burial there. He urged the parties, though, to settle out of court in order to "avoid embarking on the Wars of the Roses, Part Two". The Plantagenet Alliance, and the supporting fifteen collateral descendants, also faced the challenge that "Basic maths shows Richard, who had no surviving children but five siblings, could have millions of 'collateral' descendants" undermining the group's claim to represent "the only people who can speak on behalf of him". A ruling in May 2014 decreed that there are "no public law grounds for the Court interfering with the decisions in question". The remains were taken to Leicester Cathedral on 22 March 2015 and reinterred on 26 March.
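The "basic maths" point can be illustrated with a toy exponential model. The figures used here (five sibling lines, roughly 17 generations between 1485 and the present, two surviving children per person) are assumptions for illustration, not genealogical data:

```python
# Rough illustration of why a king with five siblings could have millions of
# collateral descendants today. All parameters are assumed, not researched.

def collateral_descendants(founders: int, generations: int,
                           children_per_person: float) -> int:
    """Size of the final generation under simple exponential growth."""
    return round(founders * children_per_person ** generations)

estimate = collateral_descendants(founders=5, generations=17,
                                  children_per_person=2.0)
print(f"{estimate:,}")  # 655,360 people in the final generation alone
```

Even with intermarriage and extinct lines trimming the real number, the order of magnitude makes clear why no small group could claim to be "the only people who can speak on behalf of him".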
On 5 February 2013 Professor Caroline Wilkinson of the University of Dundee conducted a facial reconstruction of Richard III, commissioned by the Richard III Society, based on 3D mappings of his skull. The face is described as "warm, young, earnest and rather serious". On 11 February 2014 the University of Leicester announced the project to sequence the entire genome of Richard III and one of his living relatives, Michael Ibsen, whose mitochondrial DNA confirmed the identification of the excavated remains. Richard III thus became the first ancient person of known historical identity whose genome has been sequenced.
In November 2014, the results of the testing were announced, confirming that the maternal side was as previously thought. The paternal side, however, demonstrated some variance from what had been expected, with the DNA showing no links to the purported descendants of Richard's great-great-grandfather Edward III of England through Henry Somerset, 5th Duke of Beaufort. This could be the result of covert illegitimacy that does not reflect the accepted genealogies between Richard and Edward III or between Edward III and the 5th Duke of Beaufort.
In 1485, following his death in battle against Henry Tudor at Bosworth Field, Richard III's body was buried in Greyfriars Church in Leicester.
Following the discoveries of Richard's remains in 2012, it was decided that they should be reburied at Leicester Cathedral, despite feelings in some quarters that he should have been reburied in York Minster. His remains were carried in procession to the cathedral on 22 March 2015, and reburied on 26 March 2015 at a religious re-burial service at which both the Right Reverend Tim Stevens, the Bishop of Leicester, and the Most Reverend Justin Welby, the Archbishop of Canterbury, officiated. The British royal family was represented by the Duke and Duchess of Gloucester and the Countess of Wessex. The actor Benedict Cumberbatch, who later portrayed him in "The Hollow Crown" television series, read a poem by poet laureate Carol Ann Duffy.
His cathedral tomb was designed by the architects van Heyningen and Haward. The tombstone is deeply incised with a cross, and consists of a rectangular block of white Swaledale fossil stone, quarried in North Yorkshire. It sits on a low plinth made of dark Kilkenny marble, incised with Richard's name, dates and motto ("Loyaulte me lie" – loyalty binds me). The plinth also carries his coat of arms in pietra dura. The remains of Richard III are in a lead-lined inner casket, inside an outer English oak coffin crafted by Michael Ibsen, a direct descendant of Richard's sister Anne of York, and laid in a brick-lined vault below the floor, and below the plinth and tombstone. The original 2010 raised tomb design had been proposed by Langley's "Looking For Richard Project" and fully funded by members of the Richard III Society. The proposal was publicly launched by the Society on 13 February 2013 but rejected by Leicester Cathedral in favour of a memorial slab. However, following a public outcry, the Cathedral changed its position and on 18 July 2013 announced its agreement to give King Richard III a raised tomb monument.
On 1 November 1461, Richard gained the title of Duke of Gloucester; in late 1461, he was invested as a Knight of the Garter. Following the death of King Edward IV, he was made Lord Protector of England. Richard held this office from 30 April to 26 June 1483, when he made himself king of the realm. As King of England, Richard was styled "Dei Gratia Rex Angliae et Franciae et Dominus Hiberniae" ("by the Grace of God, King of England and France and Lord of Ireland").
Informally, he may have been known as "Dickon", according to a sixteenth-century legend of a note, warning of treachery, that was sent to the Duke of Norfolk on the eve of Bosworth:
As Duke of Gloucester, Richard used the Royal Arms of England quartered with the Royal Arms of France, differenced by a label argent of three points ermine, on each point a canton gules, supported by a blue boar. As sovereign, he used the arms of the kingdom undifferenced, supported by a white boar and a lion. His motto was "Loyaulte me lie", "Loyalty binds me"; and his personal device was a white boar.
Restriction fragment length polymorphism
In molecular biology, restriction fragment length polymorphism (RFLP) is a technique that exploits variations in homologous DNA sequences, known as polymorphisms, in order to distinguish individuals, populations, or species or to pinpoint the locations of genes within a sequence. The term may refer to a polymorphism itself, as detected through the differing locations of restriction enzyme sites, or to a related laboratory technique by which such differences can be illustrated. In RFLP analysis, a DNA sample is digested into fragments by one or more restriction enzymes, and the resulting "restriction fragments" are then separated by gel electrophoresis according to their size.
Although now largely obsolete due to the emergence of inexpensive DNA sequencing technologies, RFLP analysis was the first DNA profiling technique inexpensive enough to see widespread application. RFLP analysis was an important early tool in genome mapping, localization of genes for genetic disorders, determination of risk for disease, and paternity testing.
The basic technique for the detection of RFLPs involves fragmenting a sample of DNA with the application of a restriction enzyme, which can selectively cleave a DNA molecule wherever a short, specific sequence is recognized in a process known as a restriction digest. The DNA fragments produced by the digest are then separated by length through a process known as agarose gel electrophoresis and transferred to a membrane via the Southern blot procedure. Hybridization of the membrane to a labeled DNA probe then determines the length of the fragments which are complementary to the probe. A restriction fragment length polymorphism is said to occur when the length of a detected fragment varies between individuals, indicating non-identical sequence homologies. Each fragment length is considered an allele, whether it actually contains a coding region or not, and can be used in subsequent genetic analysis.
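The core of the procedure — cutting at every occurrence of a recognition sequence and reading off the fragment lengths — can be sketched in a few lines. The sequence and probe below are invented, and keeping each recognition site with the downstream fragment is a deliberate simplification (EcoRI, for instance, actually cuts within its GAATTC site):

```python
# Toy simulation of a restriction digest followed by probe detection.
# We split the DNA at every occurrence of the recognition site GAATTC and
# report fragment lengths, as they would separate on a gel; a Southern-blot
# probe then "detects" only fragments containing its complementary sequence.

def digest(sequence: str, site: str) -> list[str]:
    """Cut the sequence at the start of every recognition site (simplified)."""
    fragments, start = [], 0
    pos = sequence.find(site)
    while pos != -1:
        fragments.append(sequence[start:pos])
        start = pos  # keep the site with the downstream fragment
        pos = sequence.find(site, pos + 1)
    fragments.append(sequence[start:])
    return [f for f in fragments if f]

def detected_fragments(fragments: list[str], probe: str) -> list[int]:
    """Lengths of the fragments a labelled probe would hybridize to."""
    return [len(f) for f in fragments if probe in f]

seq = "ATATGAATTCGGCCGAATTCTTAA"
frags = digest(seq, "GAATTC")
print([len(f) for f in frags])            # [4, 10, 10] -- three gel bands
print(detected_fragments(frags, "GGCC"))  # [10] -- only one band lights up
```

A mutation that destroyed one GAATTC site would fuse two fragments into a longer one, which is exactly the kind of length difference an RFLP records.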
There are two common mechanisms by which the size of a particular restriction fragment can vary. In the first schematic, a small segment of the genome is being detected by a DNA probe (thicker line). In allele "A", the genome is cleaved by a restriction enzyme at three nearby sites (triangles), but only the rightmost fragment will be detected by the probe. In allele "a", restriction site 2 has been lost by a mutation, so the probe now detects the larger fused fragment running from sites 1 to 3. The second diagram shows how this fragment size variation would look on a Southern blot, and how each allele (two per individual) might be inherited in members of a family.
In the third schematic, the probe and restriction enzyme are chosen to detect a region of the genome that includes a variable number tandem repeat (VNTR) segment (boxes in schematic diagram). In allele "c", there are five repeats in the VNTR, and the probe detects a longer fragment between the two restriction sites. In allele "d", there are only two repeats in the VNTR, so the probe detects a shorter fragment between the same two restriction sites. Other genetic processes, such as insertions, deletions, translocations, and inversions, can also lead to polymorphisms. RFLP tests require much larger samples of DNA than do short tandem repeat (STR) tests.
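The VNTR mechanism reduces to simple arithmetic: the fragment between two fixed restriction sites lengthens by one repeat unit for each extra repeat. A minimal sketch, with all sizes assumed purely for illustration:

```python
# Illustrative VNTR allele sizing (all base-pair figures are invented).
# The detected fragment = left flank + (repeat length x repeat count) + right
# flank, so alleles with different repeat counts give different gel bands.

def vntr_fragment_length(flank_left: int, flank_right: int,
                         repeat_length: int, repeat_count: int) -> int:
    return flank_left + repeat_length * repeat_count + flank_right

# Allele "c" (5 repeats) versus allele "d" (2 repeats) of a 16 bp repeat,
# with 300 bp and 200 bp of flanking DNA inside the two restriction sites:
allele_c = vntr_fragment_length(300, 200, 16, 5)
allele_d = vntr_fragment_length(300, 200, 16, 2)
print(allele_c, allele_d)  # 580 532
```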
Analysis of RFLP variation in genomes was formerly a vital tool in genome mapping and genetic disease analysis. If researchers were trying to initially determine the chromosomal location of a particular disease gene, they would analyze the DNA of members of a family afflicted by the disease, and look for RFLP alleles that show a similar pattern of inheritance as that of the disease (see genetic linkage). Once a disease gene was localized, RFLP analysis of other families could reveal who was at risk for the disease, or who was likely to be a carrier of the mutant genes. RFLP testing is also used to identify and differentiate organisms by analyzing unique patterns in their genomes, and to estimate recombination rates at loci between restriction sites.
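The co-segregation reasoning described above can be sketched as a toy check. Real linkage studies compute statistical LOD scores over many meioses rather than an all-or-nothing test, and the family data here are invented:

```python
# Minimal sketch of linkage reasoning: an RFLP allele is a candidate marker
# for a disease gene if, within a family, carrying the allele tracks with
# being affected. Family genotypes below are made up for illustration.

def cosegregates(family: list[dict], allele: str) -> bool:
    """True if every affected member carries the allele and no unaffected member does."""
    return all((allele in m["alleles"]) == m["affected"] for m in family)

family = [
    {"alleles": ("A", "a"), "affected": True},
    {"alleles": ("a", "a"), "affected": False},
    {"alleles": ("A", "a"), "affected": True},
]
print(cosegregates(family, "A"))  # True: allele "A" tracks with the disease
print(cosegregates(family, "a"))  # False: allele "a" does not
```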
RFLP analysis was also the basis for early methods of genetic fingerprinting, useful in the identification of samples retrieved from crime scenes, in the determination of paternity, and in the characterization of genetic diversity or breeding patterns in animal populations.
The technique for RFLP analysis is, however, slow and cumbersome. It requires a large amount of sample DNA, and the combined process of probe labeling, DNA fragmentation, electrophoresis, blotting, hybridization, washing, and autoradiography can take up to a month to complete. A limited version of the RFLP method that used oligonucleotide probes was reported in 1985. The results of the Human Genome Project have largely replaced the need for RFLP mapping, and the identification of many single-nucleotide polymorphisms (SNPs) in that project (as well as the direct identification of many disease genes and mutations) has replaced the need for RFLP disease linkage analysis (see SNP genotyping). The analysis of VNTR alleles continues, but is now usually performed by polymerase chain reaction (PCR) methods. For example, the standard protocols for DNA fingerprinting involve PCR analysis of panels of more than a dozen VNTRs.
RFLP is still used in marker-assisted selection. Terminal restriction fragment length polymorphism (TRFLP or sometimes T-RFLP) is a technique initially developed for characterizing bacterial communities in mixed-species samples. The technique has also been applied to other groups including soil fungi. TRFLP works by PCR amplification of DNA using primer pairs that have been labeled with fluorescent tags. The PCR products are then digested using RFLP enzymes and the resulting patterns visualized using a DNA sequencer. The results are analyzed either by simply counting and comparing bands or peaks in the TRFLP profile, or by matching bands from one or more TRFLP runs to a database of known species. The technique is similar in some aspects to temperature gradient or denaturing gradient gel electrophoresis (TGGE and DGGE).
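The final "counting and comparing bands or peaks" step of TRFLP can be sketched as matching observed terminal-fragment sizes against a reference table within a small size tolerance. The fragment sizes and species names below are invented:

```python
# Sketch of the peak-matching step in TRFLP analysis (all sizes and names
# are illustrative assumptions, not real reference data).

def match_peaks(observed: list[float], reference: dict[str, float],
                tolerance: float = 1.0) -> list[str]:
    """Reference entries whose fragment size lies within tolerance of a peak."""
    hits = []
    for species, size in reference.items():
        if any(abs(peak - size) <= tolerance for peak in observed):
            hits.append(species)
    return hits

reference = {"Species X": 87.0, "Species Y": 152.5, "Species Z": 240.0}
print(match_peaks([86.6, 153.0], reference))  # ['Species X', 'Species Y']
```

Real pipelines also bin peaks, normalize fluorescence intensity, and compare whole profiles between runs, but the matching logic is of this shape.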
The sequence changes directly involved with an RFLP can also be analyzed more quickly by PCR. Amplification can be directed across the altered restriction site, and the products digested with the restriction enzyme. This method has been called Cleaved Amplified Polymorphic Sequence (CAPS). Alternatively, the amplified segment can be analyzed by allele-specific oligonucleotide (ASO) probes, a process that can often be done by a simple dot blot.
Rocket-propelled grenade
A rocket-propelled grenade (often abbreviated RPG) is a shoulder-fired missile weapon that launches rockets equipped with an explosive warhead. Most RPGs can be carried by an individual soldier, and are frequently used as anti-tank weapons. These warheads are affixed to a rocket motor which propels the RPG towards the target and they are stabilized in flight with fins. Some types of RPG are reloadable with new rocket-propelled grenades, while others are single-use. RPGs, with some exceptions, are generally loaded from the front.
RPGs with high explosive anti-tank (HEAT) warheads are very effective against lightly armored vehicles such as armored personnel carriers (APCs) and armored cars. However, modern heavily armored vehicles, such as main battle tanks, are generally too well protected (with thick composite or reactive armor) to be penetrated by an RPG, unless less-armored sections of the vehicle are exploited. Various warheads are also capable of causing secondary damage to vulnerable systems (especially sights, tracks, and the rear and roof of turrets) and other unarmored targets.
The term "rocket-propelled grenade" is, strictly speaking, a backronym; it stems from the Russian language РПГ which stands for ручной противотанковый гранатомёт (transliterated as "ruchnoy protivotankovy granatomyot", which has the initials "RPG"), meaning "handheld anti-tank grenade launcher", the name given to early Russian designs.
The static nature of trench warfare in World War I encouraged the use of shielded defenses, even including personal armor, that were impenetrable by standard rifle ammunition. This led to some isolated experiments with higher caliber rifles, similar to elephant guns, using armor-piercing ammunition. The very first tanks, the British Mark I, could be penetrated by these weapons under the right conditions. Mark IV tanks, however, had slightly thicker armor. In response, the Germans rushed to create an upgraded version of these early anti-armor rifles, the Tankgewehr M1918, the first anti-tank rifle. In the inter-war years, tank armor continued to increase overall, to the point that anti-tank rifles could no longer be effective against anything but light tanks; any rifle made powerful enough for heavier tanks would exceed the ability of a soldier to carry and fire the weapon.
Even with the first tanks, artillery officers often used field guns depressed to fire directly at armored targets. However, this practice expended much valuable ammunition and was of increasingly limited effectiveness as tank armor became thicker. This led to the concept of anti-tank guns, a form of artillery specifically designed to destroy armored fighting vehicles, normally from static defensive positions (that is, immobile during a battle).
The first dedicated anti-tank artillery began appearing in the 1920s, and by World War II was a common appearance in most armies. In order to penetrate armor they fired specialized ammunition from proportionally longer barrels to achieve a higher muzzle velocity than field guns. Most anti-tank guns were developed in the 1930s as improvements in tanks were noted, and nearly every major arms manufacturer produced one type or another.
Anti-tank guns deployed during World War II were manned by specialist infantry rather than artillery crews, and issued to infantry units accordingly. The anti-tank guns of the 1930s were of small caliber; nearly all major armies possessing them used 37mm ammunition, except for the British Army, which had developed the 40mm Ordnance QF 2-pounder. As World War II progressed, the appearance of heavier tanks rendered these weapons obsolete and anti-tank guns likewise began firing larger calibre and more effective armor-piercing shells. Although a number of large caliber guns were developed during the war that were capable of knocking out the most heavily armored tanks, they proved slow to set up and difficult to conceal. The latter generation of low-recoil anti-tank weapons, which allowed projectiles the size of an artillery shell to be fired from a man's shoulder, was considered a far more viable option for arming infantrymen.
The RPG has its roots in the early 20th century with the development of the explosive shaped charge, in which the explosive is made with a conical hollow that concentrates its power on the impact point. Before the adoption of the shaped charge, anti-tank guns and tank guns relied primarily on the kinetic energy of metal shells to defeat armor. Soldier-carried anti-tank rifles such as the Boys anti-tank rifle could be used against lightly armored tankettes and light armored vehicles. However, as tank armor grew thicker and more effective during World War II, the anti-tank guns needed to defeat it became increasingly heavy, cumbersome and expensive.
While larger anti-tank guns were more effective, their weight meant that they were increasingly mounted on wheeled, towed platforms, so infantry on foot might have no access to them. This led to situations where infantry could find themselves defenseless against tanks. Armies therefore needed to give infantry a human-portable weapon, one that could be carried by a single soldier, to defeat enemy armor when no towed anti-tank guns were available, since anti-tank rifles were no longer effective. Initial attempts to put such weapons in the hands of the infantry resulted in weapons like the Soviet RPG-40 "blast effect" hand grenade (where "RPG" stood for "ruchnaya protivotankovaya granata", meaning hand-held anti-tank grenade). The later RPG-43 and RPG-6 used shaped charges, the chemical energy of their explosive being used more efficiently to defeat thicker armor; however, being hand-thrown weapons, they still had to be deployed at suicidally close range to be effective. What was needed was a means of delivering the shaped-charge warhead from a distance. Different approaches to this goal would lead to the anti-tank spigot mortar, the recoilless rifle and, from the development of practical rocketry, the rocket-propelled grenade.
Research occasioned by World War II produced such weapons as the American Bazooka and the German Panzerfaust, which combined portability with effectiveness against armored vehicles such as tanks. The Soviet-developed RPG-7 is the most widely distributed, recognizable and used RPG in the world. Its basic design was developed by the Soviets shortly after World War II in the form of the RPG-2, which resembles the Bazooka in being reloadable and the Panzerfaust in its recoilless launch of an oversized grenade protruding from a smaller launch tube, though the rounds it fires have no propulsion beyond the launch charge (unlike RPG-7 rounds, which also carry a sustainer motor, making them true rocket-propelled grenades).
Soviet RPGs were used extensively during the Vietnam War (by the Vietnam People's Army and Vietcong), as well as during the Soviet invasion of Afghanistan by the Mujahideen, and against South Africans in Angola and Namibia (formerly South West Africa) by SWAPO guerrillas during what the South Africans called the South African Border War. In the 2000s, they were still being used widely in conflict areas such as Chechnya, Iraq, and Sri Lanka. Militants have also used RPGs against helicopters: Taliban fighters shot down U.S. CH-47 Chinook helicopters in June 2005 and August 2011, and Somali militiamen shot down two U.S. UH-60 Black Hawk helicopters during the Battle of Mogadishu in 1993.
RPG warheads used against tanks and other armor typically carry a shaped-charge explosive. A shaped charge is an explosive charge shaped to focus the effect of the explosive's energy. Various types are used to penetrate tank armor; a typical modern lined shaped charge can penetrate steel armor to a depth of seven or more times the diameter of the charge (charge diameters, CD), though depths of 10 CD and above have been achieved. Despite the popular misconception that shaped charges "melt" tank armor, the shaped charge does not depend in any way on heating or melting for its effectiveness: the superplastic metal jet formed when a shaped charge strikes armor is produced by sudden, intense mechanical stress, and it penetrates armor through kinetic energy rather than by melting its way through.
An RPG comprises two main parts: the launcher and a rocket equipped with a warhead. The most common types of warheads are high explosive (HE) and high explosive anti-tank (HEAT) rounds. HE rounds can be used against troops or unarmored structures or vehicles. HEAT rounds can be used against armored vehicles. These warheads are affixed to a rocket motor and stabilized in flight with fins. Some types of RPG are single-use disposable units, such as the RPG-22 and M72 LAW; with these units, once the rocket is fired, the entire launcher is disposed of. Others are reloadable, such as the Soviet RPG-7 and the Israeli B-300. With reloadable RPGs, a new rocket can be inserted into the muzzle of the weapon after firing.
The launcher is designed so that the rocket exits it without discharging an exhaust that would be dangerous to the operator (a problem that affected the earliest RPG weapon systems, such as the German Panzerschreck, which featured a metal shield for the operator attached to the launch tube). In the case of the RPG-7, the rocket is launched by a gunpowder booster charge, and the rocket motor ignites only after the round has traveled about 10 metres. In some other designs, the propellant charge burns completely within the tube.
An RPG is an inexpensive way of delivering an explosive payload or warhead over a distance with moderate accuracy. Substantially more expensive guided anti-tank missiles are used at longer distances or when accuracy is paramount. Some anti-tank missiles, such as the Sagger, can be guided by the operator after firing. An RPG is not normally guided towards the target by heat sensors or IR signatures, nor can most RPG rockets be controlled in flight after being aimed and launched. While the lack of active targeting or after-firing guidance can be viewed as a weakness, it also makes RPGs hard to defend against with electronic countermeasures, jamming or similar approaches. For example, if a fighter launches an RPG at a hovering helicopter, the helicopter's release of flares or radar-fooling chaff, or its use of signal jamming, will have no effect on a typical RPG warhead in flight, even though these measures might protect against more sophisticated surface-to-air missiles.
The HEAT (high explosive anti-tank) round is a standard shaped-charge warhead, similar in concept to those used in many tank cannon rounds. In this type of warhead, the shape of the explosive material within the warhead focuses the explosive energy on a copper (or similar metal) lining. This heats the lining and propels some of it forward at very high velocity in a highly plastic state. The resulting narrow jet of metal can defeat armor equivalent to several hundred millimeters of RHA, such as that used in light and medium armored vehicles. However, heavily armored vehicles, such as main battle tanks, are generally too well armored to be penetrated by an RPG unless weaker sections of the armor are exploited. Various warheads are also capable of causing secondary damage to vulnerable systems (especially sights, tracks, and the rear and roof of turrets) and other soft targets. The warhead detonates on impact or when the fuse runs out; usually the fuse is set to the maximum burn of the rocket motor, but it can be shortened for improvised anti-aircraft purposes.
Specialized warheads are available for illumination, smoke, tear gas, and white phosphorus. Russia, China, and many former Warsaw Pact nations have also developed a fuel-air explosive (thermobaric) warhead. Another recent development is a tandem HEAT warhead capable of penetrating reactive armor.
So-called PRIGs (Propelled Recoilless Improvised Grenade) were improvised warheads used by the Provisional IRA.
The RPG-29 uses a tandem-charge high explosive anti-tank warhead to penetrate explosive reactive armor (ERA) as well as the composite armor behind it. It is capable of penetrating MBTs such as the M1 Abrams, the older Mark II version of the Merkava, the Challenger 2 and the T-90.
In August 2006, in al-Amarah, in Iraq, a Soviet RPG-29 damaged the front underside of a Challenger 2 tank, detonating ERA in the area of the driver's cabin. The driver lost part of a foot and two more of the crew were injured, but the driver was able to reverse to an aid post. The incident was not made public until May 2007; in response to accusations, the MoD said "We have never claimed that the Challenger 2 is impenetrable." Since then, the ERA has been replaced with a Dorchester block and the steel underbelly lined with armor, as part of the 'Streetfighter' upgrade, which was a direct response to this incident. In May 2008, "The New York Times" disclosed that an American M1 tank had also been damaged by an RPG-29 in Iraq. The U.S. Army ranks the RPG-29 threat to American armor as high; it has refused to allow the newly formed Iraqi army to buy the weapon, fearing that it would fall into the hands of insurgents.
Various armies and manufacturers have developed add-on tank armor and other systems for urban combat, such as the Tank Urban Survival Kit (TUSK) for M1 Abrams, slat armor for the Stryker, ERA kit for the FV432, AZUR for Leclerc, and others. Similar solutions are active protection systems (APS), engaging and destroying closing projectiles, such as the Russian Drozd and Arena, as well as the recent Israeli TROPHY Active Protection System.
The RPG-30 was designed to address the threat of active protection systems on tanks by using a false target to trick the APS. The RPG-30 closely resembles the RPG-27 in that it is a man-portable, disposable anti-tank rocket launcher with a single-shot capacity. However, unlike the RPG-27, a smaller-diameter precursor round sits in a side barrel tube alongside the main round in the main tube. This precursor round acts as a false target, tricking the target's active protection system into engaging it and giving the main round a clear path into the target while the APS is stuck in the 0.2–0.4-second delay it needs to start its next engagement. Recent German systems have reduced this reaction delay to mere milliseconds, cancelling the advantage.
The PG-30 is the main round of the RPG-30. The round is a 105 mm tandem shaped charge weighing 10.3 kg (22.7 lb), with a range of 200 meters and a stated penetration capability in excess of 600 mm (24 in) of rolled homogeneous armor (RHA) (after ERA), 1,500 mm of reinforced concrete, 2,000 mm of brick and 3,700 mm of soil. Reactive armor, including explosive reactive armor (ERA), can be defeated with multiple hits in the same place, such as by tandem-charge weapons, which fire two or more shaped charges in rapid succession.
An early method of disabling shaped charges, developed during World War II, was to apply thin skirt armor or wire mesh at a distance around the hull and turret of the tank. The skirt or mesh armor (cage armor) triggers the RPG on contact, and much of the jet that the shaped charge produces dissipates before reaching the vehicle's main armor. Well-sloped armor also gives some protection, because the oblique angle forces the shaped charge to penetrate a greater thickness of armor. The benefits of cage armor are still considered great on modern battlefields in the Middle East, and although similar effects can be obtained using spaced armor, either as part of the original design or as appliqué armor fitted later, cage armor is preferable due to its low weight and ease of repair.
Today, technologically advanced armies have implemented composite armors such as Chobham armor, which provide superior protection to steel. For added protection, vehicles may be retrofitted with reactive armor; on impact, reactive tiles explode or deform, disrupting the normal function of the shaped charge. Russian and Israeli vehicles also use active protection systems such as Drozd, Arena APS or Trophy. Such a system detects and shoots down incoming projectiles before they reach the vehicle. As in all arms races, these developments in armor countermeasures have led to the development of RPG rounds designed specifically to defeat them, with methods such as a tandem-charge warhead, which has two shaped charges, of which the first is meant to activate any reactive armor, and the second to penetrate the vehicle.
The United States Army developed a light anti-tank weapon (LAW) in the mid-1950s. By 1961, the M72 LAW was in use. It is a shoulder-fired, disposable rocket launcher with a HEAT warhead: a recoilless weapon that is easy to use and effective against armored vehicles. It was used during the Vietnam War and is still in use today. It uses a fin-stabilized rocket. In response to the threat of thicker armor, this weapon was replaced by the AT4 recoilless rifle, a larger, non-collapsible, but still single-shot weapon.
The United States Marine Corps uses a different launcher, which is reloadable – the Shoulder-Launched Multipurpose Assault Weapon (SMAW). Unlike the RPG, it is reloaded from the breech-end rather than the muzzle.
Specific types of RPGs (current, past and under development) include:
One of the first instances of the weapon's use by militants was on 13 January 1975 at Orly Airport in France, when Carlos the Jackal, together with another member of the PFLP, fired two Soviet RPG-7 grenades at an Israeli El Al airliner. Both missed the target, one hitting a Yugoslav Airlines DC-9 instead.
In Afghanistan, Mujahideen guerrillas used RPG-7s to destroy Soviet vehicles. To assure a kill, two to four RPG shooters would be assigned to each vehicle; an armored-vehicle hunter-killer team could have as many as 15 RPGs. In areas where vehicles were confined to a single path (a mountain road, swamps, snow, urban areas), RPG teams trapped convoys by destroying the first and last vehicles in line, preventing movement of the other vehicles. This tactic was especially effective in cities. Convoys learned to avoid approaches with overhangs and to send infantrymen forward in hazardous areas to detect the RPG teams.
Multiple shooters were also effective against heavy tanks with reactive armor: The first shot would be against the driver's viewing prisms. Following shots would be in pairs, one to set off the reactive armor, the second to penetrate the tank's armor. Favored weak spots were the top and rear of the turret.
Afghans sometimes used RPG-7s at extreme range, exploded by their 4.5-second self-destruct timer (which translates to roughly 950 m of flight), as a method of long-distance approach denial against infantry and reconnaissance. The most noteworthy use of RPGs against aircraft in Afghanistan occurred on 6 August 2011, when Taliban fighters shot down a U.S. CH-47 Chinook helicopter from a range of 220 meters, killing all 38 personnel on board, including members of SEAL Team 6. An earlier anti-aircraft kill by the Taliban occurred during Operation Red Wings on 28 June 2005, when a Chinook helicopter was destroyed by unguided rocket-propelled grenades.
In the period following the 2003 invasion of Iraq, the RPG became a favorite weapon of the insurgent forces fighting U.S. troops. Since most of the readily available RPG-7 rounds cannot penetrate M1 Abrams tank armor from almost any angle, it is primarily effective against soft-skinned or lightly armored vehicles, and infantry. Even if the RPG hit does not completely disable the tank or kill the crew, it can still damage external equipment, lowering the tank's effectiveness or forcing the crew to abandon and destroy it. Newer RPG-7 rounds are more capable, and in August 2006, an RPG-29 round penetrated the frontal ERA of a Challenger 2 tank during an engagement in al-Amarah, Iraq, and wounded several crew members.
RPGs were a main tool of the FMLN's guerrilla forces in the Salvadoran Civil War. For example, during the overrun of the San Miguel army base on June 19, 1986, FMLN sappers dressed only in black shorts, their faces blacked out with grease, sneaked through the barbed wire at night, avoided the searchlights, and made it to within firing range of the outer wall. Using RPGs to initiate the attack, they blew through the wall and killed a number of Salvadorean soldiers. They eliminated the outermost sentries and searchlights with the rockets, then reached the inner wall, which they also punched through. They were then able to create mayhem as their comrades attacked from the outside.
During the First (1994–1996) and Second Chechen Wars (1999–2009), Chechen rebels used RPGs to attack Russian tanks from basements and high rooftops. This tactic was effective because tank main guns could not be depressed or raised far enough to return fire; in addition, the armor on the very top and bottom of tanks is usually the weakest. Russian forces had to rely on artillery suppression, good crew gunners and infantry screens to prevent such attacks. Tank columns were eventually protected by attached self-propelled anti-aircraft guns (ZSU-23-4 Shilka, 9K22 Tunguska) used in the ground role to suppress and destroy Chechen ambushes. Chechen fighters formed independent "cells" that worked together to destroy a specific Russian armored target. Each cell carried small arms and some form of RPG (RPG-7V or RPG-18, for example). The small arms were used to button the tank up and keep any infantry occupied, while the RPG gunner struck at the tank. While doing so, other teams would attempt to fire at the target in order to overwhelm the Russians' ability to effectively counter the attack. To further increase the chance of success, the teams took up positions at different elevations where possible. Firing from the third and higher floors allowed good shots at the weakest armor (the top). When the Russians began moving in tanks fitted with explosive reactive armor (ERA), the Chechens had to adapt their tactics, because the RPGs they had access to were unlikely to destroy the tank.
Using RPGs as improvised anti-aircraft batteries has proved successful in Somalia, Afghanistan and Chechnya. Helicopters are typically ambushed as they land, take off or hover. In Afghanistan, the Mujahideen often modified RPGs for use against Soviet helicopters by adding a curved pipe to the rear of the launcher tube, which diverted the backblast and allowed the RPG to be fired upward at aircraft from a prone position. This made the operator less visible prior to firing and decreased the risk of injury from hot exhaust gases. The Mujahideen also exploited the 4.5-second timer on RPG rounds to make the weapon function as part of a flak battery, using multiple launchers to increase hit probabilities. At the time, Soviet helicopters countered the threat from RPGs at landing zones by first clearing them with anti-personnel saturation fire. The Soviets also varied the number of accompanying helicopters (two or three) in an effort to upset Afghan force estimations and preparation. In response, the Mujahideen prepared dug-in firing positions with top cover, and again Soviet forces altered their tactics, using air-dropped thermobaric fuel-air bombs on such landing zones. As U.S.-supplied Stinger surface-to-air missiles became available to them, the Afghans abandoned RPG attacks, as the guided missiles proved especially effective against unarmed Soviet transport helicopters such as the Mil Mi-17. In Somalia, both of the UH-60 Black Hawk helicopters lost by U.S. forces during the Battle of Mogadishu in 1993 were downed by RPG-7s.
Roy Jenkins
Roy Harris Jenkins, Baron Jenkins of Hillhead (11 November 1920 – 5 January 2003), was a British politician who served as President of the European Commission from 1977 to 1981. He was at various times a member of the Labour Party, the Social Democratic Party and the Liberal Democrats.
The son of a Welsh coal-miner and trade unionist, himself later a Labour MP, Jenkins was educated at the University of Oxford and served as an intelligence officer during the Second World War. Elected to Parliament as a Labour MP in 1948, he went on to serve as both Chancellor of the Exchequer and Home Secretary under the Labour Governments of Harold Wilson and James Callaghan. In his first period as Home Secretary he sought to build what he described as "a civilised society", with measures such as the effective abolition in Britain of both capital punishment and theatre censorship, the partial decriminalisation of homosexuality, relaxing of divorce law, suspension of birching and the liberalisation of abortion law. As Chancellor of the Exchequer he pursued a tight fiscal policy. He was elected Deputy Leader of the Labour Party in 1970, but resigned in 1972 because he supported entry to the European Communities, which the party opposed.
He chose to leave British politics in 1976, being appointed President of the European Commission the following year and serving until 1981; he was the first and only British holder of this office. He returned to British politics in 1981: dismayed by the Labour Party's leftward movement under Michael Foot, he was one of the "Gang of Four", the centrist Labour figures who formed the Social Democratic Party (SDP). In 1982, Jenkins won a by-election to return to Parliament, taking the seat from the Conservatives in a famous result. He was formally made Leader of the SDP ahead of the 1983 general election, which the party fought in partnership with the Liberals as the SDP–Liberal Alliance. However, disappointed with the SDP's performance, he resigned as leader.
In 1987 he was elected to succeed Harold Macmillan as Chancellor of the University of Oxford following the latter's death; he held this position until his own death sixteen years later. A few months after becoming Chancellor, he was defeated at the 1987 general election by the Labour candidate, George Galloway. Jenkins accepted a life peerage shortly afterwards and sat in the House of Lords as a Liberal Democrat. In the late 1990s he was an adviser to Prime Minister Tony Blair and chaired the Jenkins Commission on electoral reform. Jenkins died in 2003, aged 82.
In addition to his political career he was also a noted historian, biographer and writer. His "A Life at the Centre" (1991) is regarded as one of the best autobiographies of the later 20th century, which "will be read with pleasure long after most examples of the genre have been forgotten".
Born in Abersychan, Monmouthshire, in south-eastern Wales, as an only child, Roy Jenkins was the son of a National Union of Mineworkers official, Arthur Jenkins. His father was imprisoned during the 1926 General Strike for his alleged involvement in disturbances. Arthur Jenkins later became President of the South Wales Miners' Federation and Member of Parliament for Pontypool, Parliamentary Private Secretary to Clement Attlee, and briefly a minister in the 1945 Labour government. Roy Jenkins' mother, Hattie Harris, was the daughter of a steelworks foreman.
Jenkins was educated at Pentwyn Primary School, Abersychan County Grammar School, University College, Cardiff, and at Balliol College, Oxford, where he was twice defeated for the Presidency of the Oxford Union but took First-Class Honours in Politics, Philosophy and Economics (PPE). His university colleagues included Tony Crosland, Denis Healey and Edward Heath, and he became friends with all three, although he was never particularly close to Healey. In his biography "A Well-Rounded Life", John Campbell detailed a romantic relationship between Jenkins and Crosland.
During the Second World War, Jenkins received his officer training at Alton Towers and was posted to the 55th West Somerset Yeomanry at West Lavington. Through the influence of his father, in April 1944 Jenkins was sent to Bletchley Park to work as a codebreaker.
Having failed to win Solihull in 1945, he was elected to the House of Commons in a 1948 by-election as the Member of Parliament for Southwark Central, becoming the "Baby of the House". His constituency was abolished in boundary changes for the 1950 general election, when he stood instead in the new Birmingham Stechford constituency. He won the seat, and represented the constituency until 1977.
In 1947 he edited a collection of Clement Attlee's speeches, published under the title "Purpose and Policy". Attlee then granted Jenkins access to his private papers so that he could write his biography, which appeared in 1948 ("Mr Attlee: An Interim Biography"). The reviews were generally favourable, including George Orwell's in "Tribune".
In 1951 "Tribune" published his pamphlet "Fair Shares for the Rich". Here, Jenkins advocated the abolition of large private incomes by taxing them, graduating from 50 per cent for incomes between £20,000 and £30,000 to 95 per cent for incomes over £100,000. He also proposed further nationalisations and said: "Future nationalisations will be more concerned with equality than with planning, and this means that we can leave the monolithic public corporation behind us and look for more intimate forms of ownership and control". He later described this "almost Robespierrean" pamphlet as "the apogee of my excursion to the left".
Jenkins contributed an essay on 'Equality' to the 1952 collection "New Fabian Essays". In 1953 appeared "Pursuit of Progress", a work intended to counter Bevanism. Retreating from what he had demanded in "Fair Shares for the Rich", Jenkins now argued that the redistribution of wealth would occur over a generation. However, he still proposed further nationalisations: "It is quite impossible to advocate both the abolition of great inequalities of wealth and the acceptance of a one-quarter public sector and three-quarters private sector arrangement. A mixed economy there will undoubtedly be, certainly for many decades and perhaps permanently, but it will need to be mixed in very different proportions from this". He also opposed the Bevanites' neutralist foreign policy platform: "Neutrality is essentially a conservative policy, a policy of defeat, of announcing to the world that we have nothing to say to which the world will listen. ... Neutrality could never be acceptable to anyone who believes that he has a universal faith to preach". Jenkins argued that the Labour leadership needed to take on and defeat the neutralists and pacifists in the party; it would be better to risk a split in the party than face "the destruction, by schism, perhaps for a generation, of the whole progressive movement in the country".
Between 1951 and 1956 he wrote a weekly column for the Indian newspaper "The Current". Here he advocated progressive reforms such as equal pay, the decriminalisation of homosexuality, the liberalisation of the obscenity laws and the abolition of capital punishment. "Mr Balfour's Poodle", a short account of the House of Lords crisis of 1911 that culminated in the Parliament Act 1911, was published in 1954. Favourable reviewers included A. J. P. Taylor, Harold Nicolson, Leonard Woolf and Violet Bonham Carter. After a suggestion by Mark Bonham Carter, Jenkins then wrote a biography of the Victorian radical, Sir Charles Dilke, which was published in October 1958.
During the 1956 Suez Crisis, Jenkins denounced Anthony Eden's "squalid imperialist adventure" at a Labour rally in Birmingham Town Hall. Three years later he claimed that "Suez was a totally unsuccessful attempt to achieve unreasonable and undesirable objectives by methods which were at once reckless and immoral; and the consequences, as was well deserved, were humiliating and disastrous".
Jenkins praised Anthony Crosland's 1956 work "The Future of Socialism" as "the most important book on socialist theory" since Evan Durbin's "The Politics of Democratic Socialism" (1940). With much of the economy now nationalised, Jenkins argued, socialists should concentrate on eliminating the remaining pockets of poverty and on the removal of class barriers, as well as promoting libertarian social reforms. Jenkins was principal sponsor, in 1959, of the bill which became the liberalising Obscene Publications Act, responsible for establishing the "liable to deprave and corrupt" criterion as a basis for a prosecution of suspect material and for specifying literary merit as a possible defence.
In July 1959 Penguin published Jenkins' "The Labour Case", timed to anticipate the upcoming election. Jenkins argued that Britain's chief danger was that of "living sullenly in the past, of believing that the world has a duty to keep us in the station to which we are accustomed, and showing bitter resentment if it does not do so". He added: "Our neighbours in Europe are roughly our economic and military equals. We would do better to live gracefully with them than to waste our substance by trying unsuccessfully to keep up with the power giants of the modern world". Jenkins claimed that the Attlee government concentrated "too much towards the austerity of fair shares, and too little towards the incentives of free consumers' choice". Although he still believed in the elimination of poverty and more equality, Jenkins now argued that these aims could be achieved by economic growth. In the final chapter ('Is Britain Civilised?') Jenkins set out a list of necessary progressive social reforms: the abolition of the death penalty, decriminalisation of homosexuality, abolition of the Lord Chamberlain's powers of theatre censorship, liberalisation of the licensing and betting laws, liberalisation of the divorce laws, legalisation of abortion, decriminalisation of suicide and more liberal immigration laws. Jenkins concluded:
Let us be on the side of those who want people to be free to live their own lives, to make their own mistakes, and to decide, in an adult way and provided they do not infringe the rights of others, the code by which they wish to live; and on the side of experiment and brightness, of better buildings and better food, of better music (jazz as well as Bach) and better books, of fuller lives and greater freedom. In the long run these things will be more important than the most perfect of economic policies.
In the aftermath of Labour's 1959 defeat, Jenkins appeared on "Panorama" and argued that Labour should abandon further nationalisation, question its connection with the trade unions and not dismiss a closer association with the Liberal Party. In November he delivered a Fabian Society lecture in which he blamed Labour's defeat on the unpopularity of nationalisation and he repeated this in an article for "The Spectator". His "Spectator" article also called for Britain to accept its diminished place in the world, to grant colonial freedom, to spend more on public services and to promote the right of individuals to live their own lives free from the constraints of popular prejudices and state interference. Jenkins later called it a "good radical programme, although...not a socialist one".
In May 1960 Jenkins joined the Campaign for Democratic Socialism, a Gaitskellite pressure group designed to fight against left-wing domination of the Labour Party. In July 1960 Jenkins resigned from his frontbench role in order to be able to campaign freely for British membership of the Common Market. At the 1960 Labour Party conference in Scarborough, Jenkins advocated rewriting Clause IV of the party's constitution but he was booed. In November he wrote in "The Spectator" that "unless the Labour Party is determined to abdicate its role as a mass party and become nothing more than a narrow sectarian society, its paramount task is to represent the whole of the Leftward-thinking half of the country—and to offer the prospect of attracting enough marginal support to give that half some share of power".
During 1960–62 his main campaign was British membership of the Common Market, where he became Labour's leading advocate of entry. When Harold Macmillan initiated the first British application to join the Common Market in 1961, Jenkins became deputy chairman of the all-party Common Market Campaign and then chairman of the Labour Common Market Committee. At the 1961 Labour Party conference Jenkins spoke in favour of Britain's entry.
Since 1959 Jenkins had been working on a biography of the Liberal Prime Minister, H. H. Asquith. For Jenkins, Asquith ranked with Attlee as the embodiment of the moderate, liberal intelligence in politics that he most admired. Through Asquith's grandson, Mark Bonham Carter, Jenkins had access to Asquith's letters to his mistress, Venetia Stanley. Kenneth Rose, Michael Foot, Asa Briggs and John Grigg all favourably reviewed the book when it was published in October 1964. However, Violet Bonham Carter wrote a defence of her father in "The Times" against the few criticisms of Asquith in the book, and Robert Rhodes James wrote in "The Spectator" that "Asquith was surely a tougher, stronger, more acute man...than Mr. Jenkins would have us believe. The fascinating enigma of his complete decline is never really analysed, nor even understood. ... We required a Sutherland: but we have got an Annigoni". John Campbell claims that "for half a century it has remained unchallenged as the best biography and is rightly regarded as a classic".
Like Healey and Crosland, Jenkins had been a close friend of Hugh Gaitskell, and for them Gaitskell's death and the elevation of Harold Wilson to the Labour Party leadership were a setback. For Jenkins, Gaitskell would remain his political hero. After the 1964 general election Jenkins was appointed Minister of Aviation and was sworn of the Privy Council. While at Aviation he oversaw the high-profile cancellations of the BAC TSR-2 and Concorde projects (although the latter was later reversed after strong opposition from the French Government). In January 1965 Patrick Gordon Walker resigned as Foreign Secretary, and in the ensuing reshuffle Wilson offered Jenkins the Department for Education and Science; however, he declined it, preferring to stay at Aviation.
In the summer of 1965 Jenkins eagerly accepted an offer to replace Frank Soskice as Home Secretary. However Wilson, dismayed by a sudden bout of press speculation about the potential move, delayed Jenkins' appointment until December. Once Jenkins took office – the youngest Home Secretary since Churchill – he immediately set about reforming the operation and organisation of the Home Office. The Principal Private Secretary, Head of the Press and Publicity Department and Permanent Under-Secretary were all replaced. He also redesigned his office, famously replacing the board on which condemned prisoners were listed with a fridge.
After the 1966 general election, in which Labour won a comfortable majority, Jenkins pushed through a series of police reforms which reduced the number of separate forces from 117 to 49. "The Times" called it "the greatest upheaval in policing since the time of Peel". His visit to Chicago in September (to study their policing methods) convinced him of the need to introduce two-way radios to the police; whereas the Metropolitan Police possessed 25 radios in 1965, Jenkins increased this to 2,500, and provided similar numbers of radios to the rest of the country's police forces. Jenkins also provided the police with more car radios, which made the police more mobile but reduced the amount of time they spent patrolling the streets. His Criminal Justice Act 1967 introduced more stringent controls on the purchase of shotguns, outlawed last-minute alibis and introduced majority verdicts in juries in England and Wales. The Act was also designed to lower the prison population by the introduction of release under licence, easier bail, suspended sentences and earlier parole.
Immigration was a divisive and provocative issue during the late 1960s and on 23 May 1966 Jenkins delivered a speech on race relations, which is widely considered to be one of his best. Addressing a London meeting of the National Committee for Commonwealth Immigrants, he notably defined integration "not as a flattening process of assimilation but as equal opportunity, accompanied by cultural diversity, in an atmosphere of mutual tolerance".
By the end of 1966, Jenkins was the Cabinet's rising star; the "Guardian" called him the best Home Secretary of the century "and quite possibly the best since Peel", the "Sunday Times" called him Wilson's likeliest successor and the "New Statesman" labelled him "Labour's Crown Prince".
In a speech to the London Labour Conference in May 1967, Jenkins said his vision was of "a more civilised, more free and less hidebound society" and he further claimed that "to enlarge the area of individual choice, socially, politically and economically, not just for a few but for the whole community, is very much what democratic socialism is about". He gave strong personal support to David Steel's Private Member's Bill for the legalisation of abortion, which became the Abortion Act 1967, telling the Commons that "the existing law on abortion is uncertain and...harsh and archaic", adding that "the law is consistently flouted by those who have the means to do so. It is, therefore, very much a question of one law for the rich and one law for the poor". When the Bill looked likely to be dropped due to insufficient time, Jenkins helped ensure that it received enough parliamentary time to pass and he voted for it in every division.
Jenkins also supported Leo Abse's bill for the decriminalisation of homosexuality, which became the Sexual Offences Act 1967. Jenkins told the Commons: "It would be a mistake to think...that by what we are doing tonight we are giving a vote of confidence or congratulation to homosexuality. Those who suffer from this disability carry a great weight of loneliness, guilt and shame. The crucial question...is, should we add to those disadvantages the full rigour of the criminal law? By its overwhelming decisions, the House has given a fairly clear answer, and I hope that the Bill will now make rapid progress towards the Statute Book. It will be an important and civilising Measure".
Jenkins also abolished the use of flogging in prisons. In July 1967 Jenkins recommended to the Home Affairs Select Committee a bill to end the Lord Chamberlain's power to censor the theatre. This was passed as the Theatres Act 1968 under Jenkins' successor as Home Secretary, James Callaghan. Jenkins also announced that he would introduce legislation banning racial discrimination in employment, which was embodied in the Race Relations Act 1968 passed under Callaghan. In October 1967 Jenkins planned to introduce legislation that would enable him to keep out the 20,000 Kenyan Asians who held British passports (this was passed four months later under Callaghan as the Commonwealth Immigrants Act 1968, which was based on Jenkins' draft).
Jenkins is often seen as responsible for the most wide-ranging social reforms of the late 1960s, with popular historian Andrew Marr claiming "the greatest changes of the Labour years" were thanks to Jenkins. These reforms, which came earlier than in most other European countries, would not have happened when they did without Jenkins' support. In a speech in Abingdon in July 1969, Jenkins said that the "permissive society" had been allowed to become a dirty phrase: "A better phrase is the 'civilized society', based on the belief that different individuals will wish to make different decisions about their patterns of behaviour and that, provided these do not restrict the freedom of others, they should be allowed to do so within a framework of understanding and tolerance". Jenkins' words were immediately reported in the press as "The permissive society is the civilised society", which he later wrote "was not all that far from my meaning".
For some conservatives, such as Peter Hitchens, Jenkins' reforms remain objectionable. In his book "The Abolition of Britain", Hitchens accuses him of being a "cultural revolutionary" who takes a large part of the responsibility for the decline of "traditional values" in Britain. During the 1980s Margaret Thatcher and Norman Tebbit would blame Jenkins for family breakdowns, the decline of respect for authority and the decline of social responsibility. Jenkins replied by pointing out that Thatcher, with her large parliamentary majorities, never attempted to reverse his reforms.
From 1967 to 1970 Jenkins served as Chancellor of the Exchequer, replacing James Callaghan following the devaluation crisis of November 1967. Jenkins' ultimate goal as Chancellor was economic growth, which depended on restoring stability to sterling at its new value after devaluation. This could only be achieved by ensuring a surplus in the balance of payments, which had been in deficit for the previous five years. Therefore, Jenkins pursued deflation, including cuts in public expenditure and increases in taxation, in order to ensure that resources went into exports rather than domestic consumption. In January 1968 he warned the House of Commons of "two years of hard slog ahead".
He quickly gained a reputation as a particularly tough Chancellor with his 1968 budget increasing taxes by £923 million, more than twice the increase of any previous budget to date. Jenkins had warned the Cabinet that a second devaluation would occur in three months if his budget did not restore confidence in sterling. He restored prescription charges (which had been abolished when Labour returned to office in 1964) and postponed the raising of the school leaving age to 16 from 1971 to 1973. Housing and road building plans were also heavily cut, and he accelerated Britain's withdrawal from East of Suez. Jenkins ruled out an increase in income tax; instead he raised the duties on drinks and cigarettes (though not beer), increased purchase tax, petrol duty and road tax, imposed a 50 per cent rise in Selective Employment Tax and levied a one-off Special Charge on personal incomes. He also paid for an increase in family allowances by cutting child tax allowances.
Despite Edward Heath's claim that it was a "hard, cold budget, without any glimmer of warmth", Jenkins' first budget was broadly well received, with Harold Wilson remarking that "it was widely acclaimed as a speech of surpassing quality and elegance" and Barbara Castle that it "took everyone's breath away". Richard Crossman said it was "genuinely based on socialist principles, fair in the fullest sense by really helping people at the bottom of the scale and by really taxing the wealthy". In his budget broadcast on 19 March, Jenkins said that Britain had been living in a "fool's paradise" for years and that it was "importing too much, exporting too little and paying ourselves too much", with a lower standard of living than France or West Germany.
Jenkins' supporters in the Parliamentary Labour Party became known as the "Jenkinsites". These were usually younger, middle-class and university-educated ex-Gaitskellites such as Bill Rodgers, David Owen, Roy Hattersley, Dick Taverne, John Mackintosh and David Marquand. In May–July 1968 some of his supporters, led by Patrick Gordon Walker and Christopher Mayhew, plotted to replace Wilson with Jenkins as Labour leader but he declined to challenge Wilson. A year later his supporters again attempted to persuade Jenkins to challenge Wilson for the party leadership but he again declined. He later wrote in his memoirs that the 1968 plot was "for me...the equivalent of the same season of 1953 for Rab Butler. Having faltered for want of single-minded ruthlessness when there was no alternative to himself, he then settled down to a career punctuated by increasingly wide misses of the premiership. People who effectively seize the prime ministership – Lloyd George, Macmillan, Mrs Thatcher – do not let such moments slip".
In April 1968, with Britain's reserves declining by approximately £500 million every quarter, Jenkins went to Washington to obtain a $1,400 million loan from the International Monetary Fund. Following a further sterling crisis in November 1968 Jenkins was forced to raise taxes by a further £250 million. After this the currency markets slowly began to settle and his 1969 budget represented more of the same with a £340 million increase in taxation to further limit consumption.
By May 1969 Britain's current account position was in surplus, thanks to a growth in exports, a drop in overall consumption and, in part, the Inland Revenue correcting a previous underestimation in export figures. In July Jenkins was also able to announce that the size of Britain's foreign currency reserves had been increased by almost $1 billion since the beginning of the year. It was at this time that he presided over Britain's only excess of government revenue over expenditure in the period 1936–37 to 1987–88. Thanks in part to these successes there was a high expectation that the 1970 budget would be a more generous one. Jenkins, however, was cautious about the stability of Britain's recovery and decided to present a more muted and fiscally neutral budget. It is often argued that this, combined with a series of bad trade figures, contributed to the Conservative victory at the 1970 general election. Historians and economists have often praised Jenkins for presiding over the transformation in Britain's fiscal and current account positions towards the end of the 1960s. Andrew Marr, for example, described him as one of the 20th century's "most successful chancellors". Alec Cairncross considered Jenkins "the ablest of the four Chancellors I served".
Public expenditure as a proportion of GDP rose from 44 per cent in 1964 to around 50 per cent in 1970. Despite Jenkins' warnings about inflation, wage settlements in 1969–70 increased on average by 13 per cent, contributing to the high inflation of the early 1970s and negating most of Jenkins' efforts to obtain a balance of payments surplus.
After Labour unexpectedly lost power in 1970 Jenkins was appointed Shadow Chancellor of the Exchequer by Harold Wilson. Jenkins was subsequently elected deputy leader of the Labour Party in July 1970, defeating future Labour leader Michael Foot and former Leader of the Commons Fred Peart on the first ballot. At this time he seemed the natural successor to Harold Wilson, and many believed it was only a matter of time before he inherited the leadership of the party and the opportunity to become Prime Minister.
This changed completely, however, as Jenkins refused to accept the tide of anti-European feeling that became prevalent in the Labour Party in the early 1970s. After a special conference on the EEC held by the Labour Party on 17 July 1971, which Jenkins was not permitted to address, he delivered one of the most powerful speeches of his career. Jenkins told a meeting of the Parliamentary Labour Party on 19 July: "At conference the only alternative [to the EEC] we heard was 'socialism in one country'. That is always good for a cheer. Pull up the drawbridge and revolutionize the fortress. That's not a policy either: it's just a slogan, and it is one which becomes not merely unconvincing but hypocritical as well when it is dressed up as our best contribution to international socialism". This reopened the old Bevanite–Gaitskellite divide in the Party; Wilson told Tony Benn the day after Jenkins' speech that he was determined to smash the Campaign for Democratic Socialism.
At the 1971 Labour Party conference in Brighton, the NEC's motion to reject the "Tory terms" of entry into the EEC was carried by a large majority. Jenkins told a fringe meeting that this would have no effect on his continued support for Britain's entry. Benn said Jenkins was "the figure dominating this Conference; there is no question about it". On 28 October 1971, he led 69 Labour MPs through the division lobby in support of the Heath government's motion to take Britain into the EEC. In so doing they were defying a three-line whip and a five-to-one vote at the Labour Party annual conference. Jenkins later wrote: "I was convinced that it was one of the decisive votes of the century, and had no intention of spending the rest of my life answering the question of what did I do in the great division by saying 'I abstained'. I saw it in the context of the first Reform Bill, the repeal of the Corn Laws, Gladstone's Home Rule Bills, the Lloyd George Budget and the Parliament Bill, the Munich Agreement and the May 1940 votes".
Jenkins' action gave the European cause a legitimacy that would have otherwise been absent had the issue been considered solely as a party political matter. However, he was now regarded by the left as a "traitor". James Margach wrote in the "Sunday Times": "The unconcealed objective of the Left now is either to humiliate Roy Jenkins and his allies into submission – or drive them from the party". At this stage, however, Jenkins would not fully abandon his position as a political insider, and chose to stand again for deputy leader, an act his colleague David Marquand claimed he later came to regret. Jenkins promised not to vote with the government again and he narrowly defeated Michael Foot on a second ballot.
In accordance with the party whip, Jenkins voted against the European Communities Bill 55 times. However, he resigned both the deputy leadership and his shadow cabinet position in April 1972, after the party committed itself to holding a referendum on Britain's membership of the EEC. This led to some former admirers, including Roy Hattersley, choosing to distance themselves from Jenkins. Hattersley later claimed that Jenkins' resignation was "the moment when the old Labour coalition began to collapse and the eventual formation of a new centre party became inevitable". In his resignation letter to Wilson, Jenkins said that if there were a referendum "the Opposition would form a temporary coalition of those who, whatever their political views, were against the proposed action. By this means we would have forged a more powerful continuing weapon against progressive legislation than anything we have known in this country since the curbing of the absolute powers of the old House of Lords".
Jenkins' lavish lifestyle (Wilson once described him as "more a socialite than a socialist") had already alienated much of the Labour Party from him. Wilson also accused him of having an affair with the socialite Ann Fleming, an accusation that was in fact true.
In May 1972 he collected the Charlemagne Prize, which he had been awarded for promoting European unity. In September an ORC opinion poll found that there was considerable public support for an alliance between the 'moderate' wing of the Labour Party and the Liberals; 35 per cent said they would vote for a Labour–Liberal alliance, 27 per cent for the Conservatives and 23.5 per cent for 'Socialist Labour'. "The Times" claimed that there were "twelve million Jenkinsites". During the spring and summer of 1972, Jenkins delivered a series of speeches designed to set out his leadership credentials. These were published in September under the title "What Matters Now", which sold well. In the book's postscript, Jenkins said that Labour should not be a narrow socialist party advocating unpopular left-wing policies but must aim to "represent the hopes and aspirations of the whole leftward thinking half of the country", adding that a "broad-based, international, radical, generous-minded party could quickly seize the imagination of a disillusioned and uninspired British public".
After Dick Taverne's victory in the 1973 Lincoln by-election, where he stood as "Democratic Labour" in opposition to the official Labour candidate, Jenkins gave a speech to the Oxford University Labour Club denouncing the idea of a new centre party. Jenkins was elected to the shadow cabinet in November 1973 as Shadow Home Secretary. During the February 1974 election, Jenkins rallied to Labour and his campaign was described by David Butler and Dennis Kavanagh as sounding "a note of civilised idealism". Jenkins was disappointed that the Liberal candidate in his constituency won 6,000 votes; he wrote in his memoirs that "I already regarded myself as such a closet Liberal that I naïvely thought they ought nearly all to have come to me".
Jenkins wrote a series of biographical essays that appeared in "The Times" during 1971–74 and which were published as "Nine Men of Power" in 1974. Jenkins chose Gaitskell, Ernest Bevin, Stafford Cripps, Adlai Stevenson II, Robert F. Kennedy, Joseph McCarthy, Lord Halifax, Léon Blum and John Maynard Keynes. In 1971 Jenkins delivered three lectures on foreign policy at Yale University, published a year later as "Afternoon on the Potomac?"
When Labour returned to power in early 1974, Jenkins was appointed Home Secretary for the second time. Earlier, he had been promised the treasury; however, Wilson later decided to appoint Denis Healey as Chancellor instead. Upon hearing from Bernard Donoughue that Wilson had reneged on his promise, Jenkins reacted angrily. Despite being on a public staircase, he is reported to have shouted "You tell Harold Wilson he must bloody well come to see me ...and if he doesn't watch out, I won't join his bloody government ... This is typical of the bloody awful way Harold Wilson does things!" The Jenkinsites were dismayed by Jenkins' refusal to insist upon the Chancellorship and began to look elsewhere for leadership, thus ending the Jenkinsites as a united group.
Jenkins served from 1974 to 1976. Whereas during his first period as Home Secretary in the 1960s the atmosphere had been optimistic and confident, the climate of the 1970s was much more fractious and disillusioned. After two Northern Irish sisters, Marian Price and Dolours Price, were imprisoned for 20 years for the 1973 Old Bailey bombing, they went on hunger strike in order to be transferred to a prison in Northern Ireland. In a television broadcast in June 1974, Jenkins announced that he would refuse to give in to their demands, although in March 1975 he discreetly transferred them to a Northern Irish prison.
He undermined his previous liberal credentials to some extent by pushing through the controversial Prevention of Terrorism Act in the aftermath of the Birmingham pub bombings of November 1974, which, among other things, extended the length of time suspects could be held in custody and instituted exclusion orders. Jenkins also resisted calls for the death penalty to be restored for terrorist murderers. On 4 December he told the Cabinet committee on Northern Ireland that "everything he heard made him more convinced that Northern Ireland had nothing to do with the rest of the UK". When reviewing Garret FitzGerald's memoirs in 1991, Jenkins proclaimed: "My natural prejudices, such as they are, are much more green than orange. I am a poor unionist, believing intuitively that even Paisley and Haughey are better at dealing with each other than the English are with either".
The Sex Discrimination Act 1975 (which legislated for gender equality and set up the Equal Opportunities Commission) and the Race Relations Act 1976 (which extended to private clubs the outlawing of racial discrimination and founded the Commission for Racial Equality) were two notable achievements during his second time as Home Secretary.
Jenkins opposed Michael Foot's attempts to grant pickets the right to stop lorries during strikes and he was dismayed by Anthony Crosland's decision to grant an amnesty to the 11 Labour councillors at Clay Cross who had been surcharged for refusing to increase council rents in accordance with the Conservatives' Housing Finance Act 1972. After two trade unionists, Ricky Tomlinson and Des Warren (known as the "Shrewsbury Two"), were imprisoned for intimidation and affray for their part in a strike, Jenkins refused to accede to demands from the labour movement that they should be released. This demonstrated Jenkins' increasing estrangement from much of the labour movement and for a time he was heckled in public by people chanting "Free the Two". Jenkins also unsuccessfully tried to persuade the Cabinet to adopt electoral reform in the form of proportional representation and to have the Official Secrets Act 1911 liberalised to facilitate more open government.
Although becoming increasingly disillusioned during this time by what he considered the party's drift to the left, he was the leading Labour figure in the EEC referendum of June 1975 (and was also president of the 'Yes' campaign). In September 1974 he had followed Shirley Williams in stating that he "could not stay in a Cabinet which had to carry out withdrawal" from the EEC. During the referendum campaign, Tony Benn claimed that 500,000 jobs had been lost due to Britain's membership; Jenkins replied on 27 May that "I find it increasingly difficult to take Mr Benn seriously as an economics minister". He added that Britain outside the EEC would enter "an old people's home for fading nations. ... I do not even think it would be a comfortable or agreeable old people's home. I do not much like the look of some of the prospective wardens". The two men debated Britain's membership together on "Panorama", which was chaired by David Dimbleby. According to David Butler and Uwe Kitzinger, "they achieved a decidedly more lucid and intricate level of discussion than is commonly seen on political television". Jenkins found it congenial to work with the centrists of all parties in the campaign and the 'Yes' campaign won by two to one.
After the referendum, Wilson demoted Benn to Energy Secretary and attempted to balance the downgrading of Benn with the dismissal of the right-wing minister Reg Prentice from the Department of Education, despite already promising Jenkins that he had no intention of sacking Prentice. Jenkins threatened to resign if Prentice was sacked, telling Wilson that he was "a squalid little man who was using squalid little arguments in order to explain why he was performing so much below the level of events". Wilson quickly backed down. In September Jenkins delivered a speech in Prentice's constituency of Newham to demonstrate solidarity with him after he was threatened with deselection by left-wingers in the constituency party. Jenkins was heckled by both far-left and far-right demonstrators and he was hit in the chest by a flour bomb thrown by a member of the National Front. Jenkins warned that if Prentice was deselected "it is not just the local party that is undermining its own foundations by ignoring the beliefs and feelings of ordinary people, the whole legitimate Labour Party, left as well as right, is crippled if extremists have their way". He added that if "tolerance is shattered formidable consequences will follow. Labour MPs will either have to become creatures of cowardice, concealing their views, trimming their sails, accepting orders, stilling their consciences, or they will all have to be men far far to the left of those whose votes they seek. Either would make a mockery of parliamentary democracy".
In January 1976 he further distanced himself from the left with a speech in Anglesey, where he repudiated ever-higher public spending: "I do not think you can push public expenditure significantly above 60 per cent [of GNP] and maintain the values of a plural society with adequate freedom of choice. We are here close to one of the frontiers of social democracy". A former supporter, Roy Hattersley, distanced himself from Jenkins after this speech.
In May 1976 he told the Police Federation conference to "be prepared first to look at the evidence and to recognize how little the widespread use of prison reduces our crime or deals effectively with many of the individuals concerned". He also responded to the Federation's proposals on law and order: "I respect your right to put them to me. You will no doubt respect my right to tell you that I do not think all the points in sum amount to a basis for a rational penal policy".
When Wilson suddenly resigned as Prime Minister in March 1976, Jenkins was one of six candidates for the leadership of the Labour Party but came third in the first ballot, behind Callaghan and Michael Foot. Realising that his vote was lower than expected, and sensing that the parliamentary party was in no mood to overlook his actions five years before, he immediately withdrew from the contest. On issues such as the EEC, trade union reform and economic policy he had proclaimed views opposite to those held by the majority of Labour Party activists, and his libertarian social views were at variance with the majority of Labour voters. A famous story alleged that when one of Jenkins' supporters canvassed a group of miners' MPs in the Commons' tea-room, he was told: "Nay, lad, we're all Labour here".
Jenkins had wanted to become Foreign Secretary, but Foot warned Callaghan that the party would not accept the pro-European Jenkins as Foreign Secretary. Callaghan instead offered Jenkins the Treasury in six months' time (when it would be possible to move Denis Healey to the Foreign Office). Jenkins turned the offer down. Jenkins then accepted an appointment as President of the European Commission (succeeding François-Xavier Ortoli) after Callaghan appointed Anthony Crosland to the Foreign Office.
In an interview with "The Times" in January 1977, Jenkins said that: "My wish is to build an effective united Europe. ... I want to move towards a more effectively organized Europe politically and economically and as far as I am concerned I want to go faster, not slower". The main development overseen by the Jenkins Commission was the development of the Economic and Monetary Union of the European Union from 1977, which began in 1979 as the European Monetary System, a forerunner of the Single Currency or Euro. His biographer calls Jenkins "the godfather of the euro" and claims that among his successors only Jacques Delors has made more impact.
In speech in Florence in October 1977, Jenkins argued that monetary union would facilitate "a more efficient and developed rationalisation of industry and commerce than is possible under a Customs Union alone". He added that "a major new international currency" would form "a joint and alternative pillar of the world monetary system" which would lead to greater international stability. Monetary union would also combat inflation by controlling the money supply. Jenkins conceded that this would involve the diminution of national sovereignty but he pointed out that "governments which do not discipline themselves already find themselves accepting very sharp surveillance" from the IMF. Monetary union would also promote employment and diminish regional differences. Jenkins ended the speech by quoting Jean Monnet's statement that politics was "not only the art of the possible, but...the art of making possible tomorrow what may seem impossible today".
President Jenkins was the first President to attend a G8 summit on behalf of the Community. He received an Honorary Degree (Doctor of Laws) from the University of Bath in 1978.
In October 1978 "Tribune" reported (falsely) that Jenkins and his wife had not paid their Labour Party subscription for several years. After this was repeated in the national press, Jenkins drafted his wife's letter to "The Times" refuting the allegation. Jenkins blamed the story on a "malicious Trot in the North Kensington Labour Party". Jenkins was disillusioned with the Labour Party and he was almost certain that he could not stand again as a Labour candidate; in January 1979 he told Shirley Williams that the "big mistake we had made was not to go and support Dick Taverne in 1973; everything had got worse since then".
He did not vote in the 1979 election. After the Conservatives won the election Margaret Thatcher contemplated appointing Jenkins Chancellor of the Exchequer on the strength of his success at cutting public expenditure when he was Chancellor. However, his friend Woodrow Wyatt claimed that Jenkins "had other and fresh fish to fry".
The Director-General of the BBC, Ian Trethowan, invited Jenkins to deliver the Richard Dimbleby Lecture for 1979, which he did on 22 November. The title Jenkins gave to his lecture, "Home Thoughts from Abroad", derived from a Robert Browning poem. He delivered it at the Royal Society of Arts and it was broadcast live on television. Jenkins analysed the decline of the two-party system since 1951 and criticised the excessive partisanship of British politics, which he claimed alienated the bulk of voters, who were more centrist. He advocated proportional representation and the acceptance of "the broad line of division between the public and private sectors", a middle way between Thatcherism and Bennism. Jenkins said that the private sector should be encouraged without too much interference to create as much wealth as possible "but use the wealth so created both to give a return for enterprise and to spread the benefits throughout society in a way that avoids the disfigurements of poverty, gives a full priority to public education and health services, and encourages co-operation and not conflict in industry and throughout society". He then reiterated his long-standing commitment to libertarianism:
You also make sure that the state knows its place...in relation to the citizen. You are in favour of the right of dissent and the liberty of private conduct. You are against unnecessary centralization and bureaucracy. You want to devolve decision-making wherever you sensibly can. ... You want the nation to be self-confident and outward-looking, rather than insular, xenophobic and suspicious. You want the class system to fade without being replaced either by an aggressive and intolerant proletarianism or by the dominance of the brash and selfish values of a 'get rich quick' society. ... These are some of the objectives which I believe could be assisted by a strengthening of the radical centre.
"The Listener" reprinted the text along with assessments by Enoch Powell, Paul Johnson, Jack Jones, J. A. G. Griffith, Bernard Crick, Neil Kinnock and Jo Grimond. They were all critical; Kinnock thought him misguided as Britain had already suffered from centrist rule for thirty years and Grimond complained that Jenkins' clarion call had come 20 years too late.
Jenkins' last year as President of the Commission was dominated by Margaret Thatcher's fight for a rebate on Britain's contribution to the EEC budget. He believed that the quarrel was unnecessary and regretted that it soured Britain's relationship with the Community for years. In November 1980 Jenkins delivered the Winston Churchill memorial lecture in Luxembourg, where he proposed a solution to the British budgetary question. The proportion of the Community's budget spent on agriculture should be reduced by extending Community spending into new areas where Britain would receive more benefit, such as regional spending. The size of the Community's budget would, in his scheme, be tripled by transferring from the nation states to the Community competence over social and industrial policy.
After his Dimbleby Lecture, Jenkins increasingly favoured the formation of a new social democratic party. He publicly aired these views in a speech to the Parliamentary Press Gallery in June 1980, where he repeated his criticisms of the two-party system and attacked Labour's move to the left. At the previous month's Wembley conference, Labour had adopted a programme which included non-cooperation with the EEC and "a near neutralist and unilateralist" defence policy that would, Jenkins argued, render meaningless Britain's NATO membership. Labour's proposals for further nationalisation and anti-private enterprise policies, Jenkins claimed, were more extreme than in any other democratic country and it was not "by any stretch of the imagination a social democratic programme". He added that a new party could reshape politics and lead to the "rapid revival of liberal social democratic Britain".
The Labour Party conference at Blackpool in September 1980 adopted a unilateralist defence policy, withdrawal from the EEC and further nationalisation, along with Tony Benn's demands for the mandatory reselection of MPs and an electoral college to elect the party leader. In November Labour MPs elected the left-winger Michael Foot over the right-wing Denis Healey and in January 1981 Labour's Wembley conference decided that the electoral college that would elect the leader would give the trade unions 40 per cent of the vote, with MPs and constituency parties 30 per cent each. Jenkins then joined David Owen, Bill Rodgers and Shirley Williams (known as the "Gang of Four") in issuing the Limehouse Declaration. This called for the "realignment of British politics". They then formed the Social Democratic Party (SDP) on 26 March.
Jenkins delivered a series of speeches setting out the SDP's alternative to Thatcherism and Bennism and argued that the solution to Britain's economic troubles lay in the revenue from North Sea oil, which should be invested in public services. He attempted to re-enter Parliament at the Warrington by-election in July 1981 and campaigned on a six-point programme which he put forward as a Keynesian alternative to Thatcherism and Labour's "siege economy", but Labour retained the seat with a small majority. Despite it being a defeat, the by-election demonstrated that the SDP was a serious force. Jenkins said after the count that it was the first parliamentary election that he had lost but it was "by far the greatest victory in which I have ever participated".
At the SDP's first annual conference in October 1981, Jenkins called for "an end to the futile frontier war between public and private sectors" and proposed an "inflation tax" on excessive pay rises that would restrain spiralling wages and prices. After achieving this, an SDP government would be able to embark on economic expansion to reduce unemployment.
In March 1982 he fought the Glasgow Hillhead by-election, which had previously been a Conservative-held seat. Polls at the beginning of the campaign put Jenkins in third place but after a series of ten well-attended public meetings which Jenkins addressed, the tide began to turn in Jenkins' favour and he was elected with a majority of just over 2000 on a swing of 19 per cent. Jenkins' first intervention in the House of Commons following his election, on 31 March, was seen as a disappointment. The Conservative MP Alan Clark wrote in his diary:
Jenkins, with excessive and almost unbearable gravitas, asked three very heavy statesman-like non-party-political questions of the PM. I suppose he is very formidable, but he was so portentous and long-winded that he started to lose the sympathy of the House about half way through and the barracking resumed. The Lady replied quite brightly and freshly, as if she did not particularly know who he was, or care.
Whereas earlier in his career Jenkins had excelled in the traditional set-piece debates from the dispatch box, the focus of parliamentary reporting had now moved to the point-scoring of Prime Minister's Questions, which he struggled with. Seated in the traditional place for third parties in the Commons (the second or third row below the gangway), Jenkins was situated near (and shared the same microphone with) Labour's "awkward squad" that included Dennis Skinner and Bob Cryer, who regularly heckled abuse ("Roy, your flies are undone").
Seven days after Jenkins' by-election victory Argentina invaded the Falklands, and the subsequent Falklands War transformed British politics, substantially increased public support for the Conservatives and ended any chance that Jenkins' election would reinvigorate the SDP's support. In the SDP leadership election, Jenkins was elected with 56.44 per cent of the vote, with David Owen coming second. During the 1983 election campaign his position as the prime minister-designate for the SDP-Liberal Alliance was questioned by his close colleagues, as his campaign style was now regarded as ineffective; the Liberal leader David Steel was considered to have a greater rapport with the electorate.
After the general election Owen succeeded him unopposed. Jenkins was disappointed with Owen's move to the right, and his acceptance and backing of some of Thatcher's policies. At heart, Jenkins remained an unrepentant Keynesian. In his July 1984 Tawney Lecture, Jenkins said that the "whole spirit and outlook" of the SDP "must be profoundly opposed to Thatcherism. It could not go along with the fatalism of the Government's acceptance of massive unemployment". He also delivered a series of speeches in the Commons attacking the Thatcherite policies of the Chancellor, Nigel Lawson. Jenkins called for more government intervention to support industry and for North Sea oil revenues to be channelled into a major programme of rebuilding Britain's infrastructure and into educating a skilled workforce. He also attacked the Thatcher government for failing to join the European Exchange Rate Mechanism.
In 1985 he wrote to "The Times" to advocate the closing down of the political surveillance role of MI5. During the controversy surrounding Peter Wright's "Spycatcher", in which he alleged that Harold Wilson had been a Soviet spy, Jenkins rubbished the allegation and reiterated his call for the end of MI5's powers of political surveillance.
In 1986 he won "The Spectator"'s Parliamentarian of the Year award. He continued to serve as SDP Member of Parliament for Glasgow Hillhead until his defeat at the 1987 general election by the Labour candidate George Galloway, after boundary changes in 1983 had changed the character of the constituency.
His biography of Harry S. Truman appeared in 1986, and his biography of Stanley Baldwin was published the following year.
From 1987, Jenkins remained in politics as a member of the House of Lords as a life peer with the title Baron Jenkins of Hillhead, of Pontypool in the County of Gwent. Also in 1987, Jenkins was elected Chancellor of the University of Oxford. He was leader of the Liberal Democrats in the Lords from 1988 until 1997.
In 1988 he fought and won an amendment to the Education Reform Act 1988, guaranteeing academic freedom of speech in further and higher education establishments. This affords and protects the right of students and academics to "question and test received wisdom" and has been incorporated into the statutes or articles and instruments of governance of all universities and colleges in Britain.
In 1991 his memoirs, "A Life at the Centre", were published by Macmillan, who paid Jenkins a £130,000 advance. He was magnanimous to most of those colleagues with whom he had clashed in the past, except for David Owen, whom he blamed for destroying the idealism and cohesion of the SDP. In the last chapter ('Establishment Whig or Persistent Radical?') he reaffirmed his radicalism, placing himself "somewhat to the left of James Callaghan, maybe Denis Healey and certainly of David Owen". He also proclaimed his political credo:
My broad position remains firmly libertarian, sceptical of official cover-ups and uncompromisingly internationalist, believing sovereignty to be an almost total illusion in the modern world, although both expecting and welcoming the continuance of strong differences in national traditions and behaviour. I distrust the deification of the enterprise culture. I think there are more limitations to the wisdom of the market than were dreamt of in Mrs Thatcher's philosophy. I believe that levels of taxation on the prosperous, having been too high for many years (including my own period at the Treasury), are now too low for the provision of decent public services. And I think the privatisation of near monopolies is about as irrelevant as (and sometimes worse than) were the Labour Party's proposals for further nationalisation in the 1970s and early 1980s.
"A Life at the Centre" was generally favourably reviewed: in the "Times Literary Supplement" John Grigg said it was a "marvellous account of high politics by a participant writing with honesty, irony and sustained narrative verve". In "The Spectator" Anthony Quinton remarked that Jenkins was "not afraid to praise himself and earns the right to do so by unfudged self-criticism". However, there were critical voices: John Smith in "The Scotsman" charged that Jenkins never had any loyalty to the Labour Party and was an ambitious careerist intent only on furthering his career. John Campbell claims that "A Life at the Centre" is now generally recognised as one of the best political memoirs. David Cannadine ranked it alongside Duff Cooper's "Old Men Forget", R. A. Butler's "The Art of the Possible" and Denis Healey's "The Time of My Life" as one of the four best political memoirs of the post-war period.
In 1993, he was appointed to the Order of Merit. Also that year, his "Portraits and Miniatures" was published. The main body of the book is a set of six biographical essays (Rab Butler, Aneurin Bevan, Iain Macleod, Dean Acheson, Konrad Adenauer, Charles de Gaulle), along with lectures, articles and book reviews.
A television documentary about Jenkins was made by Michael Cockerell, titled "Roy Jenkins: A Very Social Democrat", and broadcast on 26 May 1996. Although an admiring portrait overall, Cockerell was frank about Jenkins' affairs and both Jenkins and his wife believed that Cockerell had betrayed their hospitality.
Jenkins hailed Tony Blair's election as Labour Party leader in July 1994 as "the most exciting Labour choice since the election of Hugh Gaitskell". He argued that Blair should stick "to a constructive line on Europe, in favour of sensible constitutional innovation...and in favour of friendly relations with the Liberal Democrats". He added that he hoped Blair would not move Labour further to the right: "Good work has been done in freeing it from nationalisation and other policies. But the market cannot solve everything and it would be a pity to embrace the stale dogmas of Thatcherism just when their limitations are becoming obvious".
Jenkins and Blair had been in touch since the latter's time as Shadow Home Secretary, when he admired Jenkins' reforming tenure at the Home Office. Jenkins told Paddy Ashdown in October 1995: "I think Tony treats me as a sort of father figure in politics. He comes to me a lot for advice, particularly about how to construct a Government". Jenkins tried to persuade Blair that the division in the centre-left vote between the Labour and Liberal parties had enabled the Conservatives to dominate the 20th century, whereas if the two left-wing parties entered into an electoral pact and adopted proportional representation, they could dominate the 21st century. Jenkins was an influence on the thinking of New Labour and both Peter Mandelson and Roger Liddle in their 1996 work "The Blair Revolution" and Philip Gould in his "Unfinished Revolution" recognised Jenkins' influence.
Before the 1997 election, Blair had promised an enquiry into electoral reform. In December 1997, Jenkins was appointed chair of a Government-appointed Independent Commission on the Voting System, which became known as the "Jenkins Commission", to consider alternative voting systems for the UK. The Jenkins Commission reported in favour of a new uniquely British mixed-member proportional system called "Alternative vote top-up" or "limited AMS" in October 1998, although no action was taken on this recommendation. Blair told Ashdown that Jenkins' recommendations would not pass the Cabinet.
British membership of the European single currency, Jenkins believed, was the supreme test of Blair's statesmanship. However, he was disappointed with Blair's timidity in taking on the Eurosceptic tabloid press. He told Blair in October 1997: "You have to choose between leading Europe or having Murdoch on your side. You can have one but not both". Jenkins was also critical of New Labour's authoritarianism, such as the watering down of the Freedom of Information Act 2000 and their intention to ban fox hunting. By the end of his life Jenkins believed that Blair had wasted his enormous parliamentary majority and would not be recorded in history as a great Prime Minister; he ranked him between Harold Wilson and Stanley Baldwin.
After Gordon Brown attacked Oxford University for indulging in "old school tie" prejudices because it rejected a state-educated pupil, Laura Spence, Jenkins told the House of Lords in June 2000 that "Brown's diatribe was born of prejudice out of ignorance. Nearly every fact he adduced was false". Jenkins voted for the equalisation of the homosexual age of consent and for repealing Section 28.
Jenkins wrote 19 books, including a biography of Gladstone (1995), which won the 1995 Whitbread Award for Biography, and a much-acclaimed biography of Winston Churchill (2001). His then-designated official biographer, Andrew Adonis, was to have finished the Churchill biography had Jenkins not survived the heart surgery he underwent towards the end of its writing. The popular historian Paul Johnson called it the best one-volume biography on its subject.
Jenkins underwent heart surgery in the form of a heart valve replacement on 12 October 2000 and postponed his 80th birthday celebrations while recovering, holding a celebratory party on 7 March 2001 instead. He died on 5 January 2003, after suffering a heart attack at his home at East Hendred, in Oxfordshire. His last words, to his wife, were, "Two eggs, please, lightly poached". At the time of his death Jenkins was starting work on a biography of US President Franklin D. Roosevelt.
After his death, Blair paid tribute to "one of the most remarkable people ever to grace British politics", who had "intellect, vision and an integrity that saw him hold firm to his beliefs of moderate social democracy, liberal reform and the cause of Europe throughout his life. He was a friend and support to me". James Callaghan and Edward Heath also paid tribute and Tony Benn said that as "a founder of the SDP he was probably the grandfather of New Labour". However, he was strongly criticised by others including Denis Healey, who condemned the SDP split as a "disaster" for the Labour Party which prolonged their time in opposition and allowed the Tories to have an unbroken run of 18 years in government.
The Professor of Government at Oxford University, Vernon Bogdanor, provided an assessment in "The Guardian":
Roy Jenkins was both radical and contemporary; and this made him the most influential exponent of the progressive creed in politics in postwar Britain. Moreover, the political creed for which he stood belongs as much to the future as to the past. For Jenkins was the prime mover in the creation of a form of social democracy which, being internationalist, is peculiarly suited to the age of globalisation and, being liberal, will prove to have more staying power than the statism of Lionel Jospin or the corporatist socialism of Gerhard Schröder. ... Roy Jenkins was the first leading politician to appreciate that a liberalised social democracy must be based on two tenets: what Peter Mandelson called an aspirational society (individuals must be allowed to regulate their personal lives without interference from the state); and that a post-imperial country like Britain could only be influential in the world as part of a wider grouping (the EU).
His alma mater, Cardiff University, honoured the memory of Roy Jenkins by naming one of its halls of residence Roy Jenkins Hall.
On 20 January 1945, Jenkins married Mary Jennifer (Jennifer) Morris (18 January 1921 – 2 February 2017). They were married for almost 58 years until his death, although he had "several affairs", including one with Jackie Kennedy's sister Lee Radziwill.
She was made a DBE for services to ancient and historical buildings. They had two sons, Charles and Edward, and a daughter, Cynthia.
Early in his life Jenkins had a relationship with Anthony Crosland. According to the Liberal Democrat leader Vince Cable, Jenkins was bisexual.
Rajasthan
Rajasthan (literally, "Land of Kings") is a state in northern India. Covering 10.4 percent of the total geographical area of India, it is the largest Indian state by area and the seventh largest by population. Rajasthan is located on the northwestern side of India, where it comprises most of the wide and inhospitable Thar Desert (also known as the "Great Indian Desert") and shares a border with the Pakistani provinces of Punjab to the northwest and Sindh to the west, along the Sutlej-Indus river valley. It is bordered by five other Indian states: Punjab to the north; Haryana and Uttar Pradesh to the northeast; Madhya Pradesh to the southeast; and Gujarat to the southwest. Its geographical location spans 23.3° to 30.12° North latitude and 69.30° to 78.17° East longitude, with the Tropic of Cancer passing through the southernmost tip of the state.
Major features include the ruins of the Indus Valley Civilisation at Kalibangan and Balathal, the Dilwara Temples, a Jain pilgrimage site at Rajasthan's only hill station, Mount Abu, in the ancient Aravalli mountain range and in eastern Rajasthan, the Keoladeo National Park of Bharatpur, a World Heritage Site known for its bird life. Rajasthan is also home to three national tiger reserves, the Ranthambore National Park in Sawai Madhopur, Sariska Tiger Reserve in Alwar and Mukundra Hills Tiger Reserve in Kota.
The state was formed on 30 March 1949 when Rajputana, the name adopted by the British Raj for its dependencies in the region, was merged into the Dominion of India. Its capital and largest city is Jaipur. Other important cities are Jodhpur, Kota, Bikaner, Ajmer, Bharatpur and Udaipur. The economy of Rajasthan is the ninth-largest state economy in India. Rajasthan ranks 29th among Indian states in human development index.
Rajasthan means "The Land of Kings". The oldest reference to Rajasthan is found in a stone inscription dated to 625 AD. The first printed mention of the name "Rajasthan" appears in the 1829 publication "Annals and Antiquities of Rajast'han or the Central and Western Rajpoot States of India", while the earliest known record of "Rajputana" as a name for the region is in George Thomas's 1800 memoir "Military Memoirs". John Keay, in his book "India: A History", stated that "Rajputana" was coined by the British in 1829; John Briggs, translating Ferishta's history of early Islamic India, used the phrase "Rajpoot (Rajput) princes" rather than "Indian princes".
Parts of what is now Rajasthan belonged to the Vedic Civilisation and the Indus Valley Civilisation. Kalibangan, in Hanumangarh district, was a major provincial capital of the Indus Valley Civilisation. Another archaeological excavation at the Balathal site in Udaipur district shows a settlement contemporary with the Harappan civilisation, dating back to 3000–1500 BC.
Stone Age tools dating from 5,000 to 200,000 years old were found in the Bundi and Bhilwara districts of the state.
The Matsya Kingdom of the Vedic civilisation of India is said to have roughly corresponded to the former state of Jaipur in Rajasthan, including the whole of Alwar with portions of Bharatpur. The capital of Matsya was at Viratanagar (modern Bairat), which is said to have been named after its founder king, Virata.
Bhargava identifies the two districts of Jhunjhunu and Sikar and parts of Jaipur district along with Haryana districts of Mahendragarh and Rewari as part of Vedic state of Brahmavarta. Bhargava also locates the present day Sahibi River as the Vedic Drishadwati River, which along with Saraswati River formed the borders of the Vedic state of Brahmavarta. Manu and Bhrigu narrated the Manusmriti to a congregation of seers in this area only. Ashrams of Vedic seers Bhrigu and his son Chayvan Rishi, for whom Chyawanprash was formulated, were near Dhosi Hill part of which lies in Dhosi village of Jhunjhunu district of Rajasthan and part lies in Mahendragarh district of Haryana.
The Western Kshatrapas (35–405 CE), the Saka rulers of the western part of India, were successors to the Indo-Scythians and were contemporaneous with the Kushans, who ruled the northern part of the Indian subcontinent. The Indo-Scythians invaded the area of Ujjain and established the Saka era (with their calendar), marking the beginning of the long-lived Saka Western Satraps state.
The Gurjar Pratihar Empire acted as a barrier for Arab invaders from the 8th to the 11th century. The chief accomplishment of the Gurjara-Pratihara Empire lies in its successful resistance to foreign invasions from the west, starting in the days of Junaid. Historian R. C. Majumdar says that this was openly acknowledged by the Arab writers. He further notes that historians of India have wondered at the slow progress of Muslim invaders in India, as compared with their rapid advance in other parts of the world. Now there seems little doubt that it was the power of the Gurjara Pratihara army that effectively barred the progress of the Arabs beyond the confines of Sindh, their only conquest for nearly 300 years.
Traditionally the Brahmins, Rajputs, Gurjars, Jats, Meenas, Bhils, Rajpurohits, Charans, Sunaars, Yadavs, Bishnois, Meghwals, Sermals, Rajput Malis (Sainis) and other tribes made a great contribution in building the state of Rajasthan. All these tribes suffered great difficulties in protecting their culture and the land. Millions of them were killed trying to protect their land.
Prithviraj Chauhan defeated the invading Muhammad Ghori in the First Battle of Tarain in 1191. In 1192 CE, Muhammad Ghori decisively defeated Prithviraj at the Second Battle of Tarain. After the defeat of Chauhan in 1192 CE, a part of Rajasthan came under Muslim rulers. The principal centers of their powers were Nagaur and Ajmer. Ranthambhore was also under their suzerainty. At the beginning of the 13th century, the most prominent and powerful state of Rajasthan was Mewar. The Rajputs resisted the Muslim incursions into India, although a number of Rajput kingdoms eventually became subservient to the Delhi Sultanate.
The Rajputs put up resistance to the Islamic invasions with their warfare and chivalry for centuries. The Ranas of Mewar led other kingdoms in resistance to outside rule. Rana Hammir Singh defeated the Tughlaq dynasty and recovered a large portion of Rajasthan. The indomitable Rana Kumbha defeated the Sultans of Malwa, Nagaur and Gujarat and made Mewar the most powerful Rajput kingdom in India. The ambitious Rana Sanga united the various Rajput clans and fought against the foreign powers in India. Rana Sanga defeated the Afghan Lodi Empire of Delhi and crushed the Turkic Sultanates of Malwa and Gujarat. Rana Sanga then tried to create an Indian empire but was defeated by the first Mughal Emperor, Babur, at Khanua. The defeat was due to betrayal by the Tomar king Silhadi of Raisen. After Rana Sanga's death there was no one who could check the rapid expansion of the Mughal Empire.
Hem Chandra Vikramaditya, the Hindu Emperor, was born in the village of Machheri in Alwar District in 1501. He won 22 battles against Afghans, from Punjab to Bengal, including in the states of Ajmer and Alwar in Rajasthan, and defeated Akbar's forces twice, first at Agra and then at the Battle of Delhi in 1556, before acceding to the throne of Delhi and establishing "Hindu Raj" in North India, albeit for a short duration, from Purana Quila in Delhi. Hem Chandra was killed on the battlefield at the Second Battle of Panipat, fighting against the Mughals, on 5 November 1556.
During Akbar's reign most of the Rajput kings accepted Mughal suzerainty, but the rulers of Mewar (Rana Udai Singh II) and Marwar (Rao Chandrasen Rathore) refused to have any form of alliance with the Mughals. To teach the Rajputs a lesson, Akbar attacked Udai Singh and killed the Rajput commander Jaimal of Chitor and the citizens of Mewar in large numbers. Akbar killed 20,000–25,000 unarmed citizens in Chittor on the grounds that they had actively helped in the resistance.
Maharana Pratap took an oath to avenge the citizens of Chittor, he fought the Mughal empire till his death and liberated most of Mewar apart from Chittor itself. Maharana Pratap soon became the most celebrated warrior of Rajasthan and became famous all over India for his sporadic warfare and noble actions. According to Satish Chandra, "Rana Pratap's defiance of the mighty Mughal empire, almost alone and unaided by the other Rajput states, constitutes a glorious saga of Rajput valor and the spirit of self-sacrifice for cherished principles. Rana Pratap's methods of sporadic warfare was later elaborated further by Malik Ambar, the Deccani general, and by Shivaji".
Rana Amar Singh I continued his ancestors' war against the Mughals under Jehangir, repelling the Mughal armies at Dewar. Later an expedition was again sent under the leadership of Prince Khurram, which caused much damage to life and property in Mewar. Many temples were destroyed, several villages were set on fire, and women and children were captured and tortured to force Amar Singh to surrender.
During Aurangzeb's rule Rana Raj Singh I and Veer Durgadas Rathore were chief among those who defied the intolerant emperor of Delhi. They took advantage of the Aravalli hills and caused heavy damage to the Mughal armies that were trying to occupy Rajasthan.
After Aurangzeb's death, Bahadur Shah I tried to subjugate Rajasthan like his ancestors, but his plan backfired when the three Rajput Rajas of Amber, Udaipur and Jodhpur mounted a joint resistance to the Mughals. The Rajputs first expelled the commandants of Jodhpur and Bayana and recovered Amer by a night attack. They next killed Sayyid Hussain Khan Barha, the commandant of Mewat, and many other Mughal officers. Bahadur Shah I, then in the Deccan, was forced to patch up a truce with the Rajput Rajas. The Jats, under Suraj Mal, overran the Mughal garrison at Agra and plundered the city, taking with them the two great silver doors of the entrance of the famous Taj Mahal, which were then melted down by Suraj Mal in 1763.
Over the years, the Mughals began to have internal disputes which greatly distracted them at times. The Mughal Empire continued to weaken, and with the decline of the Mughal Empire in the late 18th century, Rajputana came under the influence of the Marathas. The Maratha Empire, which had replaced the Mughal Empire as the overlord of the subcontinent, was finally replaced by the British Empire in 1818.
In the 19th century, the Rajput kingdoms were exhausted; they had been drained financially and in manpower by continuous wars and by the heavy tributes exacted by the Maratha Empire. To save their kingdoms from instability, rebellion and banditry, the Rajput kings concluded treaties with the British in the early 19th century, accepting British suzerainty and control over their external affairs in return for internal autonomy.
Modern Rajasthan includes most of Rajputana, which comprises the erstwhile nineteen princely states, two chiefships, and the British district of Ajmer-Merwara. Jaisalmer, Marwar (Jodhpur), Bikaner, Mewar (Chittorgarh), Alwar and Dhundhar (Jaipur) were some of the main Rajput princely states. Bharatpur and Dholpur were Jat princely states whereas Tonk was a princely state under Pathans.
The geographic features of Rajasthan are the Thar Desert and the Aravalli Range, which runs through the state from southwest to northeast, almost from one end to the other. Mount Abu lies at the southwestern end of the range, separated from the main ranges by the West Banas River, although a series of broken ridges continues into Haryana in the direction of Delhi, where it can be seen as outcrops in the form of Raisina Hill and the ridges farther north. About three-fifths of Rajasthan lies northwest of the Aravallis, leaving two-fifths to the east and south.
The Aravalli Range runs across the state from the southwestern peak Guru Shikhar (Mount Abu) to Khetri in the northeast. This range divides the state into two parts, with 60% of its area to the northwest of the range and 40% to the southeast. The northwest tract is sandy and unproductive with little water, but improves gradually from desert land in the far west and northwest to comparatively fertile and habitable land towards the east. The area includes the Thar Desert. The south-eastern area, higher in elevation (100 to 350 m above sea level) and more fertile, has a very diversified topography. In the south lies the hilly tract of Mewar. In the southeast, a large area within the districts of Kota and Bundi forms a tableland. To the northeast of these districts is a rugged region (badlands) following the line of the Chambal River. Farther north the country levels out; the flat plains of the northeastern Bharatpur district are part of an alluvial basin. Merta City lies at the geographical centre of Rajasthan.
The Aravalli Range and the lands to the east and southeast of the range are generally more fertile and better watered. This region is home to the Kathiawar-Gir dry deciduous forests ecoregion, with tropical dry broadleaf forests that include teak, "Acacia", and other trees. The hilly Vagad region, home to the cities of Dungarpur and Banswara, lies in southernmost Rajasthan, on the border with Gujarat and Madhya Pradesh. With the exception of Mount Abu, Vagad is the wettest region in Rajasthan, and the most heavily forested. North of Vagad lies the Mewar region, home to the cities of Udaipur and Chittaurgarh. The Hadoti region lies to the southeast, on the border with Madhya Pradesh. North of Hadoti and Mewar lies the Dhundhar region, home to the state capital of Jaipur. Mewat, the easternmost region of Rajasthan, borders Haryana and Uttar Pradesh. Eastern and southeastern Rajasthan is drained by the Banas and Chambal rivers, tributaries of the Ganges.
The northwestern portion of Rajasthan is generally sandy and dry. Most of this region is covered by the Thar Desert, which extends into adjoining portions of Pakistan. The Aravalli Range does not intercept the moisture-giving southwest monsoon winds off the Arabian Sea, as it lies in a direction parallel to that of the incoming monsoon winds, leaving the northwestern region in a rain shadow. The Thar Desert is thinly populated; the town of Jodhpur is the largest city in the desert and is known as the gateway to the Thar. The desert includes major districts such as Jodhpur, Jaisalmer, Barmer, Bikaner and Nagaur. The area is also important from a defence point of view: Jodhpur airbase is one of the largest airbases in India, and BSF and military bases are also situated here. A single civil airport is also situated in Jodhpur.
The Northwestern thorn scrub forests lie in a band around the Thar Desert, between the desert and the Aravallis. This region receives less than 400 mm of rain annually. Temperatures can sometimes exceed 54 °C in the summer months and drop below freezing point in the winter. The Godwar, Marwar, and Shekhawati regions lie in the thorn scrub forest zone, along with the city of Jodhpur. The Luni River and its tributaries are the major river system of Godwar and Marwar regions, draining the western slopes of the Aravallis and emptying southwest into the great Rann of Kutch wetland in neighboring Gujarat. This river is saline in the lower reaches and remains potable only up to Balotara in Barmer district. The Ghaggar River, which originates in Haryana, is an intermittent stream that disappears into the sands of the Thar Desert in the northern corner of the state and is seen as a remnant of the primitive Sarasvati river.
Though a large percentage of the total area is desert with little forest cover, Rajasthan has a rich and varied flora and fauna. The natural vegetation is classed as Northern Desert Thorn Forest (Champion 1936). These occur in small clumps scattered in a more or less open form. The density and size of patches increase from west to east following the increase in rainfall.
The Desert National Park in Jaisalmer is an excellent example of the ecosystem of the Thar Desert and its diverse fauna. Seashells and massive fossilised tree trunks in this park record the geological history of the desert. The region is a haven for migratory and resident birds of the desert, including many eagles, harriers, falcons, buzzards, kestrels and vultures. Short-toed snake eagles "(Circaetus gallicus)", tawny eagles "(Aquila rapax)", spotted eagles "(Aquila clanga)", laggar falcons "(Falco jugger)" and kestrels are the commonest of these.
The Ranthambore National Park located in Sawai Madhopur, one of the well known tiger reserves in the country, became a part of Project Tiger in 1973.
Dhosi Hill, located in the district of Jhunjhunu and known as 'Chyavan Rishi's Ashram', where 'Chyawanprash' was formulated for the first time, has unique and rare herbs growing on it.
The Sariska Tiger Reserve is located in Alwar district. The area was declared a national park in 1979.
Tal Chhapar Sanctuary is a very small sanctuary at Sujangarh in Churu district, in the Shekhawati region. This sanctuary is home to a large population of blackbuck. Desert foxes and the caracal, an apex predator also known as the "desert lynx", can also be spotted, along with birds such as the partridge, harriers, Eastern Imperial Eagle, Pale Harrier, Marsh Harrier, Short-toed Eagle, Tawny Eagle, Sparrow Hawk, Crested Lark, Demoiselle Crane, Skylarks, Green Bee-eater, Brown Dove, Black Ibis and sand grouse. The Great Indian bustard, known locally as the "godavan" and the state bird, has been classed as critically endangered since 2011.
Rajasthan is also noted for its national parks and wildlife sanctuaries. There are four national parks and wildlife sanctuaries: Keoladeo National Park of Bharatpur, Sariska Tiger Reserve of Alwar, Ranthambore National Park of Sawai Madhopur, and Desert National Park of Jaisalmer. A national-level institute, the Arid Forest Research Institute (AFRI), an autonomous institute of the Ministry of Forestry, is situated in Jodhpur and continuously works on desert flora and their conservation.
Ranthambore National Park is 7 km from Sawai Madhopur railway station. It is known worldwide for its tiger population and is considered by both wilderness lovers and photographers as one of the best places in India to spot tigers. At one point, due to poaching and negligence, tigers became extinct at Sariska, but five tigers have since been relocated there. Prominent among the wildlife sanctuaries are Mount Abu Sanctuary, Bhensrod Garh Sanctuary, Darrah Sanctuary, Jaisamand Sanctuary, Kumbhalgarh Wildlife Sanctuary, Jawahar Sagar Sanctuary, and Sita Mata Wildlife Sanctuary.
Major ISPs and telecom companies are present in Rajasthan, including Airtel, Data Infosys Limited, Reliance Limited, Idea, Jio, RAILTEL, Software Technology Parks of India (STPI), Tata Telecom and Vodafone. Data Infosys was the first Internet Service Provider (ISP) to bring the internet to Rajasthan, in April 1999, and OASIS was the first private mobile telephone company. Today the largest coverage area and clientele are with BSNL.
The politics of Rajasthan is dominated mainly by the Bharatiya Janata Party and the Indian National Congress.
Rajasthan is divided into 33 districts within seven divisions.
Rajasthan's economy is primarily agricultural and pastoral. Wheat and barley are cultivated over large areas, as are pulses, sugarcane, and oilseeds. Cotton and tobacco are the state's cash crops. Rajasthan is among the largest producers of edible oils in India and the second-largest producer of oilseeds. Rajasthan is also the biggest wool-producing state in India and the main opium producer and consumer. There are mainly two crop seasons. The water for irrigation comes from wells and tanks. The Indira Gandhi Canal irrigates northwestern Rajasthan.
The main industries are mineral-based, agriculture-based, and textile-based. Rajasthan is the second-largest producer of polyester fiber in India. Several prominent chemical and engineering companies are located in the city of Kota, in southern Rajasthan. Rajasthan is pre-eminent in quarrying and mining in India. The Taj Mahal was built from white marble mined at the town of Makrana. The state is the second-largest source of cement in India. It has rich salt deposits at Sambhar, copper mines at Khetri in Jhunjhunu district, and zinc mines at Dariba, Zawar and Rampura Agucha (opencast) near Bhilwara. Dimensional stone mining is also undertaken in Rajasthan. Jodhpur sandstone, locally termed "Chittar Patthar", is mostly used in monuments, important buildings, and residential buildings. Jodhpur leads in the handicraft and guar gum industries.
Rajasthan is also part of the Delhi–Mumbai Industrial Corridor (DMIC) and is set to benefit economically. The state gets 39% of the corridor, with the major districts of Jaipur, Alwar, Kota and Bhilwara benefiting.
Rajasthan also has reserves of low-silica limestone. The broom ("jhaadu") industry of Nokha is one of the state's leading small-scale industries, and Nagaur is a major spice-producing area.
Rajasthan is the largest producer of barley, mustard, pearl millet, coriander, fenugreek and guar in India. Rajasthan produces over 72% of the world's guar and 60% of India's barley. It is a major producer of aloe vera, amla and oranges, and a leading producer of maize and groundnut. The Rajasthan government has initiated olive cultivation with technical support from Israel; the current production of olives in the state is around 100–110 tonnes annually. Rajasthan is India's second-largest producer of milk and has 13,800 dairy co-operative societies.
Rajasthan is connected by many national highways, the most renowned being NH 8, India's first 4–8 lane highway. Rajasthan also has an inter-city surface transport system of both railways and a bus network. All chief cities are connected by air, rail, and road.
There are six main airports in Rajasthan – Jaipur International Airport, Jodhpur Airport, Udaipur Airport, and the recently opened Ajmer, Bikaner and Jaisalmer airports. These airports connect Rajasthan with major cities of India such as Delhi and Mumbai. There is another airport in Kota, but it is not yet open to commercial/civilian flights.
Rajasthan is connected with the main cities of India by rail. Jaipur, Kota, Ajmer, Jodhpur, Bharatpur, Bikaner, Alwar, Abu Road, and Udaipur are the principal railway stations in Rajasthan. The Kota section is the only electrified one and is served by three Rajdhani Expresses and trains to all major cities of India. There is also an international railway, the Thar Express, from Jodhpur (India) to Karachi (Pakistan); however, it is not open to foreign nationals.
Rajasthan is well connected to the main cities of the country, including Delhi, Ahmedabad and Indore, by state and national highways, and is served by the Rajasthan State Road Transport Corporation (RSRTC) and private operators. As of March 2017, according to Rajasthan's public works minister, 75 percent of the national highways then under construction were being built in Rajasthan.
According to the 2011 Census of India, Rajasthan has a total population of 68,548,437. The native Rajasthani people make up the majority of the state's population. The state is also populated by Sindhis, who came to Rajasthan from Sindh province (now in Pakistan) during the partition of India in 1947. As for religion, Rajasthan's residents are mainly Hindus, who account for 88.49% of the population. Muslims make up 9.07%, Sikhs 1.27% and Jains 0.91% of the population.
According to a report by "Moneycontrol.com" at the time of the 2018 Rajasthan Legislative Assembly election, the Scheduled Caste (SC) population was 18%, Scheduled Tribes (ST) 13.5%, Jats 12%, Gujjars 11%, Rajputs 13%, and Brahmins and Meenas 7% each. Brahmins, according to "Outlook", constituted 8% to 10% of the population of Rajasthan as per a 2003 report, but only 7% in a 2007 report. According to a 2007 "DNA India" report, 12.5% of the state are Brahmins.
Hindi is the official and the most widely spoken language in the state (90.97% of the population as per the 2001 census), followed by Bhili (4.60%), Punjabi (2.01%), and Urdu (1.17%).
Rajasthani is one of the main spoken languages in the state. Rajasthani and various Rajasthani dialects are counted under Hindi in the national census. In the 2001 census, standard Rajasthani had over 18 million speakers, as well as millions of other speakers of Rajasthani dialects, such as Marwari.
The languages taught under the three-language formula are:
First Language: Hindi
Second Language: English
Third Language: Sanskrit, Marwari, Sindhi, and Urdu
Rajasthan is culturally rich and has artistic and cultural traditions that reflect the ancient Indian way of life. There is a rich and varied folk culture from the villages, which is often depicted as a symbol of the state. Highly cultivated classical music and dance, with its own distinct style, is part of the cultural tradition of Rajasthan. The music has songs that depict day-to-day relationships and chores, often focused around fetching water from wells or ponds.
Rajasthani cooking was influenced by both the war-like lifestyles of its inhabitants and the availability of ingredients in this arid region. Food that could last for several days and could be eaten without heating was preferred. The scarcity of water and of fresh green vegetables has had its effect on the cooking. The region is known for its snacks like Bikaneri Bhujia. Other famous dishes include "bajre ki roti" (millet bread) and "lahsun ki chutney" (hot garlic paste), "mawa kachori", Mirchi Bada, Pyaaj Kachori and ghevar from Jodhpur, Alwar ka Mawa (milk cake), "kadhi kachori" from Ajmer, "malpua" from Pushkar, daal kachori (Kota kachori) from Kota, and rasgullas from Bikaner. Originating from the Marwar region of the state is the concept of the Marwari Bhojnalaya, or vegetarian restaurant, today found in many parts of India, offering the vegetarian food of the Marwari people.
Dal-Bati-Churma is very popular in Rajasthan. The traditional way to serve it is to first coarsely mash the Baati and then pour pure ghee on top of it. It is served with daal (lentils) and spicy garlic chutney, and also with besan (gram flour) kadhi. It is commonly served at all festivities, including religious occasions, wedding ceremonies, and birthday parties in Rajasthan.
The Ghoomar dance from Jaipur, Jodhpur, and Kalbelia of the Kalbelia tribe have gained international recognition. Folk music is a large part of the Rajasthani culture. The Manganiyar and Langa communities from Rajasthan are notable for their folk music. Kathputli, Bhopa, Chang, Teratali, Ghindr, Gair dance, Kachchhi Ghori, and Tejaji are examples of traditional Rajasthani culture. Folk songs are commonly ballads that relate heroic deeds and love stories; and religious or devotional songs known as bhajans and banis which are often accompanied by musical instruments like dholak, sitar, and sarangi are also sung.
Rajasthan is known for its traditional, colorful art. Block prints, tie-and-dye prints, Gota Patti, Bagru prints, Sanganer prints, and Zari embroidery are major export products from Rajasthan. Handicraft items like wooden furniture and crafts, carpets, and blue pottery are commonly found here. Shopping reflects the colorful culture; Rajasthani clothes feature a great deal of mirror work and embroidery. A traditional Rajasthani dress for women comprises an ankle-length skirt and a short top, known as a "chaniya choli". A piece of cloth is used to cover the head, both for protection from heat and maintenance of modesty. Rajasthani dresses are usually designed in bright colors like blue, yellow, and orange.
The main religious festivals are Deepawali, Holi, Gangaur, Teej, Gogaji, Shri Devnarayan Jayanti, Makar Sankranti and Janmashtami, as the main religion is Hinduism. Rajasthan's desert festival is held once a year during winter. Dressed in costumes, the people of the desert dance and sing ballads. There are fairs with snake charmers, puppeteers, acrobats, and folk performers. Camels play a role in this festival.
During recent years, Rajasthan has worked on improving education. The state government has been making sustained efforts to raise the education standard.
In recent decades the literacy rate of Rajasthan has increased significantly. In 1991, the state's literacy rate was only 38.55% (54.99% male and 20.44% female). In 2001, the literacy rate increased to 60.41% (75.70% male and 43.85% female). This was the highest leap in the percentage of literacy recorded in India (the rise in female literacy being 23%). At the Census 2011, Rajasthan had a literacy rate of 67.06% (80.51% male and 52.66% female). Although Rajasthan's literacy rate is below the national average of 74.04% and although its female literacy rate is the lowest in the country, the state has been praised for its efforts and achievements in raising literacy rates.
In rural areas of Rajasthan, the literacy rate is 76.16% for males and 45.8% for females. This was debated at all party levels when the governor of Rajasthan set a minimum educational qualification for the village panchayat elections.
Rajasthan attracted a total of 45.9 million domestic and 1.6 million foreign tourists in 2017, the tenth highest in terms of domestic visitors and fifth highest in foreign tourists. The tourism industry in Rajasthan is growing each year and is becoming one of the major sources of income for the state government. Rajasthan is home to attractions for domestic and foreign travellers, including the forts and palaces of Jaipur, the lakes of Udaipur, the temples of Rajsamand and Pali, the sand dunes of Jaisalmer and Bikaner, the havelis of Mandawa and Fatehpur, the wildlife of Sawai Madhopur, the scenic beauty of Mount Abu, the tribes of Dungarpur and Banswara, and the cattle fair of Pushkar.
Rajasthan is known for its customs and culture, majestic forts and palaces, folk dances and music, local festivals, local food, sand dunes, carved temples, and beautiful havelis. Rajasthan's Jaipur Jantar Mantar, the Mehrangarh Fort and stepwell of Jodhpur, the Dilwara Temples, Chittor Fort, the Lake Palace, miniature paintings in Bundi, and numerous city palaces and havelis are part of the architectural heritage of India. Jaipur, the "Pink City", is noted for its ancient houses made of a type of sandstone dominated by a pink hue. In Jodhpur, most houses are painted blue. At Ajmer, there is the white marble Bara-dari on the Anasagar lake, and Soniji Ki Nasiyan. Jain temples dot Rajasthan from north to south and east to west. The Dilwara Temples of Mount Abu, the Shrinathji Temple of Nathdwara, the Ranakpur Jain temple dedicated to Lord Adinath in Pali district, the Jain temples in the fort complexes of Chittor, Jaisalmer and Kumbhalgarh, the Lodurva Jain temples, the Mirpur Jain Temple of Sirohi, the Sarun Mata Temple at Kotputli, Bhandasar and the Karni Mata Temple of Bikaner, and Mandore of Jodhpur are some of the best examples. Keoladeo National Park, Ranthambore National Park, Sariska Tiger Reserve, and Tal Chhapar Sanctuary are wildlife attractions of Rajasthan. The Mewar festival of Udaipur, the Teej and Gangaur festivals in Jaipur, the Desert festival of Jodhpur, Brij Holi of Bharatpur, the Matsya festival of Alwar, the Kite festival of Jodhpur, and the Kolayat fair in Bikaner are some of the most popular fairs and festivals of Rajasthan.
Reflux suppressant
A reflux suppressant is any one of a number of drugs used to combat oesophageal reflux. Commonly, following ingestion, a 'raft' of alginic acid is created that floats on the stomach contents, buoyed by carbon dioxide released by the drug. This forms a mechanical barrier to further reflux. Some preparations also contain antacids to protect the oesophagus.
Reflux can also be coincidentally reduced by antidopaminergic drugs.
Russian Civil War
The Russian Civil War () was a multi-party civil war in the former Russian Empire immediately after the two Russian Revolutions of 1917, as many factions vied to determine Russia's political future. The two largest combatant groups were the Red Army, fighting for the Bolshevik form of socialism led by Vladimir Lenin, and the loosely allied forces known as the White Army, which included diverse interests favouring political monarchism, capitalism and social democracy, each with democratic and anti-democratic variants. In addition, rival militant socialists, notably Makhnovia anarchists and Left SRs, as well as non-ideological Green armies, fought against both the Reds and the Whites. Thirteen foreign nations intervened against the Red Army, notably the former Allied military forces from the World War with the goal of re-establishing the Eastern Front. Three foreign nations of the Central Powers also intervened, rivaling the Allied intervention with the main goal of retaining the territory they had received in the Treaty of Brest-Litovsk.
The Red Army eventually defeated the White Armed Forces of South Russia in Ukraine and the army led by Admiral Alexander Kolchak to the east in Siberia in 1919. The remains of the White forces commanded by Pyotr Wrangel were beaten in Crimea and evacuated in late 1920. Lesser battles of the war continued on the periphery for two more years, and minor skirmishes with the remnants of the White forces in the Far East continued well into 1923. The war ended in 1923, in the sense that Bolshevik communist control of the newly formed Soviet Union was then assured, although armed national resistance in Central Asia was not completely crushed until 1934. There were an estimated 7,000,000–12,000,000 casualties during the war, mostly civilians.
Many pro-independence movements emerged after the break-up of the Russian Empire and fought in the war. Several parts of the former Russian Empire—Finland, Estonia, Latvia, Lithuania, and Poland—were established as sovereign states, with their own civil wars and wars of independence. The rest of the former Russian Empire was consolidated into the Soviet Union shortly afterwards.
The Russian Empire fought in World War I from 1914 alongside France and the United Kingdom (Triple Entente) against Germany, Austria-Hungary and the Ottoman Empire (Central Powers).
The February Revolution of 1917 resulted in the abdication of Nicholas II of Russia. As a result, the Russian Provisional Government was established, and soviets, elected councils of workers, soldiers, and peasants, were organized throughout the country, leading to a situation of dual power. Russia was proclaimed a republic in September of the same year.
The Provisional Government, led by Socialist Revolutionary Party politician Alexander Kerensky, was unable to solve the most pressing issues of the country, most importantly to end the war with the Central Powers. A failed military coup by General Lavr Kornilov in September 1917 led to a surge in support for the Bolshevik party, who gained majorities in the soviets, which until then had been controlled by the Socialist Revolutionaries. Promising an end to the war and "all power to the Soviets," the Bolsheviks then ended dual power by suppressing the Provisional Government in late October, on the eve of the Second All-Russian Congress of Soviets, in what would be the second Revolution of 1917. Despite the Bolsheviks' seizure of power, they lost to the Socialist Revolutionary Party in the 1917 Russian Constituent Assembly election, and the Constituent Assembly was dissolved by the Bolsheviks. The Bolsheviks soon lost the support of other far-left allies such as the Left Socialist-Revolutionaries due to their acceptance of the terms of the Treaty of Brest-Litovsk presented by Germany.
From mid-1917 onwards, the Russian Army, the successor-organisation of the old Imperial Russian Army, started to disintegrate; the Bolsheviks used the volunteer-based Red Guards as their main military force, augmented by an armed military component of the Cheka (the Bolshevik state-security apparatus). In January 1918, after significant Bolshevik reverses in combat, the future People's Commissar for Military and Naval Affairs, Leon Trotsky headed the reorganization of the Red Guards into a "Workers' and Peasants' Red Army" in order to create a more effective fighting force. The Bolsheviks appointed political commissars to each unit of the Red Army to maintain morale and to ensure loyalty.
In June 1918, when it had become apparent that a revolutionary army composed solely of workers would not suffice, Trotsky instituted mandatory conscription of the rural peasantry into the Red Army. The Bolsheviks overcame opposition of rural Russians to Red-Army conscription units by taking hostages and shooting them when necessary in order to force compliance, exactly the same practices used by the White Army officers.
The Red Army utilized former Tsarist officers as "military specialists" ("voenspetsy"); sometimes their families were taken hostage in order to ensure their loyalty. At the start of the civil war, former Tsarist officers comprised three-quarters of the Red Army officer-corps. By its end, 83% of all Red Army divisional and corps commanders were ex-Tsarist soldiers. The forced conscription drive had mixed results, successfully creating a large army with numerical superiority over the Whites but becoming composed of members indifferent towards Marxist–Leninist ideology.
While resistance to the Red Guard began on the very day after the Bolshevik uprising, the Treaty of Brest-Litovsk and the instinct of one-party rule became a catalyst for the formation of anti-Bolshevik groups both inside and outside Russia, pushing them into action against the new Soviet government.
A loose confederation of anti-Bolshevik forces aligned against the Communist government, including landowners, republicans, conservatives, middle-class citizens, reactionaries, pro-monarchists, liberals, army generals, non-Bolshevik socialists who still had grievances and democratic reformists voluntarily united only in their opposition to Bolshevik rule. Their military forces, bolstered by forced conscriptions and terror as well as foreign influence, under the leadership of General Nikolai Yudenich, Admiral Alexander Kolchak and General Anton Denikin, became known as the White movement (sometimes referred to as the "White Army") and controlled significant parts of the former Russian Empire for most of the war.
A Ukrainian nationalist movement was active in Ukraine during the war. More significant was the emergence of an anarchist political and military movement known as the Revolutionary Insurrectionary Army of Ukraine or the Anarchist Black Army led by Nestor Makhno. The Black Army, which counted numerous Jews and Ukrainian peasants in its ranks, played a key part in halting Denikin's White Army offensive towards Moscow during 1919, later ejecting White forces from Crimea.
The remoteness of the Volga Region, the Ural Region, Siberia and the Far East was favorable for the anti-Bolshevik forces, and the Whites set up a number of organizations in the cities of these regions. Some of the military forces were set up on the basis of clandestine officers' organizations in the cities.
The Czechoslovak Legions had been part of the Russian Army and numbered around 30,000 troops by October 1917. They had an agreement with the new Bolshevik government to be evacuated from the Eastern Front via the port of Vladivostok to France. The transport from the Eastern Front to Vladivostok slowed down in the chaos, and the troops became dispersed all along the Trans-Siberian Railway. Under pressure from the Central Powers, Trotsky ordered the disarming and arrest of the legionaries, which created tensions with the Bolsheviks.
The Western Allies armed and supported opponents of the Bolsheviks. They were worried about a possible Russo-German alliance, the prospect of the Bolsheviks making good on their threats to default on Imperial Russia's massive foreign loans, and the possibility that Communist revolutionary ideas would spread (a concern shared by many Central Powers). Hence, many of these countries expressed their support for the Whites, including the provision of troops and supplies. Winston Churchill declared that Bolshevism must be "strangled in its cradle". The British and French had supported Russia during World War I on a massive scale with war materials.
After the treaty, it looked like much of that material would fall into the hands of the Germans. To meet this danger the Allies intervened with Great Britain and France sending troops into Russian ports. There were violent clashes with the Bolsheviks. Britain intervened in support of the White forces to defeat the Bolsheviks and prevent the spread of communism across Europe.
The German Empire created several short-lived satellite buffer states within its sphere of influence after the Treaty of Brest-Litovsk: the United Baltic Duchy, Duchy of Courland and Semigallia, Kingdom of Lithuania, Kingdom of Poland, the Belarusian People's Republic, and the Ukrainian State. Following the defeat of Germany in World War I in November 1918, these states were abolished.
Finland was the first republic that declared its independence from Russia in December 1917 and established itself in the ensuing Finnish Civil War from January–May 1918. The Second Polish Republic, Lithuania, Latvia and Estonia formed their own armies immediately after the abolition of the Brest-Litovsk Treaty and the start of the Soviet westward offensive in November 1918.
In the European part of Russia the war was fought across three main fronts: the eastern, the southern and the northwestern. It can also be roughly split into the following periods.
The first period lasted from the Revolution until the Armistice. Already on the date of the Revolution, Cossack General Alexey Kaledin refused to recognize it and assumed full governmental authority in the Don region, where the Volunteer Army began amassing support. The signing of the Treaty of Brest-Litovsk also resulted in direct Allied intervention in Russia and the arming of military forces opposed to the Bolshevik government. There were also many German commanders who offered support against the Bolsheviks, fearing a confrontation with them was impending as well.
During this first period the Bolsheviks took control of Central Asia out of the hands of the Provisional Government and White Army, setting up a base for the Communist Party in the Steppe and Turkestan, where nearly two million Russian settlers were located.
Most of the fighting in this first period was sporadic, involving only small groups amid a fluid and rapidly shifting strategic situation. Among the antagonists were the Czechoslovak Legion, the Poles of the 4th and 5th Rifle Divisions and the pro-Bolshevik Red Latvian riflemen.
The second period of the war lasted from January to November 1919. At first the White armies' advances from the south (under Denikin), the east (under Kolchak) and the northwest (under Yudenich) were successful, forcing the Red Army and its allies back on all three fronts. In July 1919 the Red Army suffered another reverse after a mass defection of units in the Crimea to the anarchist Black Army under Nestor Makhno, enabling anarchist forces to consolidate power in Ukraine. Leon Trotsky soon reformed the Red Army, concluding the first of two military alliances with the anarchists. In June the Red Army first checked Kolchak's advance. After a series of engagements, assisted by a Black Army offensive against White supply lines, the Red Army defeated Denikin's and Yudenich's armies in October and November.
The third period of the war was the extended siege of the last White forces in the Crimea. General Wrangel had gathered the remnants of Denikin's armies, occupying much of the Crimea. An attempted invasion of southern Ukraine was rebuffed by the Black Army under Makhno's command. Pursued into the Crimea by Makhno's troops, Wrangel went over to the defensive in the Crimea. After an abortive move north against the Red Army, Wrangel's troops were forced south by Red Army and Black Army forces; Wrangel and the remains of his army were evacuated to Constantinople in November 1920.
In the October Revolution the Bolshevik Party directed the Red Guard (armed groups of workers and Imperial army deserters) to seize control of Petrograd (Saint Petersburg) and immediately began the armed takeover of cities and villages throughout the former Russian Empire. In January 1918 the Bolsheviks dissolved the Russian Constituent Assembly and proclaimed the Soviets (workers' councils) as the new government of Russia.
The first attempt to regain power from the Bolsheviks was made by the Kerensky-Krasnov uprising in October 1917. It was supported by the Junker Mutiny in Petrograd but was quickly put down by the Red Guard, notably including the Latvian Rifle Division.
The initial groups that fought against the Communists were local Cossack armies that had declared their loyalty to the Provisional Government. Kaledin of the Don Cossacks and General Grigory Semenov of the Siberian Cossacks were prominent among them. The leading Tsarist officers of the Imperial Russian Army also started to resist. In November, General Mikhail Alekseev, the Tsar's Chief of Staff during the First World War, began to organize the Volunteer Army in Novocherkassk. Volunteers of this small army were mostly officers of the old Russian army, military cadets and students. In December 1917 Alekseev was joined by General Lavr Kornilov, Denikin and other Tsarist officers who had escaped from the jail, where they had been imprisoned following the abortive Kornilov affair just before the Revolution. At the beginning of December 1917, groups of volunteers and Cossacks captured Rostov.
Having stated in the November 1917 "Declaration of Rights of Nations of Russia" that any nation under imperial Russian rule should be immediately given the power of self-determination, the Bolsheviks had begun to usurp the power of the Provisional Government in the territories of Central Asia soon after the establishment of the Turkestan Committee in Tashkent. In April 1917 the Provisional Government set up this committee, which was mostly made up of former Tsarist officials. The Bolsheviks attempted to take control of the Committee in Tashkent on 12 September 1917, but the attempt was unsuccessful, and many leaders were arrested. However, because the Committee lacked representation of the native population and poor Russian settlers, they had to release the Bolshevik prisoners almost immediately due to public outcry, and a successful takeover of this government body took place two months later in November. The Leagues of Mohammedan Working People, which Russian settlers and natives who had been sent to work behind the lines for the Tsarist government in 1916 formed in March 1917, had led numerous strikes in the industrial centers throughout September 1917. However, after the Bolshevik destruction of the Provisional Government in Tashkent, Muslim elites formed an autonomous government in Turkestan, commonly called the "Kokand autonomy" (or simply Kokand). The White Russians supported this government body, which lasted several months because of Bolshevik troop isolation from Moscow. In January 1918 the Soviet forces under Lt. Col. Muravyov invaded Ukraine and invested Kiev, where the Central Council of the Ukrainian People's Republic held power. With the help of the Kiev Arsenal Uprising, the Bolsheviks captured the city on 26 January.
The Bolsheviks decided to immediately make peace with the German Empire and the Central Powers, as they had promised the Russian people before the Revolution. Vladimir Lenin's political enemies attributed that decision to his sponsorship by the Foreign Office of Wilhelm II, German Emperor, offered to Lenin in hope that, with a revolution, Russia would withdraw from World War I. That suspicion was bolstered by the German Foreign Ministry's sponsorship of Lenin's return to Petrograd. However, after the military fiasco of the summer offensive (June 1917) by the Russian Provisional Government, and in particular after the failed summer offensive of the Provisional Government had devastated the structure of the Russian Army, it became crucial that Lenin realize the promised peace. Even before the failed summer offensive the Russian population was very skeptical about the continuation of the war. Western socialists had promptly arrived from France and from the UK to convince the Russians to continue the fight, but could not change the new pacifist mood of Russia.
On 16 December 1917 an armistice was signed between Russia and the Central Powers in Brest-Litovsk and peace talks began. As a condition for peace, the proposed treaty by the Central Powers conceded huge portions of the former Russian Empire to the German Empire and the Ottoman Empire, greatly upsetting nationalists and conservatives. Leon Trotsky, representing the Bolsheviks, refused at first to sign the treaty while continuing to observe a unilateral cease-fire, following the policy of "No war, no peace".
In view of this, on 18 February 1918 the Germans began Operation Faustschlag on the Eastern Front, encountering virtually no resistance in a campaign that lasted 11 days. Signing a formal peace treaty was the only option in the eyes of the Bolsheviks because the Russian Army was demobilized, and the newly formed Red Guard was incapable of stopping the advance. They also understood that the impending counterrevolutionary resistance was more dangerous than the concessions of the treaty, which Lenin viewed as temporary in the light of aspirations for a world revolution. The Soviets acceded to a peace treaty, and the formal agreement, the Treaty of Brest-Litovsk, was ratified on 6 March. The Soviets viewed the treaty as merely a necessary and expedient means to end the war.
The German-Austrian Operation Faustschlag had removed the Bolsheviks from Ukraine by April 1918. The German and Austro-Hungarian victories in Ukraine were due to the apathy of the locals and the inferior fighting skills of Bolshevik troops compared to their Austro-Hungarian and German counterparts.
Under Soviet pressure, the Volunteer Army embarked on the epic Ice March from Rostov to Kuban on 22 February 1918, where they joined with the Kuban Cossacks to mount an abortive assault on Yekaterinodar. The Soviets recaptured Rostov the next day. Kornilov was killed in the fighting on 13 April, and Denikin took over command. Fighting off its pursuers without respite, the army succeeded in breaking its way through back towards the Don, where the Cossack uprising against the Bolsheviks had started.
The Baku Soviet Commune was established on 13 April. Germany landed its Caucasus Expedition troops in Poti on 8 June. The Ottoman Army of Islam (in coalition with Azerbaijan) drove them out of Baku on 26 July 1918. Subsequently, the Dashnaks, Right SRs and Mensheviks started negotiations with Gen. Dunsterville, the commander of the British troops in Persia. The Bolsheviks and their Left SR allies were opposed to it, but on 25 July the majority of the Soviet voted to call in the British and the Bolsheviks resigned. The Baku Soviet Commune ended its existence and was replaced by the Central Caspian Dictatorship.
In June 1918 the Volunteer Army, numbering some 9,000 men, started its Second Kuban campaign. Yekaterinodar was encircled on 1 August and fell on the 3rd. In September–October, heavy fighting took place at Armavir and Stavropol. On 13 October Gen. Kazanovich's division took Armavir, and on 1 November Gen. Pyotr Wrangel secured Stavropol. This time Red forces had no escape, and by the beginning of 1919 the whole Northern Caucasus was controlled by the Volunteer Army.
In October Gen. Alekseev, the leader of the White armies in southern Russia, died of a heart attack. An agreement was reached between Denikin, head of the Volunteer Army, and Pyotr Krasnov, Ataman of the Don Cossacks, which united their forces under the sole command of Denikin. The Armed Forces of South Russia were thus created.
The revolt of the Czechoslovak Legion broke out in May 1918, and the legionaries took control of Chelyabinsk in June. Simultaneously Russian officers' organisations overthrew the Bolsheviks in Petropavlovsk (in present-day Kazakhstan) and in Omsk. Within a month the Czechoslovak Legion controlled most of the Trans-Siberian Railroad between Lake Baikal and the Ural regions. During the summer Bolshevik power in Siberia was eliminated. The Provisional Government of Autonomous Siberia was formed in Omsk. By the end of July the Whites had extended their gains westwards, capturing Yekaterinburg on 26 July 1918. Shortly before the fall of Yekaterinburg on 17 July 1918, the former Tsar and his family were murdered by the Ural Soviet to prevent them from falling into the hands of the Whites.
The Mensheviks and Socialist-Revolutionaries supported peasants fighting against Soviet control of food supplies. In May 1918, with the support of the Czechoslovak Legion, they took Samara and Saratov, establishing the Committee of Members of the Constituent Assembly—known as the "Komuch". By July the authority of the Komuch extended over much of the area controlled by the Czechoslovak Legion. The Komuch pursued an ambivalent social policy, combining democratic and socialist measures, such as the institution of an eight-hour working day, with "restorative" actions, such as returning both factories and land to their former owners. After the fall of Kazan, Vladimir Lenin called for the dispatch of Petrograd workers to the Kazan Front: "We must send down the "maximum" number of Petrograd workers: (1) a few dozen 'leaders' like Kayurov; (2) a few thousand militants 'from the ranks'".
After a series of reverses at the front, the Bolsheviks' War Commissar, Trotsky, instituted increasingly harsh measures in order to prevent unauthorised withdrawals, desertions and mutinies in the Red Army. In the field the Cheka special investigations forces, termed the "Special Punitive Department of the All-Russian Extraordinary Commission for Combat of Counter-Revolution and Sabotage" or "Special Punitive Brigades", followed the Red Army, conducting field tribunals and summary executions of soldiers and officers who deserted, retreated from their positions or failed to display sufficient offensive zeal. Trotsky extended the use of the death penalty to the occasional political commissar whose detachment retreated or broke in the face of the enemy. In August, frustrated at continued reports of Red Army troops breaking under fire, Trotsky authorised the formation of barrier troops - stationed behind unreliable Red Army units and given orders to shoot anyone withdrawing from the battle line without authorisation.
In September 1918 Komuch, the Siberian Provisional Government and other local anti-Soviet governments met in Ufa and agreed to form a new Provisional All-Russian Government in Omsk, headed by a Directory of five: two Socialist-Revolutionaries (Nikolai Avksentiev and Vladimir Zenzinov), two Kadets (V. A. Vinogradov and P. V. Vologodskii) and General Vasily Boldyrev.
By the fall of 1918 anti-Bolshevik White forces in the east included the People's Army (Komuch), the Siberian Army (of the Siberian Provisional Government) and insurgent Cossack units of Orenburg, Ural, Siberia, Semirechye, Baikal, Amur and Ussuri Cossacks, nominally under the orders of Gen. V.G. Boldyrev, Commander-in-Chief, appointed by the Ufa Directorate.
On the Volga, Col. Kappel's White detachment captured Kazan on 7 August, but the Reds re-captured the city on 8 September 1918 following a counteroffensive. On the 11th Simbirsk fell, and on 8 October Samara. The Whites fell back eastwards to Ufa and Orenburg.
In Omsk the Russian Provisional Government quickly came under the influence - then the dominance - of its new War Minister, Rear-Admiral Kolchak. On 18 November a coup d'état established Kolchak as dictator. The members of the Directory were arrested and Kolchak proclaimed the "Supreme Ruler of Russia". By mid-December 1918 White armies had to leave Ufa, but they balanced this failure with a successful drive towards Perm, which they took on 24 December.
In February 1918 the Red Army overthrew the White Russian-supported Kokand autonomy of Turkestan. Although this move seemed to solidify Bolshevik power in Central Asia, more troubles soon arose for the Red Army as the Allied Forces began to intervene. British support of the White Army provided the greatest threat to the Red Army in Central Asia during 1918. Great Britain sent three prominent military leaders to the area. One was Lt. Col. Bailey, who recorded a mission to Tashkent, from where the Bolsheviks forced him to flee. Another was Gen. Malleson, leading the Malleson Mission, who assisted the Mensheviks in Ashkhabad (now the capital of Turkmenistan) with a small Anglo-Indian force. However, he failed to gain control of Tashkent, Bukhara and Khiva. The third was Maj. Gen. Dunsterville, who the Bolsheviks drove out of Central Asia only a month after his arrival in August 1918. Despite setbacks due to British invasions during 1918, the Bolsheviks continued to make progress in bringing the Central Asian population under their influence. The first regional congress of the Russian Communist Party convened in the city of Tashkent in June 1918 in order to build support for a local Bolshevik Party.
In July two Left SR Cheka employees, Blyumkin and Andreyev, assassinated the German ambassador, Count Mirbach. In Moscow a Left SR uprising was put down by the Bolsheviks, using Cheka military detachments. Lenin personally apologized to the Germans for the assassination. Mass arrests of Socialist-Revolutionaries followed.
Estonia cleared its territory of the Red Army by January 1919. Baltic German volunteers captured Riga from the Red Latvian Riflemen on 22 May, but the Estonian 3rd Division defeated the Baltic Germans a month later, aiding the establishment of the Republic of Latvia.
This rendered possible another threat to the Red Army—one from Gen. Yudenich, who had spent the summer organizing the Northwestern Army in Estonia with local and British support. In October 1919 he tried to capture Petrograd in a sudden assault with a force of around 20,000 men. The attack was well-executed, using night attacks and lightning cavalry maneuvers to turn the flanks of the defending Red Army. Yudenich also had six British tanks, which caused panic whenever they appeared. The Allies gave large quantities of aid to Yudenich, who, however, complained that he was receiving insufficient support.
By 19 October Yudenich's troops had reached the outskirts of the city. Some members of the Bolshevik central committee in Moscow were willing to give up Petrograd, but Trotsky refused to accept the loss of the city and personally organized its defenses. Trotsky himself declared, "It is impossible for a little army of 15,000 ex-officers to master a working-class capital of 700,000 inhabitants." He settled on a strategy of urban defense, proclaiming that the city would "defend itself on its own ground" and that the White Army would be lost in a labyrinth of fortified streets and there "meet its grave".
Trotsky armed all available workers, men and women, ordering the transfer of military forces from Moscow. Within a few weeks the Red Army defending Petrograd had tripled in size and outnumbered Yudenich three to one. At this point Yudenich, short of supplies, decided to call off the siege of the city and withdrew, repeatedly asking permission to withdraw his army across the border to Estonia. However, units retreating across the border were disarmed and interned by order of the Estonian government, which had entered into peace negotiations with the Soviet Government on 16 September and had been informed by the Soviet authorities of their 6 November decision that, should the White Army be allowed to retreat into Estonia, it would be pursued across the border by the Reds. In fact, the Reds attacked Estonian army positions and fighting continued until a cease-fire went into effect on 3 January 1920. Following the Treaty of Tartu most of Yudenich's soldiers went into exile. Former Imperial Russian and then Finnish Gen. Mannerheim planned an intervention to help the Whites in Russia capture Petrograd. However, he did not gain the necessary support for the endeavour. Lenin considered it "completely certain, that the slightest aid from Finland would have determined the fate of [the city]".
The British occupied Murmansk and, alongside the Americans, seized Arkhangelsk. With the retreat of Kolchak in Siberia, they pulled their troops out of the cities before the winter trapped them in the port. The remaining White forces under Yevgeny Miller evacuated the region in February 1920.
At the beginning of March 1919 the general offensive of the Whites on the eastern front began. Ufa was retaken on 13 March; by mid-April, the White Army stopped at the Glazov–Chistopol–Bugulma–Buguruslan–Sharlyk line. The Reds started their counteroffensive against Kolchak's forces at the end of April. The Red 5th Army, led by the capable commander Tukhachevsky, captured Elabuga on 26 May, Sarapul on 2 June and Izhevsk on the 7th and continued to push forward. Both sides had victories and losses, but by the middle of summer the Red Army was larger than the White Army and had managed to recapture territory previously lost.
Following the abortive offensive at Chelyabinsk, the White armies withdrew beyond the Tobol. In September 1919 a White offensive was launched against the Tobol front, the last attempt to change the course of events. However, on 14 October the Reds counterattacked, and thus began the uninterrupted retreat of the Whites to the east. On 14 November 1919 the Red Army captured Omsk. Adm. Kolchak lost control of his government shortly after this defeat, and White Army forces in Siberia essentially ceased to exist by December. The White armies' retreat on the eastern front lasted three months, until mid-February 1920, when the survivors, after crossing Lake Baikal, reached the Chita area and joined Ataman Semenov's forces.
The Cossacks had been unable to organise and capitalise on their successes at the end of 1918. By 1919 they had begun to run short of supplies. Consequently, when the Soviet counteroffensive began in January 1919 under the Bolshevik leader Antonov-Ovseenko, the Cossack forces rapidly fell apart. The Red Army captured Kiev on 3 February 1919.
General Denikin's military strength continued to grow in the spring of 1919. During several months in winter and spring of 1919, hard fighting with doubtful outcomes took place in the Donbass, where the attacking Bolsheviks met White forces. At the same time Denikin's Armed Forces of South Russia (AFSR) completed the elimination of Red forces in the northern Caucasus and advanced towards Tsaritsyn. At the end of April and beginning of May the AFSR attacked on all fronts from the Dnepr to the Volga, and by the beginning of the summer they had won numerous battles. French forces landed in Odessa but, after having done almost no fighting, withdrew on 8 April 1919. By mid-June the Reds were chased from the Crimea and the Odessa area. Denikin's troops took the cities of Kharkov and Belgorod. At the same time White troops under Wrangel's command took Tsaritsyn on 17 June 1919. On 20 June Denikin issued his Moscow directive, ordering all AFSR units to prepare for a decisive offensive to take Moscow.
Although Great Britain had withdrawn its own troops from the theatre, it continued to give significant military aid (money, weapons, food, ammunition and some military advisers) to the White Armies during 1919. Major Ewen Cameron Bruce of the British Army had volunteered to command a British tank mission assisting the White Army. He was awarded the Distinguished Service Order for his bravery during the June 1919 battle of Tsaritsyn, in which he single-handedly stormed and captured the fortified city in a single tank under heavy shell fire; this led to the capture of over 40,000 prisoners. The fall of Tsaritsyn is viewed "as one of the key battles of the Russian Civil War", one that greatly helped the White Russian cause. The historian Sir Basil Henry Liddell Hart commented that Bruce's tank action during this battle is to be seen as "one of the most remarkable feats in the whole history of the Tank Corps".
After the capture of Tsaritsyn, Wrangel pushed towards Saratov but Trotsky, seeing the danger of the union with Kolchak, against whom the Red command was concentrating large masses of troops, repulsed his attempts with heavy losses. When Kolchak's army in the east began to retreat in June and July, the bulk of the Red Army, free from any serious danger from Siberia, was directed against Denikin.
Denikin's forces constituted a real threat, and for a time it appeared they might reach Moscow. The Red Army, stretched thin by fighting on all fronts, was forced out of Kiev on 30 August. Kursk and Orel were taken, on 20 September and 14 October, respectively. The latter was the closest the AFSR would come to its target of Moscow. The Cossack Don Army under the command of Gen. Vladimir Sidorin continued north towards Voronezh, but there Semyon Budyonny's cavalrymen defeated them on 24 October. This allowed the Red Army to cross the Don River, threatening to split the Don and Volunteer Armies. Fierce fighting took place at the key rail junction of Kastornoye, which was taken on 15 November; Kursk was retaken two days later.
The high tide of the White movement against the Soviets had been reached in September 1919. By this time Denikin's forces were dangerously overextended. The White front had no depth or stability—it had become a series of patrols with occasional columns of slowly advancing troops without reserves. Lacking ammunition, artillery and fresh reinforcements, Denikin's army was decisively defeated in a series of battles in October and November 1919. The Red Army recaptured Kiev on 17 December and the defeated Cossacks fled back towards the Black Sea.
While the White armies were being routed in Central Russia and the east, they had succeeded in driving Nestor Makhno's anarchist Black Army (formally known as the Revolutionary Insurrectionary Army of Ukraine) out of part of southern Ukraine and the Crimea. Despite this setback, Moscow was loath to aid Makhno and the Black Army and refused to provide arms to anarchist forces in Ukraine. The main body of White forces, the Volunteers and the Don Army, pulled back towards the Don, to Rostov. The smaller body (Kiev and Odessa troops) withdrew to Odessa and the Crimea, which it had managed to protect from the Bolsheviks during the winter of 1919–1920.
By February 1919 the British government had pulled its military forces out of Central Asia. Despite this success for the Red Army, the White Army's assaults in European Russia and other areas broke communication between Moscow and Tashkent. For a time Central Asia was completely cut off from Red Army forces in Siberia. Although this communication failure weakened the Red Army, the Bolsheviks continued their efforts to gain support for the Bolshevik Party in Central Asia by holding a second regional conference in March. During this conference a regional bureau of Muslim organisations of the Russian Bolshevik Party was formed. The Bolshevik Party continued to try to gain support among the native population by giving them the impression of better representation for the Central Asian population and throughout the end of the year were able to maintain harmony with the Central Asian people.
Communication difficulties with Red Army forces in Siberia and European Russia ceased to be a problem by mid-November 1919. Due to Red Army successes north of Central Asia, communication with Moscow was re-established and the Bolsheviks were able to claim victory over the White Army in Turkestan.
In the Ural-Guryev operation of 1919–1920, the Red Turkestan Front defeated the Ural Army. During the winter of 1920, Ural Cossacks and their families, totaling about 15,000 people, headed south along the eastern coast of the Caspian Sea towards Fort Alexandrovsk. Only a few hundred of them reached Persia in June 1920. The Orenburg Independent Army was formed from Orenburg Cossacks and other troops that had rebelled against the Bolsheviks. During the winter of 1919–20, the Orenburg Army retreated to Semirechye in what became known as the Starving March, as half of the participants perished. In March 1920 its remnants crossed the border into the northwestern region of China.
By the beginning of 1920 the main body of the Armed Forces of South Russia was rapidly retreating towards the Don, to Rostov. Denikin hoped to hold the crossings of the Don, then rest and reform his troops, but the White Army was not able to hold the Don area, and at the end of February 1920 started a retreat across the Kuban towards Novorossiysk. The slipshod evacuation of Novorossiysk proved to be a dark event for the White Army. Russian and Allied ships evacuated about 40,000 of Denikin's men from Novorossiysk to the Crimea, without horses or any heavy equipment, while about 20,000 men were left behind and either dispersed or were captured by the Red Army. Following the disastrous Novorossiysk evacuation, Denikin stepped down and the military council elected Wrangel as the new Commander-in-Chief of the White Army. He was able to restore order to the dispirited troops and reshape an army that could fight as a regular force again. This remained an organized force in the Crimea throughout 1920.
After Moscow's Bolshevik government signed a military and political alliance with Nestor Makhno and the Ukrainian anarchists, the Black Army attacked and defeated several regiments of Wrangel's troops in southern Ukraine, forcing him to retreat before he could capture that year's grain harvest.
Stymied in his efforts to consolidate his hold, Wrangel then attacked north in an attempt to take advantage of recent Red Army defeats at the close of the Polish–Soviet War of 1919–1920. The Red Army eventually halted this offensive, and Wrangel's troops had to retreat to Crimea in November 1920, pursued by both the Red and Black cavalry and infantry. Wrangel's fleet evacuated him and his army to Constantinople on 14 November 1920, ending the struggle of Reds and Whites in Southern Russia.
After the defeat of Wrangel, the Red Army immediately repudiated its 1920 treaty of alliance with Nestor Makhno and attacked the anarchist Black Army; the campaign to liquidate Makhno and the Ukrainian anarchists began with an attempted assassination of Makhno by Cheka agents. Anger at continued repression by the Bolshevik Communist government and at its liberal use of the Cheka to put down anarchist elements led to a naval mutiny at Kronstadt in March 1921, followed by peasant revolts. Red Army attacks on the anarchist forces and their sympathisers increased in ferocity throughout 1921.
In Siberia, Admiral Kolchak's army had disintegrated. He himself gave up command after the loss of Omsk and designated Gen. Grigory Semyonov as the new leader of the White Army in Siberia. Not long after this Kolchak was arrested by the disaffected Czechoslovak Corps as he traveled towards Irkutsk without the protection of the army, and turned over to the socialist Political Centre in Irkutsk. Six days later this regime was replaced by a Bolshevik-dominated Military-Revolutionary Committee. On 6–7 February Kolchak and his prime minister Victor Pepelyaev were shot and their bodies thrown through the ice of the frozen Angara River, just before the arrival of the White Army in the area.
Remnants of Kolchak's army reached Transbaikalia and joined Semyonov's troops, forming the Far Eastern army. With the support of the Japanese army it was able to hold Chita, but after withdrawal of Japanese soldiers from Transbaikalia, Semenov's position became untenable, and in November 1920 he was driven by the Red Army from Transbaikalia and took refuge in China. The Japanese, who had plans to annex the Amur Krai, finally pulled their troops out as Bolshevik forces gradually asserted control over the Russian Far East. On 25 October 1922 Vladivostok fell to the Red Army, and the Provisional Priamur Government was extinguished.
In Central Asia, Red Army troops continued to face resistance into 1923 from "basmachi" (armed bands of Islamic guerrillas), who had formed to fight the Bolshevik takeover. The Soviets engaged non-Russian peoples in Central Asia, like Magaza Masanchi, commander of the Dungan Cavalry Regiment, to fight against the Basmachis. The Communist Party did not completely dismantle this group until 1934.
General Anatoly Pepelyayev continued armed resistance in the Ayano-Maysky District until June 1923. The regions of Kamchatka and Northern Sakhalin remained under Japanese occupation until their treaty with the Soviet Union in 1925, when their forces were finally withdrawn.
The results of the civil war were momentous. Soviet demographer Boris Urlanis estimated the total number of men killed in action in the Civil War and Polish–Soviet War as 300,000 (125,000 in the Red Army, 175,500 in the White armies and Polish forces) and the total number of military personnel dead from disease (on both sides) as 450,000. Boris Sennikov estimated the total losses among the population of Tambov region in 1920 to 1922 resulting from the war, executions, and imprisonment in concentration camps as approximately 240,000.
During the Red Terror, estimates of Cheka executions range from 12,733 to 1.7 million. William Henry Chamberlin suspected that there were about 50,000. Evan Mawdsley suspected that there were more than 12,733, and less than 200,000. Some sources claimed at least 250,000 summary executions of "enemies of the people" with estimates reaching above a million. More modest estimates put the numbers executed by the Bolsheviks between December 1917 and February 1922 at around 28,000 per year, with roughly 10,000 executions during the Red Terror.
Some 300,000–500,000 Cossacks were killed or deported during Decossackization, out of a population of around three million. An estimated 100,000 Jews were killed in Ukraine, mostly by the White Army. Punitive organs of the All Great Don Cossack Host sentenced 25,000 people to death between May 1918 and January 1919. Kolchak's government shot 25,000 people in Ekaterinburg province alone. The White Terror, as it would become known, killed about 300,000 people in total.
At the end of the Civil War the Russian SFSR was exhausted and near ruin. The droughts of 1920 and 1921, as well as the 1921 famine, worsened the disaster still further. Disease had reached pandemic proportions, with 3,000,000 dying of typhus in 1920 alone. Millions more also died of widespread starvation, wholesale massacres by both sides and pogroms against Jews in Ukraine and southern Russia. By 1922 there were at least 7,000,000 street children in Russia as a result of nearly ten years of devastation from the Great War and the civil war.
Another one to two million people, known as the White émigrés, fled Russia, many with General Wrangel—some through the Far East, others west into the newly independent Baltic countries. These émigrés included a large percentage of the educated and skilled population of Russia.
The Russian economy was devastated by the war, with factories and bridges destroyed, cattle and raw materials pillaged, mines flooded and machines damaged. Industrial production had fallen to one-seventh of its 1913 value, and agricultural production to one-third. According to "Pravda", "The workers of the towns and some of the villages choke in the throes of hunger. The railways barely crawl. The houses are crumbling. The towns are full of refuse. Epidemics spread and death strikes—industry is ruined." It is estimated that the total output of mines and factories in 1921 had fallen to 20% of the pre-World War level, and many crucial items experienced an even more drastic decline. For example, cotton production fell to 5%, and iron to 2%, of pre-war levels.
War Communism saved the Soviet government during the Civil War, but much of the Russian economy had ground to a standstill. The peasants responded to requisitions by refusing to till the land. By 1921 cultivated land had shrunk to 62% of the pre-war area, and the harvest yield was only about 37% of normal. The number of horses declined from 35 million in 1916 to 24 million in 1920 and cattle from 58 to 37 million. The exchange rate with the US dollar declined from two rubles in 1914 to 1,200 in 1920.
With the end of the war the Communist Party no longer faced an acute military threat to its existence and power. However, the perceived threat of another intervention, combined with the failure of socialist revolutions in other countries—most notably the German Revolution—contributed to the continued militarisation of Soviet society. Although Russia experienced extremely rapid economic growth in the 1930s, the combined effect of World War I and the Civil War left a lasting scar on Russian society and had permanent effects on the development of the Soviet Union.
British historian Orlando Figes has contended that the root of the Whites' defeat was their inability to dispel the popular image that they were not only associated with Tsarist Russia, but supportive of a Tsarist restoration, as well.
Ralph Abercromby
Sir Ralph Abercromby (sometimes spelt Abercrombie) (7 October 1734 – 28 March 1801) was a Scottish soldier and politician. He twice served as MP for Clackmannanshire, rose to the rank of lieutenant-general in the British Army, was appointed Governor of Trinidad, served as Commander-in-Chief, Ireland, and was noted for his services during the French Revolutionary Wars.
Ralph Abercromby was born on 7 October 1734 at Menstrie Castle, Clackmannanshire. He was the second (but eldest surviving) son of George Abercromby, a lawyer and descendant of the Abercromby family of Birkenbog, Aberdeenshire, and Mary Dundas (d. 1767), daughter of Ralph Dundas of Manour, Perthshire. His younger brothers included the advocate Alexander Abercromby, Lord Abercromby and General Robert Abercromby.
Abercromby's education was begun by a private tutor, then continued at the school of Mr Moir in Alloa, then considered one of the best in Scotland despite its Jacobite leanings. Ralph attended Rugby School from 12 June 1748, where he remained until he was 18. Between 1752 and 1753, he was a student at the University of Edinburgh. There he studied moral and natural philosophy and civil law, and was regarded by his professors as sound rather than brilliant. He completed his studies at Leipzig University in Germany from autumn 1754, taking more detailed studies in civil law with a view to a career as an advocate.
Abercromby was a Freemason. He was Initiated into Scottish Freemasonry in Lodge Canongate Kilwinning, No. 2, (Edinburgh, Scotland) on 25 May 1753.
On returning from the continent, Abercromby expressed a strong preference for the military profession, and a cornet's commission was accordingly obtained for him (March 1756) in the 3rd Dragoon Guards. He served with his regiment in the Seven Years' War, and the opportunity thus afforded him of studying the methods of Frederick the Great moulded his military character and formed his tactical ideas.
Abercromby rose through the intermediate grades to the rank of lieutenant-colonel of the regiment (1773) and brevet colonel in 1780, and in 1781, he became colonel of the newly raised King's Irish infantry. When that regiment was disbanded in 1783, he retired on half pay. He also entered Parliament as MP for Clackmannanshire (1774–1780).
Abercromby was a strong supporter of the American cause in the American Revolutionary War, and remained in Ireland to avoid having to fight against the colonists.
When France declared war against Great Britain in 1793, Abercromby resumed his duties. He was appointed to command a brigade under the Duke of York for service in the Netherlands, where he commanded the advanced guard in the action at Le Cateau. During the 1794 withdrawal to Holland, he commanded the allied forces in the action at Boxtel and was wounded directing operations at Fort St Andries on the Waal.
In July 1795, Abercromby was nominated by Secretary of State for War Henry Dundas to lead an expedition to the West Indies. That same month he had been made a Knight of the Bath and in August Lieutenant Governor of the Isle of Wight - a reward for his services but also possibly an incentive to lead the army in the Caribbean. The appointment of Abercromby as Commander-in-Chief of the Leeward and Windward Islands was officially announced on 5 August.
On 17 March 1796 Abercromby arrived in Carlisle Bay, Barbados on the "Arethusa". A third of the 6,000 troops that had arrived on the island before him had already been sent on to Saint Vincent and Grenada, leaving the general with 3,700 soldiers at his disposal. Control of much of Saint Vincent had been lost to rebelling French planters and native Caribs since early 1795, while Grenada was in the midst of an insurrection led by Julien Fédon. The reinforcements to Grenada allowed General Nicolls to attack enemy posts south of Port Royal on 25 March, preventing further French reinforcements from Guadeloupe. Three months later Abercromby arrived with further reinforcements and attacked Fédon's camp on 19 June, routing the insurgents and ending the rebellion.
The British fleet sailed on 25 April 1796 for Saint Lucia, landing the following day and establishing a beachhead. The French were soon repelled and retreated to the fort at Morne Fortune, which Abercromby decided to besiege. The garrison under General Goyrand surrendered to the British army on 26 May. The island had been retaken at the cost of 566 men. A force of around 4,000 was left to hold Saint Lucia under the command of John Moore before Abercromby left for Saint Vincent at the beginning of June.
Abercromby arrived on Saint Vincent on 7 June with a force of just over 4,000. He marched his troops near to the insurgent base at Vigie Ridge and camped nearby as the British started to execute an encircling movement: Quartermaster General John Knox manoeuvred his men on the seaward side in order to prevent the enemy retreating north, and Lieutenant Colonel Dickens used the 34th Regiment as a diversion on the opposite side. Knox was able to cut off communications with the Vigie, whilst Dickens ousted the nearby Caribs to complete the encirclement. The black French commander, Marinier, signed terms of surrender on 11 June, and the Caribs did so four days later. The British took around 200 prisoner, with another 200 escaping into the jungle. Although some of the Caribs would remain in resistance until October, the rebellion had effectively been put down at the cost of 17 officers and 168 men killed or wounded.
Afterwards, Abercromby secured possession of the settlements of Demerara and Essequibo in South America, and the island of Trinidad. A major assault on the port of San Juan, Puerto Rico, in April 1797 failed after fierce fighting where both sides suffered heavy losses.
Abercromby returned to Europe and, in reward for his services, was appointed colonel of the 2nd (Royal North British) Regiment of Dragoons. He was also made Lieutenant-Governor of the Isle of Wight, Governor of Fort George and Fort Augustus in the Scottish Highlands, and promoted to the rank of Lieutenant-general. He again entered Parliament as member for Clackmannanshire from 1796 to 1798.
In 1798, Abercromby was made Commander-in-Chief of the forces in Ireland, then in rebellion and anticipating French intervention. He took the unusual step of publicly criticising the command of his predecessor, Henry Luttrell, 2nd Earl of Carhampton, for bequeathing an army "in a state of licentiousness, which must render it formidable to every one but the enemy". To quote the biographic entry in the 1888 Encyclopædia Britannica,
After holding for a short period the office of commander-in-chief in Scotland, Abercromby, when the enterprise against the Dutch Batavian Republic was resolved upon in 1799, was again called to command under the Duke of York. The Anglo-Russian invasion of Holland in 1799 ended in disaster, but friend and foe alike confessed that the most decisive victory could not have more conspicuously proved the talents of this distinguished officer.
After spending time with Dundas over Christmas, Abercromby was summoned to London on 21 January 1800. The Portuguese, concerned that they were under threat from Spain, requested British support and wanted Abercromby to lead their army. However, Abercromby refused to serve under a foreign ruler and would only take command of a joint army. Before he could leave for Portugal to inspect their defences and army, the resignation of General Charles Stuart in the Mediterranean in April led to a change of plans. The Austrian plan was that Abercromby could create a distraction from the activities of General Michael von Melas in North Italy by landing at various points on the Italian coast. Abercromby received instructions from London to send 2,500–3,000 men to take French-occupied Malta. Thereafter, he was to receive a further 6,000 men to assist the Austrians. General Charles O'Hara in Gibraltar was pleased with the appointment, for while Stuart had been hot-tempered and difficult to work with, Abercromby was "a reasonable, considerate good soldier, and listens with temper and patience to every proposal made to him". However, delays caused by the weather meant that the situation in Italy had changed drastically by the time that Abercromby reached Minorca on 22 June.
In 1801, Abercromby was sent with an army to recover Egypt from France. His experience in the Netherlands and the West Indies particularly fitted him for this new command, as was proved when he carried his army in health, in spirits, and with the requisite supplies to the destined scene of action despite great difficulties. The debarkation of the troops at Abukir, in the face of strenuous opposition, is justly ranked among the most daring and brilliant exploits of the British army.
In 1800 Abercromby commanded the expedition to the Mediterranean, and after some brilliant operations defeated the French in the Battle of Alexandria, 21 March 1801. During the action he was struck by a musket-ball in the thigh; but not until the battle was won and he saw the enemy retreating did he show any sign of pain. He was borne from the field in a hammock, cheered by the blessings of the soldiers as he passed, and conveyed on board the flag-ship HMS "Foudroyant" which was moored in the harbour. The ball could not be extracted; mortification ensued, and seven days later, on 28 March 1801, he died.
Abercromby's old friend and commander, the Duke of York, paid tribute to Abercromby's memory in general orders: "His steady observance of discipline, his ever-watchful attention to the health and wants of his troops, the persevering and unconquerable spirit which marked his military career, the splendour of his actions in the field and the heroism of his death, are worthy the imitation of all who desire, like him, a life of heroism and a death of glory." He was buried on St John's Bastion within Fort Saint Elmo in Valletta, Malta. The British military renamed it "Abercrombie's Bastion" in his honour. The adjacent curtain wall linking this bastion to the fortifications of Valletta, originally called Santa Ubaldesca Curtain, was also renamed "Abercrombie's Curtain".
By a vote of the House of Commons, a monument was erected in Abercromby's honour in St Paul's Cathedral in London. His widow was created Baroness Abercromby of Tullibody and Aboukir Bay, and a pension of £2,000 a year was settled on her and her two successors in the title.
Abercromby Place in Edinburgh's New Town is named in his honour.
On 17 November 1767, Abercromby married Mary Anne, daughter of John Menzies and Ann, daughter of Patrick Campbell. They had seven children. All four of their sons entered Parliament, and two saw military service.
A public house in central Manchester, the 'Sir Ralph Abercromby', is named after him. There is also a 'General Abercrombie' pub, with his portrait by John Hoppner as its sign, off Blackfriars Bridge Road in London.
Three ships have been named HMS "Abercrombie" after the general but using the variant spelling of his name. | https://en.wikipedia.org/wiki?curid=26296 |
Radiometric dating
Radiometric dating, radioactive dating or radioisotope dating is a technique which is used to date materials such as rocks or carbon, in which trace radioactive impurities were selectively incorporated when they were formed. The method compares the abundance of a naturally occurring radioactive isotope within the material to the abundance of its decay products, which form at a known constant rate of decay. The use of radiometric dating was first published in 1907 by Bertram Boltwood and is now the principal source of information about the absolute age of rocks and other geological features, including the age of fossilized life forms or the age of the Earth itself, and can also be used to date a wide range of natural and man-made materials.
Together with stratigraphic principles, radiometric dating methods are used in geochronology to establish the geologic time scale. Among the best-known techniques are radiocarbon dating, potassium–argon dating and uranium–lead dating. By allowing the establishment of geological timescales, it provides a significant source of information about the ages of fossils and the deduced rates of evolutionary change. Radiometric dating is also used to date archaeological materials, including ancient artifacts.
Different methods of radiometric dating vary in the timescale over which they are accurate and the materials to which they can be applied.
All ordinary matter is made up of combinations of chemical elements, each with its own atomic number, indicating the number of protons in the atomic nucleus. Additionally, elements may exist in different isotopes, with each isotope of an element differing in the number of neutrons in the nucleus. A particular isotope of a particular element is called a nuclide. Some nuclides are inherently unstable. That is, at some point in time, an atom of such a nuclide will undergo radioactive decay and spontaneously transform into a different nuclide. This transformation may be accomplished in a number of different ways, including alpha decay (emission of alpha particles) and beta decay (electron emission, positron emission, or electron capture). Another possibility is spontaneous fission into two or more nuclides.
While the moment in time at which a particular nucleus decays is unpredictable, a collection of atoms of a radioactive nuclide decays exponentially at a rate described by a parameter known as the half-life, usually given in units of years when discussing dating techniques. After one half-life has elapsed, one half of the atoms of the nuclide in question will have decayed into a "daughter" nuclide or decay product. In many cases, the daughter nuclide itself is radioactive, resulting in a decay chain, eventually ending with the formation of a stable (nonradioactive) daughter nuclide; each step in such a chain is characterized by a distinct half-life. In these cases, usually the half-life of interest in radiometric dating is the longest one in the chain, which is the rate-limiting factor in the ultimate transformation of the radioactive nuclide into its stable daughter. Isotopic systems that have been exploited for radiometric dating have half-lives ranging from only about 10 years (e.g., tritium) to over 100 billion years (e.g., samarium-147).
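The half-life relationship above can be sketched in a few lines of Python; this is a generic illustration of exponential decay, not tied to any particular dating method (the carbon-14 half-life used below is only an example):

```python
def remaining_fraction(elapsed_years, half_life_years):
    """Fraction of a radioactive nuclide's atoms left after `elapsed_years`."""
    return 0.5 ** (elapsed_years / half_life_years)

# After one half-life, half the parent atoms remain; after two, a quarter.
one = remaining_fraction(5730, 5730)     # 0.5
two = remaining_fraction(11460, 5730)    # 0.25
```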
For most radioactive nuclides, the half-life depends solely on nuclear properties and is essentially constant. This is known because decay constants measured by different techniques give consistent values within analytical errors and the ages of the same materials are consistent from one method to another. It is not affected by external factors such as temperature, pressure, chemical environment, or presence of a magnetic or electric field. The only exceptions are nuclides that decay by the process of electron capture, such as beryllium-7, strontium-85, and zirconium-89, whose decay rate may be affected by local electron density. For all other nuclides, the proportion of the original nuclide to its decay products changes in a predictable way as the original nuclide decays over time.
This predictability allows the relative abundances of related nuclides to be used as a clock to measure the time from the incorporation of the original nuclides into a material to the present. Nature has conveniently provided us with radioactive nuclides that have half-lives which range from considerably longer than the age of the universe to less than a zeptosecond, allowing one to measure a very wide range of ages. Isotopes whose half-lives are too long for decay to be observed are effectively "stable isotopes," while those with half-lives so short that they have entirely decayed away are known as "extinct isotopes."
The radioactive decay constant, the probability that an atom will decay per year, is the solid foundation of the common measurement of radioactivity. The accuracy and precision of the determination of an age (and a nuclide's half-life) depends on the accuracy and precision of the decay constant measurement. The in-growth method is one way of measuring the decay constant of a system, which involves accumulating daughter nuclides. Unfortunately, for nuclides with low decay constants (which are useful for dating very old samples), long periods of time (decades) are required to accumulate enough decay products in a single sample to accurately measure them. A faster method involves using particle counters to determine alpha, beta or gamma activity, and then dividing that by the number of radioactive nuclides. However, it is challenging and expensive to accurately determine the number of radioactive nuclides. Alternatively, decay constants can be determined by comparing isotope data for rocks of known age. This method requires at least one of the isotope systems to be very precisely calibrated, such as the Pb-Pb system.
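As a sketch of the arithmetic behind the counting approach: the decay constant follows directly from the half-life (λ = ln 2 / half-life), and a sample's expected activity is the decay constant times the number of radioactive atoms, so dividing a measured activity by the atom count recovers the decay constant. The carbon-14 half-life and atom count below are illustrative:

```python
import math

def decay_constant(half_life_years):
    """Decay constant: the probability that a given atom decays per year."""
    return math.log(2) / half_life_years

lam = decay_constant(5730.0)   # carbon-14, per year
n_atoms = 1.0e12               # illustrative sample size
activity = lam * n_atoms       # expected decays per year (what a counter estimates)
```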
The basic equation of radiometric dating requires that neither the parent nuclide nor the daughter product can enter or leave the material after its formation. The possible confounding effects of contamination of parent and daughter isotopes have to be considered, as do the effects of any loss or gain of such isotopes since the sample was created. It is therefore essential to have as much information as possible about the material being dated and to check for possible signs of alteration. Precision is enhanced if measurements are taken on multiple samples from different locations of the rock body. Alternatively, if several different minerals can be dated from the same sample and are assumed to be formed by the same event and were in equilibrium with the reservoir when they formed, they should form an isochron. This can reduce the problem of contamination. In uranium–lead dating, the concordia diagram is used which also decreases the problem of nuclide loss. Finally, correlation between different isotopic dating methods may be required to confirm the age of a sample. For example, the age of the Amitsoq gneisses from western Greenland was determined to be 3.60 ± 0.05 Ga (billion years ago) using uranium–lead dating and 3.56 ± 0.10 Ga (billion years ago) using lead–lead dating, results that are consistent with each other.
Accurate radiometric dating generally requires that the parent has a long enough half-life that it will be present in significant amounts at the time of measurement (except as described below under "Dating with short-lived extinct radionuclides"), the half-life of the parent is accurately known, and enough of the daughter product is produced to be accurately measured and distinguished from the initial amount of the daughter present in the material. The procedures used to isolate and analyze the parent and daughter nuclides must be precise and accurate. This normally involves isotope-ratio mass spectrometry.
The precision of a dating method depends in part on the half-life of the radioactive isotope involved. For instance, carbon-14 has a half-life of 5,730 years. After an organism has been dead for 60,000 years, so little carbon-14 is left that accurate dating cannot be established. On the other hand, the concentration of carbon-14 falls off so steeply that the age of relatively young remains can be determined precisely to within a few decades.
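A minimal sketch of that trade-off, assuming (unrealistically) that the organism started with exactly the modern atmospheric carbon-14 level; real radiocarbon work calibrates against tree-ring and similar records:

```python
import math

def carbon14_age(fraction_remaining, half_life=5730.0):
    """Years since death, from the fraction of the original carbon-14 left."""
    return -half_life * math.log2(fraction_remaining)

print(round(carbon14_age(0.5)))     # 5730, one half-life
print(round(carbon14_age(0.25)))    # 11460
# Around 0.1% remaining the method nears its practical limit:
print(round(carbon14_age(0.001)))   # roughly 57,000 years
```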
The closure temperature or blocking temperature represents the temperature below which the mineral is a closed system for the studied isotopes. If a material that selectively rejects the daughter nuclide is heated above this temperature, any daughter nuclides that have been accumulated over time will be lost through diffusion, resetting the isotopic "clock" to zero. As the mineral cools, the crystal structure begins to form and diffusion of isotopes is less easy. At a certain temperature, the crystal structure has formed sufficiently to prevent diffusion of isotopes. Thus an igneous or metamorphic rock or melt, which is slowly cooling, does not begin to exhibit measurable radioactive decay until it cools below the closure temperature. The age that can be calculated by radiometric dating is thus the time at which the rock or mineral cooled to closure temperature. This temperature varies for every mineral and isotopic system, so a system can be closed for one mineral but open for another. Dating of different minerals and/or isotope systems (with differing closure temperatures) within the same rock can therefore enable the tracking of the thermal history of the rock in question with time, and thus the history of metamorphic events may become known in detail. These temperatures are experimentally determined in the lab by artificially resetting sample minerals using a high-temperature furnace. This field is known as thermochronology or thermochronometry.
The mathematical expression that relates radioactive decay to geologic time is

"t" = (1/"λ") ln(1 + "D"*/"N"("t"))

where "t" is the age of the sample, "D"* is the number of atoms of the radiogenic daughter isotope in the sample, "N"("t") is the number of atoms of the parent isotope in the sample at time "t" (the present), and "λ" is the decay constant of the parent isotope, equal to the natural logarithm of 2 divided by the parent's half-life.
The equation is most conveniently expressed in terms of the measured quantity "N"("t") rather than the constant initial value "No".
To calculate the age, it is assumed that the system is closed (neither parent nor daughter isotopes have been lost from system), "D"0 must be either negligible or can be accurately estimated, "λ" is known to a high precision, and one has accurate and precise measurements of D* and "N"("t").
The above equation makes use of information on the composition of parent and daughter isotopes at the time the material being tested cooled below its closure temperature. This is well-established for most isotopic systems. However, construction of an isochron does not require information on the original compositions, using merely the present ratios of the parent and daughter isotopes to a standard isotope. An isochron plot is used to solve the age equation graphically and calculate the age of the sample and the original composition.
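The age equation can be evaluated numerically. The sketch below assumes the simplest case, a closed system with negligible initial daughter ("D"0 = 0), not the isochron construction; the rubidium–strontium figures are illustrative, not measured values:

```python
import math

def radiometric_age(daughter_atoms, parent_atoms, half_life_years):
    """Solve t = (1/lambda) * ln(1 + D*/N) for the age, assuming a closed
    system and a negligible initial daughter abundance (D0 = 0)."""
    lam = math.log(2) / half_life_years
    return math.log(1.0 + daughter_atoms / parent_atoms) / lam

# Illustrative rubidium-87 -> strontium-87 numbers (half-life 50 billion
# years): 0.05 radiogenic daughter atoms per remaining parent atom.
age = radiometric_age(0.05, 1.0, 50e9)   # about 3.5 billion years
```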
Radiometric dating has been carried out since 1905 when it was invented by Ernest Rutherford as a method by which one might determine the age of the Earth. In the century since then the techniques have been greatly improved and expanded. Dating can now be performed on samples as small as a nanogram using a mass spectrometer. The mass spectrometer was invented in the 1940s and began to be used in radiometric dating in the 1950s. It operates by generating a beam of ionized atoms from the sample under test. The ions then travel through a magnetic field, which diverts them into different sampling sensors, known as "Faraday cups", depending on their mass and level of ionization. On impact in the cups, the ions set up a very weak current that can be measured to determine the rate of impacts and the relative concentrations of different atoms in the beams.
Uranium–lead radiometric dating involves using uranium-235 or uranium-238 to date a substance's absolute age. This scheme has been refined to the point that the error margin in dates of rocks can be less than two million years in two-and-a-half billion years. An error margin of 2–5% has been achieved on younger Mesozoic rocks.
Uranium–lead dating is often performed on the mineral zircon (ZrSiO4), though it can be used on other materials, such as baddeleyite, as well as monazite (see: monazite geochronology). Zircon and baddeleyite incorporate uranium atoms into their crystalline structure as substitutes for zirconium, but strongly reject lead. Zircon has a very high closure temperature, is resistant to mechanical weathering and is very chemically inert. Zircon also forms multiple crystal layers during metamorphic events, which each may record an isotopic age of the event. "In situ" micro-beam analysis can be achieved via laser ICP-MS or SIMS techniques.
One of its great advantages is that any sample provides two clocks, one based on uranium-235's decay to lead-207 with a half-life of about 700 million years, and one based on uranium-238's decay to lead-206 with a half-life of about 4.5 billion years, providing a built-in crosscheck that allows accurate determination of the age of the sample even if some of the lead has been lost. This can be seen in the concordia diagram, where the samples plot along an errorchron (straight line) which intersects the concordia curve at the age of the sample.
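The built-in crosscheck can be sketched as follows; the lead/uranium ratios are hypothetical and the half-lives are rounded to the approximate values quoted above:

```python
import math

HALF_LIFE_U238 = 4.5e9   # years, uranium-238 -> lead-206 (approximate)
HALF_LIFE_U235 = 7.0e8   # years, uranium-235 -> lead-207 (approximate)

def age_from_ratio(daughter_over_parent, half_life):
    lam = math.log(2) / half_life
    return math.log(1.0 + daughter_over_parent) / lam

# Hypothetical measured ratios in a single zircon:
age_206 = age_from_ratio(0.52, HALF_LIFE_U238)   # from Pb-206 / U-238
age_207 = age_from_ratio(13.3, HALF_LIFE_U235)   # from Pb-207 / U-235

# Both clocks give ~2.7 billion years, so the sample is "concordant":
# no significant lead loss since crystallisation.
concordant = abs(age_206 - age_207) / age_206 < 0.02
```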
This involves the alpha decay of 147Sm to 143Nd with a half-life of 1.06 × 10^11 years. Accuracy levels of within twenty million years in ages of two-and-a-half billion years are achievable.
This involves electron capture or positron decay of potassium-40 to argon-40. Potassium-40 has a half-life of 1.3 billion years, so this method is applicable to the oldest rocks. Radioactive potassium-40 is common in micas, feldspars, and hornblendes, though the closure temperature is fairly low in these materials, about 350 °C (mica) to 500 °C (hornblende).
This is based on the beta decay of rubidium-87 to strontium-87, with a half-life of 50 billion years. This scheme is used to date old igneous and metamorphic rocks, and has also been used to date lunar samples. Closure temperatures are so high that they are not a concern. Rubidium-strontium dating is not as precise as the uranium-lead method, with errors of 30 to 50 million years for a 3-billion-year-old sample. Application of in situ analysis (Laser-Ablation ICP-MS) within single mineral grains in faults have shown that the Rb-Sr method can be used to decipher episodes of fault movement.
A relatively short-range dating technique is based on the decay of uranium-234 into thorium-230, a substance with a half-life of about 80,000 years. It is accompanied by a sister process, in which uranium-235 decays into protactinium-231, which has a half-life of 32,760 years.
While uranium is water-soluble, thorium and protactinium are not, and so they are selectively precipitated into ocean-floor sediments, from which their ratios are measured. The scheme has a range of several hundred thousand years. A related method is ionium–thorium dating, which measures the ratio of ionium (thorium-230) to thorium-232 in ocean sediment.
Radiocarbon dating is also simply called carbon-14 dating. Carbon-14 is a radioactive isotope of carbon, with a half-life of 5,730 years (which is very short compared with the above isotopes), and decays into nitrogen. In other radiometric dating methods, the heavy parent isotopes were produced by nucleosynthesis in supernovas, meaning that any parent isotope with a short half-life should be extinct by now. Carbon-14, though, is continuously created through collisions of neutrons generated by cosmic rays with nitrogen in the upper atmosphere and thus remains at a near-constant level on Earth. The carbon-14 ends up as a trace component in atmospheric carbon dioxide (CO2).
A carbon-based life form acquires carbon during its lifetime. Plants acquire it through photosynthesis, and animals acquire it from consumption of plants and other animals. When an organism dies, it ceases to take in new carbon-14, and the existing isotope decays with a characteristic half-life (5730 years). The proportion of carbon-14 left when the remains of the organism are examined provides an indication of the time elapsed since its death. This makes carbon-14 an ideal dating method to date the age of bones or the remains of an organism. The carbon-14 dating limit lies around 58,000 to 62,000 years.
The rate of creation of carbon-14 appears to be roughly constant, as cross-checks of carbon-14 dating with other dating methods show it gives consistent results. However, local eruptions of volcanoes or other events that give off large amounts of carbon dioxide can reduce local concentrations of carbon-14 and give inaccurate dates. The releases of carbon dioxide into the biosphere as a consequence of industrialization have also depressed the proportion of carbon-14 by a few percent; conversely, the amount of carbon-14 was increased by above-ground nuclear bomb tests that were conducted into the early 1960s. Also, an increase in the solar wind or the Earth's magnetic field above the current value would depress the amount of carbon-14 created in the atmosphere.
This involves inspection of a polished slice of a material to determine the density of "track" markings left in it by the spontaneous fission of uranium-238 impurities. The uranium content of the sample has to be known, but that can be determined by placing a plastic film over the polished slice of the material, and bombarding it with slow neutrons. This causes induced fission of 235U, as opposed to the spontaneous fission of 238U. The fission tracks produced by this process are recorded in the plastic film. The uranium content of the material can then be calculated from the number of tracks and the neutron flux.
This scheme has application over a wide range of geologic dates. For dates up to a few million years micas, tektites (glass fragments from volcanic eruptions), and meteorites are best used. Older materials can be dated using zircon, apatite, titanite, epidote and garnet which have a variable amount of uranium content. Because the fission tracks are healed by temperatures over about 200 °C the technique has limitations as well as benefits. The technique has potential applications for detailing the thermal history of a deposit.
Large amounts of otherwise rare 36Cl (half-life ~300,000 years) were produced by irradiation of seawater during atmospheric detonations of nuclear weapons between 1952 and 1958. The residence time of 36Cl in the atmosphere is about 1 week. Thus, as an event marker of 1950s water in soil and ground water, 36Cl is also useful for dating waters less than 50 years before the present. 36Cl has seen use in other areas of the geological sciences, including dating ice and sediments.
Luminescence dating methods are not radiometric dating methods in that they do not rely on abundances of isotopes to calculate age. Instead, they are a consequence of background radiation on certain minerals. Over time, ionizing radiation is absorbed by mineral grains in sediments and archaeological materials such as quartz and potassium feldspar. The radiation causes charge to remain within the grains in structurally unstable "electron traps". Exposure to sunlight or heat releases these charges, effectively "bleaching" the sample and resetting the clock to zero. The trapped charge accumulates over time at a rate determined by the amount of background radiation at the location where the sample was buried. Stimulating these mineral grains using either light (optically stimulated luminescence or infrared stimulated luminescence dating) or heat (thermoluminescence dating) causes a luminescence signal to be emitted as the stored unstable electron energy is released, the intensity of which varies depending on the amount of radiation absorbed during burial and specific properties of the mineral.
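In the simplest case, the accumulation described above reduces to dividing the accumulated ("equivalent") dose by the burial dose rate; the numbers below are illustrative only:

```python
def luminescence_age(equivalent_dose_gray, dose_rate_gray_per_year):
    """Years since the grains were last "bleached" (clock reset by sunlight
    or heat): accumulated radiation dose divided by the burial dose rate."""
    return equivalent_dose_gray / dose_rate_gray_per_year

# Illustrative values: 30 Gy accumulated at 2 mGy per year.
print(luminescence_age(30.0, 0.002))   # 15000.0 years
```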
These methods can be used to date the age of a sediment layer, as layers deposited on top would prevent the grains from being "bleached" and reset by sunlight. Pottery shards can be dated to the last time they experienced significant heat, generally when they were fired in a kiln.
Other methods include:
Absolute radiometric dating requires a measurable fraction of parent nucleus to remain in the sample rock. For rocks dating back to the beginning of the solar system, this requires extremely long-lived parent isotopes, making measurement of such rocks' exact ages imprecise. To be able to distinguish the relative ages of rocks from such old material, and to get a better time resolution than that available from long-lived isotopes, short-lived isotopes that are no longer present in the rock can be used.
At the beginning of the solar system, there were several relatively short-lived radionuclides like 26Al, 60Fe, 53Mn, and 129I present within the solar nebula. These radionuclides—possibly produced by the explosion of a supernova—are extinct today, but their decay products can be detected in very old material, such as that which constitutes meteorites. By measuring the decay products of extinct radionuclides with a mass spectrometer and using isochron plots, it is possible to determine relative ages of different events in the early history of the solar system. Dating methods based on extinct radionuclides can also be calibrated with the U-Pb method to give absolute ages. Thus both the approximate age and a high time resolution can be obtained. Generally a shorter half-life leads to a higher time resolution at the expense of timescale.
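The relative-age arithmetic can be sketched as follows: because an extinct radionuclide decays exponentially, the initial abundance ratios inferred for two objects fix the time between their formations. The aluminium-26 ratios below are illustrative, not measured values:

```python
import math

def formation_interval(early_ratio, late_ratio, half_life_years):
    """Time separating two objects, from the initial abundance of a
    short-lived radionuclide (relative to a stable isotope) inferred in
    each; the object with the lower inferred ratio formed later."""
    lam = math.log(2) / half_life_years
    return math.log(early_ratio / late_ratio) / lam

# Illustrative: an inclusion with an inferred initial 26Al/27Al of 5e-5
# versus a chondrule with 2.5e-5, using the ~720,000-year 26Al half-life,
# implies the chondrule formed about one half-life later.
dt = formation_interval(5e-5, 2.5e-5, 7.2e5)   # 720,000 years
```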
Iodine-129 beta-decays to xenon-129 with a half-life of 16 million years. The iodine-xenon chronometer is an isochron technique. Samples are exposed to neutrons in a nuclear reactor. This converts the only stable isotope of iodine (iodine-127) into xenon-128 via neutron capture followed by beta decay (of iodine-128). After irradiation, samples are heated in a series of steps and the xenon isotopic signature of the gas evolved in each step is analysed. When a consistent 129Xe/128Xe ratio is observed across several consecutive temperature steps, it can be interpreted as corresponding to a time at which the sample stopped losing xenon.
Samples of a meteorite called Shallowater are usually included in the irradiation to monitor the conversion efficiency from 127I to 128Xe. The difference between the measured 129Xe/128Xe ratios of the sample and Shallowater then corresponds to the different ratios of 129I/127I when they each stopped losing xenon. This in turn corresponds to a difference in age of closure in the early solar system.
Another example of short-lived extinct radionuclide dating is the 26Al–26Mg chronometer, which can be used to estimate the relative ages of chondrules. 26Al decays to 26Mg with a half-life of 720,000 years. The dating is simply a question of finding the deviation from the natural abundance of 26Mg (the product of 26Al decay) in comparison with the ratio of the stable isotopes 27Al/24Mg.
The excess of 26Mg (often designated 26Mg*) is found by comparing the 26Mg/24Mg ratio to that of other Solar System materials.
The 26Al–26Mg chronometer gives an estimate of the time period for formation of primitive meteorites of only a few million years (1.4 million years for chondrule formation).
Rocket
A rocket (from Italian "rocchetto", 'bobbin' or 'spool') is a missile, spacecraft, aircraft or other vehicle that obtains thrust from a rocket engine. Rocket engine exhaust is formed entirely from propellant carried within the rocket. Rocket engines work by action and reaction and push rockets forward simply by expelling their exhaust in the opposite direction at high speed, and can therefore work in the vacuum of space.
In fact, rockets work more efficiently in space than in an atmosphere. Multistage rockets are capable of attaining escape velocity from Earth and therefore can achieve unlimited maximum altitude. Compared with airbreathing engines, rockets are lightweight and powerful and capable of generating large accelerations. To control their flight, rockets rely on momentum, airfoils, auxiliary reaction engines, gimballed thrust, momentum wheels, deflection of the exhaust stream, propellant flow, spin, or gravity.
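The advantage of staging can be sketched with the ideal (Tsiolkovsky) rocket equation, Δv = ve · ln(m0/mf), summed over stages as spent mass is dropped. The masses and exhaust velocities below are hypothetical round numbers, not figures for any real vehicle:

```python
import math

def delta_v(exhaust_velocity: float, mass_initial: float, mass_final: float) -> float:
    """Ideal rocket equation: velocity gain from burning propellant,
    ignoring gravity and drag losses."""
    return exhaust_velocity * math.log(mass_initial / mass_final)

# Hypothetical two-stage vehicle (masses in tonnes, exhaust velocity in m/s).
stages = [
    # (exhaust velocity, stage full mass, stage dry mass)
    (3000.0, 100.0, 10.0),   # first stage
    (4400.0, 20.0, 2.0),     # second stage (vacuum engine, higher ve)
]

total = 0.0
remaining = sum(full for _, full, _ in stages)  # whole stack at liftoff
for ve, full, dry in stages:
    m0 = remaining
    mf = remaining - (full - dry)   # this stage's propellant burned
    total += delta_v(ve, m0, mf)
    remaining = mf - dry            # drop the spent stage
print(round(total))  # → 14290 (m/s), above Earth's ~11,200 m/s escape velocity
```

Dropping empty tankage between burns is what lets the summed Δv exceed what either stage could deliver alone; losses to gravity and drag, neglected here, reduce the real-world figure.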
Rockets for military and recreational uses date back to at least 13th-century China. Significant scientific, interplanetary and industrial use did not occur until the 20th century, when rocketry was the enabling technology for the Space Age, including setting foot on the Earth's moon. Rockets are now used for fireworks, weaponry, ejection seats, launch vehicles for artificial satellites, human spaceflight, and space exploration.
Chemical rockets are the most common type of high power rocket, typically creating a high speed exhaust by the combustion of fuel with an oxidizer. The stored propellant can be a simple pressurized gas or a single liquid fuel that disassociates in the presence of a catalyst (monopropellant), two liquids that spontaneously react on contact (hypergolic propellants), two liquids that must be ignited to react (such as kerosene (RP-1) and liquid oxygen, used in most liquid-propellant rockets), a solid combination of fuel with oxidizer (solid fuel), or solid fuel with liquid or gaseous oxidizer (hybrid propellant system). Chemical rockets store a large amount of energy in an easily released form, and can be very dangerous. However, careful design, testing, construction and use minimizes risks.
The first gunpowder-powered rockets evolved in medieval China under the Song dynasty by the 13th century. The Mongols adopted Chinese rocket technology and the invention spread via the Mongol invasions to the Middle East and to Europe in the mid-13th century. Rockets are recorded in use by the Song navy in a military exercise dated to 1245. Internal-combustion rocket propulsion is mentioned in a reference to 1264, recording that the "ground-rat", a type of firework, had frightened the Empress-Mother Gongsheng at a feast held in her honor by her son the Emperor Lizong. Subsequently, rockets are included in the military treatise "Huolongjing", also known as the Fire Drake Manual, written by the Chinese artillery officer Jiao Yu in the mid-14th century. This text mentions the first known multistage rocket, the 'fire-dragon issuing from the water' (Huo long chu shui), thought to have been used by the Chinese navy.
Medieval and early modern rockets were used militarily as incendiary weapons in sieges. Between 1270 and 1280, Hasan al-Rammah wrote "al-furusiyyah wa al-manasib al-harbiyya" ("The Book of Military Horsemanship and Ingenious War Devices"), which included 107 gunpowder recipes, 22 of them for rockets.
Royal Botanic Gardens, Kew
Royal Botanic Gardens, Kew (brand name Kew) is a non-departmental public body in the United Kingdom sponsored by the Department for Environment, Food and Rural Affairs. An internationally important botanical research and education institution, it employs 1,100 staff. Its board of trustees is chaired by Dame Amelia Fawcett.
The organisation manages botanic gardens at Kew in Richmond upon Thames in southwest London, and at Wakehurst, a National Trust property in Sussex which is home to the internationally important Millennium Seed Bank, whose scientists work with partner organisations in more than 95 countries. Kew, jointly with the Forestry Commission, founded Bedgebury National Pinetum in Kent in 1923, specialising in growing conifers. In 1994 the Castle Howard Arboretum Trust, which runs the Yorkshire Arboretum, was formed as a partnership between Kew and the Castle Howard Estate.
In 2018 the organisation had 1,858,513 public visitors at Kew, and 354,957 at Wakehurst. Its site at Kew has 40 historically important buildings; it became a UNESCO World Heritage Site on 3 July 2003. The collections at Kew and Wakehurst include over 27,000 taxa of living plants, 8.3 million plant and fungal herbarium specimens, and over 40,000 species in the seed bank.
The Royal Botanic Gardens, Kew states that its mission is to apply scientific discovery and research to fully develop the information about and potential uses of plants and fungi.
Kew is governed by a board of trustees which comprises a chairman and eleven members. Ten members and the chairman are appointed by the Secretary of State for Environment, Food and Rural Affairs. Her Majesty the Queen appoints her own trustee on the recommendation of the Secretary of State. The Board members are:
There are approximately 350 researchers working at Kew. The Director of Science is Professor Alexandre Antonelli. Professor Monique Simmonds is Deputy Director of Science. Professor Mark Chase is Senior Research Professor. Professor Phil Stevenson is the Senior Research Leader and Head of the Biological Chemistry and In Vitro Research. The group has four Research Leaders, Dr Melanie Howes, Dr Vis Sarasan, Dr Moses Langat and Dr Tom Prescott.
The scientific staff at Kew maintain a variety of plant and fungal data and digital resources, including:
Plants of the World Online is an online database launched in March 2017 as one of nine strategic outputs, with the ultimate aim being "to enable users to access information on all the world's known seed-bearing plants by 2020". It links taxonomic data with images from the collection to provide a single point of access with information on identification, distribution, traits, conservation, molecular phylogenies and uses. In addition, it serves as a backbone for global resources such as World Flora Online.
The International Plant Names Index (IPNI) includes information from the "Index Kewensis", a project which began in the 19th century to provide an "Index to the Names and Authorities of all known flowering plants and their countries". The Harvard University Herbaria and the Australian National Herbarium co-operate with Kew in the IPNI database, which was launched in its present form in 1999 to produce an authoritative source of information on botanical nomenclature, including publication details of seed plants, ferns and lycophytes. It is a nomenclatural listing of all published taxonomic plant names, including new species, new combinations and new names, from the rank of family down to infraspecific ranks. It provides data for other related projects, including Tropicos and GBIF.
Information and key to flowering plants of the Neotropics (tropical South and Central America).
The World Checklist of Selected Plant Families (WCSP) is a register of accepted scientific names and synonyms of 200 selected seed plant families. WCSP is widely used, and most authoritative web resources on plants use it as their basis.
The World Checklist of Vascular Plants (WCVP) includes all known vascular plant species (flowering plants, conifers, ferns, clubmosses and firmosses). It is derived from the WCSP and the IPNI and therefore only includes names found in both. It is the taxonomic database for Plants of the World Online. Since WCSP includes only selected families, WCVP will seek to complete the process.
A checklist of 40,292 species, including nine non-plant taxa (e.g. nostoc, forkweed, brown algae), compiled from multiple pre-existing datasets.
Kew also cooperated with the Missouri Botanical Garden and other international bodies in The Plant List (TPL). Unlike the IPNI, it provides information on which names are currently accepted. The Plant List is an Internet encyclopedia project which was launched in 2010 to compile a comprehensive list of botanical nomenclature. The Plant List has 1,064,035 scientific plant names of species rank, of which 350,699 are accepted species names. In addition, the list has 642 plant families and 17,020 plant genera. It was last updated in 2013, and was superseded by World Flora Online.
World Flora Online was developed in 2012 as a successor to The Plant List, aiming to include all known plants by 2020.
Robert Penn Warren
Robert Penn Warren (April 24, 1905 – September 15, 1989) was an American poet, novelist, and literary critic and was one of the founders of New Criticism. He was also a charter member of the Fellowship of Southern Writers. He founded the literary journal "The Southern Review" with Cleanth Brooks in 1935. He received the 1947 Pulitzer Prize for the Novel for "All the King's Men" (1946) and the Pulitzer Prize for Poetry in 1958 and 1979. He is the only person to have won Pulitzer Prizes for both fiction and poetry.
Warren was born in Guthrie, Kentucky, very near the Tennessee-Kentucky border, to Robert Warren and Anna Penn. Warren's mother's family had roots in Virginia, having given their name to the community of Penn's Store in Patrick County, Virginia, and she was a descendant of Revolutionary War soldier Colonel Abram Penn.
Robert Penn Warren graduated from Clarksville High School in Clarksville, Tennessee; Vanderbilt University ("summa cum laude", Phi Beta Kappa) in 1925; and the University of California, Berkeley (M.A.) in 1926. Warren pursued further graduate study at Yale University from 1927 to 1928 and obtained his B.Litt. as a Rhodes Scholar from New College, Oxford, in England in 1930. He also received a Guggenheim Fellowship to study in Italy during the rule of Benito Mussolini. That same year he began his teaching career at Southwestern College (now Rhodes College) in Memphis, Tennessee.
While still an undergraduate at Vanderbilt University, Warren became associated with the group of poets there known as the Fugitives, and somewhat later, during the early 1930s, Warren and some of the same writers formed a group known as the Southern Agrarians. He contributed "The Briar Patch" to the Agrarian manifesto "I'll Take My Stand" along with 11 other Southern writers and poets (including fellow Vanderbilt poet/critics John Crowe Ransom, Allen Tate, and Donald Davidson). In "The Briar Patch" the young Warren defends racial segregation, in line with the political leanings of the Agrarian group, although Davidson deemed Warren's stances in the essay so progressive that he argued for excluding it from the collection. However, Warren recanted these views in an article on the civil rights movement, "Divided South Searches Its Soul", which appeared in the July 9, 1956 issue of "Life" magazine. A month later, Warren published an expanded version of the article as a small book titled "Segregation: The Inner Conflict in the South". He subsequently adopted a high profile as a supporter of racial integration. In 1965, he published "Who Speaks for the Negro?", a collection of interviews with black civil rights leaders including Malcolm X and Martin Luther King, thus further distinguishing his political leanings from the more conservative philosophies associated with fellow Agrarians such as Tate, Cleanth Brooks, and particularly Davidson. Warren's interviews with civil rights leaders are at the Louie B. Nunn Center for Oral History at the University of Kentucky.
Warren's best-known work is "All the King's Men", a novel that won the Pulitzer Prize in 1947. Main character Willie Stark resembles Huey Pierce Long (1893–1935), the radical populist governor of Louisiana whom Warren was able to observe closely while teaching at Louisiana State University in Baton Rouge from 1933 to 1942. "All the King's Men" became a highly successful film, starring Broderick Crawford and winning the Academy Award for Best Picture in 1949. A 2006 film adaptation by writer/director Steven Zaillian featured Sean Penn as Willie Stark and Jude Law as Jack Burden. The opera "Willie Stark" by Carlisle Floyd, to his own libretto based on the novel, was first performed in 1981.
Warren served as the Consultant in Poetry to the Library of Congress, 1944–1945 (later termed Poet Laureate), and won two Pulitzer Prizes in poetry, in 1958 for "Promises: Poems 1954–1956" and in 1979 for "Now and Then". "Promises" also won the annual National Book Award for Poetry.
In 1974, the National Endowment for the Humanities selected him for the Jefferson Lecture, the U.S. federal government's highest honor for achievement in the humanities. Warren's lecture was entitled "Poetry and Democracy" (subsequently published under the title "Democracy and Poetry"). In 1977, Warren was awarded the St. Louis Literary Award from the Saint Louis University Library Associates. In 1980, Warren was presented with the Presidential Medal of Freedom by President Jimmy Carter. In 1981, Warren was selected as a MacArthur Fellow and later was named as the first U.S. Poet Laureate Consultant in Poetry on February 26, 1986. In 1987, he was awarded the National Medal of Arts.
Warren was co-author, with Cleanth Brooks, of "Understanding Poetry", an influential literature textbook. It was followed by other similarly co-authored textbooks, including "Understanding Fiction", which was praised by Southern Gothic and Roman Catholic writer Flannery O'Connor, and "Modern Rhetoric", which adopted what can be called a New Critical perspective.
His first marriage was to Emma Brescia. His second marriage was in 1952 to Eleanor Clark, with whom he had two children, Rosanna Phelps Warren (born 1953) and Gabriel Penn Warren (born 1955). During his tenure at Louisiana State University he resided at Twin Oaks (otherwise known as the Robert Penn Warren House) in Prairieville, Louisiana. He lived the latter part of his life in Fairfield, Connecticut, and Stratton, Vermont where he died of complications from prostate cancer. He is buried at Stratton, Vermont, and, at his request, a memorial marker is situated in the Warren family gravesite in Guthrie, Kentucky.
In April 2005, the United States Postal Service issued a commemorative stamp to mark the 100th anniversary of Warren's birth. Introduced at the post office in his native Guthrie, it depicts the author as he appeared in a 1948 photograph, with a background scene of a political rally designed to evoke the setting of "All the King's Men". His son and daughter, Gabriel and Rosanna Warren, were in attendance.
Vanderbilt University houses the Robert Penn Warren Center for the Humanities, which is sponsored by the College of Arts and Science. It began its programs in January 1988, and in 1989 received a $480,000 Challenge Grant from the National Endowment for the Humanities. The center promotes "interdisciplinary research and study in the humanities, social sciences, and natural sciences."
The high school that Robert Penn Warren attended, Clarksville High School (Tennessee), was renovated into an apartment complex in 1982. The original name of the apartments was changed to The Penn Warren in 2010.
Rudyard Kipling
Joseph Rudyard Kipling ( ; 30 December 1865 – 18 January 1936) was an English journalist, short-story writer, poet, and novelist. He was born in India, which inspired much of his work.
Kipling's works of fiction include "The Jungle Book" (1894), "Kim" (1901), and many short stories, including "The Man Who Would Be King" (1888). His poems include "Mandalay" (1890), "Gunga Din" (1890), "The Gods of the Copybook Headings" (1919), "The White Man's Burden" (1899), and "If—" (1910). He is seen as an innovator in the art of the short story. His children's books are classics; one critic noted "a versatile and luminous narrative gift."
Kipling in the late 19th and early 20th centuries was among the United Kingdom's most popular writers. Henry James said, "Kipling strikes me personally as the most complete man of genius, as distinct from fine intelligence, that I have ever known." In 1907, he was awarded the Nobel Prize in Literature, as the first English-language writer to receive the prize, and at 41, its youngest recipient to date. He was also sounded out for the British Poet Laureateship and several times for a knighthood, but declined both. Following his death in 1936, his ashes were interred at Poets' Corner, part of the South Transept of Westminster Abbey.
Kipling's subsequent reputation has changed with the political and social climate of the age. The contrasting views of him continued for much of the 20th century. George Orwell saw Kipling as "a jingo imperialist," who was "morally insensitive and aesthetically disgusting." Literary critic Douglas Kerr wrote: "[Kipling] is still an author who can inspire passionate disagreement and his place in literary and cultural history is far from settled. But as the age of the European empires recedes, he is recognised as an incomparable, if controversial, interpreter of how empire was experienced. That, and an increasing recognition of his extraordinary narrative gifts, make him a force to be reckoned with."
Rudyard Kipling was born on 30 December 1865 in Bombay, in the Bombay Presidency of British India, to Alice Kipling (née MacDonald) and John Lockwood Kipling. Alice (one of the four noted MacDonald sisters) was a vivacious woman, of whom Lord Dufferin would say, "Dullness and Mrs Kipling cannot exist in the same room." John Lockwood Kipling, a sculptor and pottery designer, was the Principal and Professor of Architectural Sculpture at the newly founded Sir Jamsetjee Jeejebhoy School of Art in Bombay.
John Lockwood and Alice had met in 1863 and courted at Rudyard Lake in Rudyard, Staffordshire, England. They married and moved to India in 1865. They had been so moved by the beauty of the Rudyard Lake area that they named their first child after it. Two of Alice's sisters were married to artists: Georgiana to the painter Edward Burne-Jones, and her sister Agnes to Edward Poynter. Kipling's most prominent relative was his first cousin, Stanley Baldwin, who was Conservative Prime Minister three times in the 1920s and 1930s.
Kipling's birth home on the campus of the J. J. School of Art in Bombay was for many years used as the Dean's residence. Although a cottage bears a plaque noting it as his birth site, the original one may have been torn down and replaced decades ago. Some historians and conservationists take the view that the bungalow marks a site merely close to the home of Kipling's birth, as it was built in 1882 – about 15 years after Kipling was born. Kipling seems to have said as much to the Dean when visiting J. J. School in the 1930s.
Kipling wrote of Bombay:
Mother of Cities to me,
For I was born in her gate,
Between the palms and the sea,
Where the world-end steamers wait.
According to Bernice M. Murphy, "Kipling's parents considered themselves 'Anglo-Indians' [a term used in the 19th century for people of British origin living in India] and so too would their son, though he spent the bulk of his life elsewhere. Complex issues of identity and national allegiance would become prominent in his fiction."
Kipling referred to such conflicts. For example: "In the afternoon heats before we took our sleep, she (the Portuguese "ayah", or nanny) or Meeta (the Hindu "bearer", or male attendant) would tell us stories and Indian nursery songs all unforgotten, and we were sent into the dining-room after we had been dressed, with the caution 'Speak English now to Papa and Mamma.' So one spoke 'English', haltingly translated out of the vernacular idiom that one thought and dreamed in."
Kipling's days of "strong light and darkness" in Bombay ended when he was five. As was the custom in British India, he and his three-year-old sister Alice ("Trix") were taken to the United Kingdom – in their case to Southsea, Portsmouth – to live with a couple who boarded children of British nationals living abroad. For the next six years (from October 1871 to April 1877), the children lived with the couple – Captain Pryse Agar Holloway, once an officer in the merchant navy, and Sarah Holloway – at their house, Lorne Lodge, 4 Campbell Road, Southsea.
In his autobiography published 65 years later, Kipling recalled the stay with horror, and wondered if the combination of cruelty and neglect which he experienced there at the hands of Mrs Holloway might not have hastened the onset of his literary life: "If you cross-examine a child of seven or eight on his day's doings (specially when he wants to go to sleep) he will contradict himself very satisfactorily. If each contradiction be set down as a lie and retailed at breakfast, life is not easy. I have known a certain amount of bullying, but this was calculated torture – religious as well as scientific. Yet it made me give attention to the lies I soon found it necessary to tell: and this, I presume, is the foundation of literary effort."
Trix fared better at Lorne Lodge; Mrs Holloway apparently hoped that Trix would eventually marry the Holloways' son. The two Kipling children, however, had no relatives in England they could visit, except that they spent a month each Christmas with a maternal aunt Georgiana ("Georgy") and her husband, Edward Burne-Jones, at their house, The Grange, in Fulham, London, which Kipling called "a paradise which I verily believe saved me."
In the spring of 1877, Alice returned from India and removed the children from Lorne Lodge. Kipling remembers, "Often and often afterwards, the beloved Aunt would ask me why I had never told any one how I was being treated. Children tell little more than animals, for what comes to them they accept as eternally established. Also, badly-treated children have a clear notion of what they are likely to get if they betray the secrets of a prison-house before they are clear of it."
Alice took the children during the spring of 1877 to Goldings Farm at Loughton, where a carefree summer and autumn were spent on the farm and adjoining Forest, some of the time with Stanley Baldwin. In January 1878, Kipling was admitted to the United Services College at Westward Ho!, Devon, a school recently founded to prepare boys for the army. It proved rough going for him at first, but later led to firm friendships and provided the setting for his schoolboy stories "Stalky & Co." (1899). While there, Kipling met and fell in love with Florence Garrard, who was boarding with Trix at Southsea (to which Trix had returned). Florence became the model for Maisie in Kipling's first novel, "The Light That Failed" (1891).
Near the end of his schooling, it was decided that Kipling did not have the academic ability to get into Oxford University on a scholarship. His parents lacked the wherewithal to finance him, and so Kipling's father obtained him a job in Lahore, where the father served as Principal of the Mayo College of Art and Curator of the Lahore Museum. Kipling was to be assistant editor of a local newspaper, the "Civil and Military Gazette".
He sailed for India on 20 September 1882 and arrived in Bombay on 18 October. He described the moment years later: "So, at sixteen years and nine months, but looking four or five years older, and adorned with real whiskers which the scandalised Mother abolished within one hour of beholding, I found myself at Bombay where I was born, moving among sights and smells that made me deliver in the vernacular sentences whose meaning I knew not. Other Indian-born boys have told me how the same thing happened to them." This arrival changed Kipling, as he explains: "There were yet three or four days' rail to Lahore, where my people lived. After these, my English years fell away, nor ever, I think, came back in full strength."
From 1883 to 1889, Kipling worked in British India for local newspapers such as the "Civil and Military Gazette" in Lahore and "The Pioneer" in Allahabad.
The former, which was the newspaper Kipling was to call his "mistress and most true love," appeared six days a week throughout the year, except for one-day breaks for Christmas and Easter. Stephen Wheeler, the editor, worked Kipling hard, but Kipling's need to write was unstoppable. In 1886, he published his first collection of verse, "Departmental Ditties." That year also brought a change of editors at the newspaper; Kay Robinson, the new editor, allowed more creative freedom and Kipling was asked to contribute short stories to the newspaper.
In an article printed in the "Chums" boys' annual, an ex-colleague of Kipling's stated that "he never knew such a fellow for ink – he simply revelled in it, filling up his pen viciously, and then throwing the contents all over the office, so that it was almost dangerous to approach him." The anecdote continues: "In the hot weather when he (Kipling) wore only white trousers and a thin vest, he is said to have resembled a Dalmatian dog more than a human being, for he was spotted all over with ink in every direction."
In the summer of 1883, Kipling visited Shimla, then Simla, a well-known hill station and the summer capital of British India. By then it was the practice for the Viceroy of India and government to move to Simla for six months, and the town became a "centre of power as well as pleasure." Kipling's family became annual visitors to Simla, and Lockwood Kipling was asked to serve in Christ Church there. Rudyard Kipling returned to Simla for his annual leave each year from 1885 to 1888, and the town featured prominently in many stories he wrote for the "Gazette". "My month's leave at Simla, or whatever Hill Station my people went to, was pure joy – every golden hour counted. It began in heat and discomfort, by rail and road. It ended in the cool evening, with a wood fire in one's bedroom, and next morn – thirty more of them ahead! – the early cup of tea, the Mother who brought it in, and the long talks of us all together again. One had leisure to work, too, at whatever play-work was in one's head, and that was usually full."
Back in Lahore, 39 of his stories appeared in the "Gazette" between November 1886 and June 1887. Kipling included most of them in "Plain Tales from the Hills", his first prose collection, published in Calcutta in January 1888, a month after his 22nd birthday. Kipling's time in Lahore, however, had come to an end. In November 1887, he was moved to the "Gazette"s larger sister newspaper, "The Pioneer", in Allahabad in the United Provinces, where he worked as assistant editor and lived in Belvedere House from 1888 to 1889.
Kipling's writing continued at a frenetic pace. In 1888, he published six collections of short stories: "Soldiers Three", "The Story of the Gadsbys", "In Black and White", "Under the Deodars", "The Phantom Rickshaw", and "Wee Willie Winkie". These contain a total of 41 stories, some quite long. In addition, as "The Pioneer"s special correspondent in the western region of Rajputana, he wrote many sketches that were later collected in "Letters of Marque" and published in "From Sea to Sea and Other Sketches, Letters of Travel".
Kipling was discharged from "The Pioneer" in early 1889 after a dispute. By this time, he had been increasingly thinking of his future. He sold the rights to his six volumes of stories for £200 and a small royalty, and the "Plain Tales" for £50; in addition, he received six months' salary from "The Pioneer", "in lieu" of notice.
Kipling decided to use the money to move to London, as the literary centre of the British Empire. On 9 March 1889, he left India, travelling first to San Francisco via Rangoon, Singapore, Hong Kong, and Japan. Kipling was favourably impressed by Japan, calling its people "gracious folk and fair manners."
Kipling later wrote that he "had lost his heart" to a geisha whom he called O-Toyo, writing while in the United States during the same trip across the Pacific, "I had left the innocent East far behind... Weeping softly for O-Toyo... O-Toyo was a darling." Kipling then travelled through the United States, writing articles for "The Pioneer" that were later published in "From Sea to Sea and Other Sketches, Letters of Travel".
Starting his North American travels in San Francisco, Kipling went north to Portland, Oregon, then Seattle, Washington, up to Victoria and Vancouver, British Columbia, through Medicine Hat, Alberta, back into the US to Yellowstone National Park, down to Salt Lake City, then east to Omaha, Nebraska and on to Chicago, Illinois, then to Beaver, Pennsylvania on the Ohio River to visit the Hill family. From there, he went to Chautauqua with Professor Hill, and later to Niagara Falls, Toronto, Washington, D.C., New York, and Boston.
In the course of this journey he met Mark Twain in Elmira, New York, and was deeply impressed. Kipling arrived unannounced at Twain's home, and later wrote that as he rang the doorbell, "It occurred to me for the first time that Mark Twain might possibly have other engagements other than the entertainment of escaped lunatics from India, be they ever so full of admiration."
As it was, Twain gladly welcomed Kipling and had a two-hour conversation with him on trends in Anglo-American literature and about what Twain was going to write in a sequel to "Tom Sawyer", with Twain assuring Kipling that a sequel was coming, although he had not decided upon the ending: either Sawyer would be elected to Congress or he would be hanged. Twain also passed along the literary advice that an author should "get your facts first and then you can distort 'em as much as you please." Twain, who rather liked Kipling, later wrote of their meeting: "Between us, we cover all knowledge; he covers all that can be known and I cover the rest." Kipling then crossed the Atlantic to Liverpool in October 1889. He soon made his début in the London literary world, to great acclaim.
In London, Kipling had several stories accepted by magazines. He found a place to live for the next two years at Villiers Street, near Charing Cross (in a building subsequently named Kipling House):
Meantime, I had found me quarters in Villiers Street, Strand, which forty-six years ago was primitive and passionate in its habits and population. My rooms were small, not over-clean or well-kept, but from my desk I could look out of my window through the fanlight of Gatti's Music-Hall entrance, across the street, almost on to its stage. The Charing Cross trains rumbled through my dreams on one side, the boom of the Strand on the other, while, before my windows, Father Thames under the Shot tower walked up and down with his traffic.
In the next two years, he published a novel, "The Light That Failed", had a nervous breakdown, and met an American writer and publishing agent, Wolcott Balestier, with whom he collaborated on a novel, "The Naulahka" (a title which he uncharacteristically misspelt; see below). In 1891, as advised by his doctors, Kipling took another sea voyage, to South Africa, Australia, New Zealand, and once again India. He cut short his plans to spend Christmas with his family in India when he heard of Balestier's sudden death from typhoid fever and decided to return to London immediately. Before his return, he had used the telegram to propose to and be accepted by Wolcott's sister Caroline Starr Balestier (1862–1939), called "Carrie", whom he had met a year earlier, and with whom he had apparently been having an intermittent romance. Meanwhile, late in 1891, a collection of his short stories on the British in India, "Life's Handicap", was published in London.
On 18 January 1892, Carrie Balestier (aged 29) and Rudyard Kipling (aged 26) married in London, in the "thick of an influenza epidemic, when the undertakers had run out of black horses and the dead had to be content with brown ones." The wedding was held at All Souls Church, Langham Place. Henry James gave the bride away.
Kipling and his wife settled upon a honeymoon that took them first to the United States (including a stop at the Balestier family estate near Brattleboro, Vermont) and then to Japan. On arriving in Yokohama, they discovered that their bank, The New Oriental Banking Corporation, had failed. Taking this loss in their stride, they returned to the US, back to Vermont – Carrie by this time was pregnant with their first child – and rented a small cottage on a farm near Brattleboro for $10 a month. According to Kipling, "We furnished it with a simplicity that fore-ran the hire-purchase system. We bought, second or third hand, a huge, hot-air stove which we installed in the cellar. We cut generous holes in our thin floors for its eight-inch [20 cm] tin pipes (why we were not burned in our beds each week of the winter I never can understand) and we were extraordinarily and self-centredly content."
In this house, which they called "Bliss Cottage", their first child, Josephine, was born "in three-foot of snow on the night of 29 December 1892. Her Mother's birthday being the 31st and mine the 30th of the same month, we congratulated her on her sense of the fitness of things..."
It was also in this cottage that the first dawnings of "The Jungle Books" came to Kipling: "The workroom in the Bliss Cottage was seven feet by eight, and from December to April, the snow lay level with its window-sill. It chanced that I had written a tale about Indian Forestry work which included a boy who had been brought up by wolves. In the stillness, and suspense, of the winter of '92 some memory of the Masonic Lions of my childhood's magazine, and a phrase in Haggard's "Nada the Lily", combined with the echo of this tale. After blocking out the main idea in my head, the pen took charge, and I watched it begin to write stories about Mowgli and animals, which later grew into the two "Jungle Books"."
With Josephine's arrival, "Bliss Cottage" was felt to be congested, so eventually the couple bought land – on a rocky hillside overlooking the Connecticut River – from Carrie's brother Beatty Balestier and built their own house. Kipling named this Naulakha, in honour of Wolcott and of their collaboration, and this time the name was spelt correctly. From his early years in Lahore (1882–87), Kipling had become enamoured of Mughal architecture, especially the Naulakha pavilion situated in Lahore Fort, which eventually inspired the title of his novel as well as the house. The house still stands on Kipling Road, three miles (5 km) north of Brattleboro in Dummerston, Vermont: a big, secluded, dark-green house, with shingled roof and sides, which Kipling called his "ship," and which brought him "sunshine and a mind at ease." His seclusion in Vermont, combined with his healthy "sane clean life," made Kipling both inventive and prolific.
In a mere four years he produced, along with the "Jungle Books", a book of short stories ("The Day's Work"), a novel ("Captains Courageous"), and a profusion of poetry, including the volume "The Seven Seas". The collection "Barrack-Room Ballads", most of whose poems had first been published individually in 1890, was issued in March 1892 and contained "Mandalay" and "Gunga Din." He especially enjoyed writing the "Jungle Books" and also corresponding with the many children who wrote to him about them.
The writing life in "Naulakha" was occasionally interrupted by visitors, including his father, who visited soon after his retirement in 1893, and the British writer Arthur Conan Doyle, who brought his golf clubs, stayed for two days, and gave Kipling an extended golf lesson. Kipling seemed to take to golf, occasionally practising with the local Congregational minister and even playing with red-painted balls when the ground was covered in snow. However, winter golf was "not altogether a success because there were no limits to a drive; the ball might skid two miles (3 km) down the long slope to Connecticut river."
Kipling loved the outdoors, not least of whose marvels in Vermont was the turning of the leaves each fall. He described this moment in a letter: "A little maple began it, flaming blood-red of a sudden where he stood against the dark green of a pine-belt. Next morning there was an answering signal from the swamp where the sumacs grow. Three days later, the hill-sides as fast as the eye could range were afire, and the roads paved, with crimson and gold. Then a wet wind blew, and ruined all the uniforms of that gorgeous army; and the oaks, who had held themselves in reserve, buckled on their dull and bronzed cuirasses and stood it out stiffly to the last blown leaf, till nothing remained but pencil-shadings of bare boughs, and one could see into the most private heart of the woods."
In February 1896, Elsie Kipling, the couple's second daughter, was born. By this time, according to several biographers, their marital relationship was no longer light-hearted and spontaneous. Although they would always remain loyal to each other, they seemed now to have fallen into set roles. In a letter to a friend who had become engaged around this time, the 30-year-old Kipling offered this sombre counsel: marriage principally taught "the tougher virtues – such as humility, restraint, order, and forethought." Later in the same year, he temporarily taught at Bishop's College School in Quebec, Canada.
The Kiplings loved life in Vermont and might have lived out their lives there, were it not for two incidents – one of global politics, the other of family discord. By the early 1890s, the United Kingdom and Venezuela were in a border dispute involving British Guiana. The US had made several offers to arbitrate, but in 1895, the new American Secretary of State Richard Olney upped the ante by arguing for the American "right" to arbitrate on grounds of sovereignty on the continent (see the Olney interpretation as an extension of the Monroe Doctrine). This raised hackles in Britain, and the situation grew into a major Anglo-American crisis, with talk of war on both sides.
Although the crisis eased into greater US–British cooperation, Kipling was bewildered by what he felt was persistent anti-British sentiment in the US, especially in the press. He wrote in a letter that it felt like being "aimed at with a decanter across a friendly dinner table." By January 1896, he had decided to end his family's "good wholesome life" in the US and seek their fortunes elsewhere.
A family dispute became the final straw. For some time, relations between Carrie and her brother Beatty Balestier had been strained, owing to his drinking and insolvency. In May 1896, an inebriated Beatty encountered Kipling on the street and threatened him with physical harm. The incident led to Beatty's eventual arrest, but in the subsequent hearing and the resulting publicity, Kipling's privacy was destroyed, and he was left feeling miserable and exhausted. In July 1896, a week before the hearing was to resume, the Kiplings packed their belongings, left the United States and returned to England.
By September 1896, the Kiplings were in Torquay, Devon, on the south-western coast of England, in a hillside home overlooking the English Channel. Although Kipling did not much care for his new house, whose design, he claimed, left its occupants feeling dispirited and gloomy, he managed to remain productive and socially active.
Kipling was now a famous man, and in the previous two or three years had increasingly been making political pronouncements in his writings. The Kiplings had welcomed their first son, John, in August 1897. Kipling had begun work on two poems, "Recessional" (1897) and "The White Man's Burden" (1899), which were to create controversy when published. Regarded by some as anthems for enlightened and duty-bound empire-building (capturing the mood of the Victorian era), the poems were seen by others as propaganda for brazen-faced imperialism and its attendant racial attitudes; still others saw irony in the poems and warnings of the perils of empire.
Take up the White Man's burden—
Send forth the best ye breed—
Go, bind your sons to exile
To serve your captives' need;
To wait, in heavy harness,
On fluttered folk and wild—
Your new-caught sullen peoples,
Half devil and half child.
—"The White Man's Burden"
There was also foreboding in the poems, a sense that all could yet come to naught.
Far-called, our navies melt away;
On dune and headland sinks the fire:
Lo, all our pomp of yesterday
Is one with Nineveh and Tyre!
Judge of the Nations, spare us yet.
Lest we forget – lest we forget!
—"Recessional"
A prolific writer during his time in Torquay, he also wrote "Stalky & Co.", a collection of school stories (born of his experience at the United Services College in Westward Ho!), whose juvenile protagonists display a know-it-all, cynical outlook on patriotism and authority. According to his family, Kipling enjoyed reading aloud stories from "Stalky & Co." to them and often went into spasms of laughter over his own jokes.
In early 1898, the Kiplings travelled to South Africa for their winter holiday, so beginning an annual tradition which (except the following year) would last until 1908. They would stay in "The Woolsack," a house on Cecil Rhodes's estate at Groote Schuur (now a student residence for the University of Cape Town), within walking distance of Rhodes' mansion.
With his new reputation as "Poet of the Empire", Kipling was warmly received by some of the influential politicians of the Cape Colony, including Rhodes, Sir Alfred Milner, and Leander Starr Jameson. Kipling cultivated their friendship and came to admire the men and their politics. The period 1898–1910 was crucial in the history of South Africa and included the Second Boer War (1899–1902), the ensuing peace treaty, and the 1910 formation of the Union of South Africa. Back in England, Kipling wrote poetry in support of the British cause in the Boer War and on his next visit to South Africa in early 1900, became a correspondent for "The Friend" newspaper in Bloemfontein, which had been commandeered by Lord Roberts for British troops.
Although his journalistic stint was to last only two weeks, it was Kipling's first work on a newspaper staff since he left "The Pioneer" in Allahabad more than ten years before. At "The Friend", he made lifelong friendships with Perceval Landon, H. A. Gwynne, and others. He also wrote articles published more widely expressing his views on the conflict. Kipling penned an inscription for the Honoured Dead Memorial (Siege memorial) in Kimberley.
In 1897, Kipling moved from Torquay to Rottingdean, East Sussex – first to "North End House" and then to "The Elms". In 1902, Kipling bought Bateman's, a house built in 1634 and located in rural Burwash.
Bateman's was Kipling's home from 1902 until his death in 1936. The house, its surrounding buildings, and the mill were bought for £9,300. It had no bathroom, no running water upstairs and no electricity, but Kipling loved it: "Behold us, lawful owners of a grey stone lichened house – A.D. 1634 over the door – beamed, panelled, with old oak staircase, and all untouched and unfaked. It is a good and peaceable place. We have loved it ever since our first sight of it" (from a November 1902 letter).
In the non-fiction realm, he became involved in the debate over the British response to the rise in German naval power known as the Tirpitz Plan, to build a fleet to challenge the Royal Navy, publishing a series of articles in 1898 collected as "A Fleet in Being". On a visit to the United States in 1899, Kipling and his daughter Josephine developed pneumonia, from which she eventually died.
In the wake of his daughter's death, Kipling concentrated on collecting material for what became "Just So Stories for Little Children", published in 1902, the year after "Kim". The American literary scholar David Scott has argued that "Kim" disproves Edward Said's claim that Kipling was a promoter of Orientalism, since Kipling – who was deeply interested in Buddhism – presented Tibetan Buddhism in a fairly sympathetic light, and aspects of the novel appear to reflect a Buddhist understanding of the universe. Kipling was offended by the German Emperor Wilhelm II's "Hun speech" of 1900, which urged German troops being sent to China to crush the Boxer Rebellion to behave like "Huns" and take no prisoners.
In a 1902 poem, "The Rowers", Kipling attacked the Kaiser as a threat to Britain, making the first use of the term "Hun" as an anti-German insult and using Wilhelm's own words and the actions of German troops in China to portray Germans as essentially barbarian. In an interview with the French newspaper "Le Figaro", the Francophile Kipling called Germany a menace and urged an Anglo-French alliance to stop it. In another letter of the same period, Kipling described the "unfrei" peoples of Central Europe as living in "the Middle Ages with machine guns."
Kipling wrote a number of speculative fiction short stories, including "The Army of a Dream," in which he sought to show a more efficient and responsible army than the hereditary bureaucracy of England at the time, and two science fiction stories: "With the Night Mail" (1905) and "As Easy As A.B.C." (1912). Both were set in the 21st century in Kipling's Aerial Board of Control universe. They read like modern hard science fiction, and introduced the literary technique known as indirect exposition, which would later become one of science fiction writer Robert Heinlein's hallmarks. This technique is one that Kipling picked up in India, and used to solve the problem of his English readers not understanding much about Indian society, when writing "The Jungle Book".
In 1907, he was awarded the Nobel Prize for Literature, having been nominated in that year by Charles Oman, professor at the University of Oxford. The prize citation said it was "in consideration of the power of observation, originality of imagination, virility of ideas and remarkable talent for narration which characterize the creations of this world-famous author." Nobel prizes had been established in 1901 and Kipling was the first English-language recipient. At the award ceremony in Stockholm on 10 December 1907, the Permanent Secretary of the Swedish Academy, Carl David af Wirsén, praised both Kipling and three centuries of English literature:
The Swedish Academy, in awarding the Nobel Prize in Literature this year to Rudyard Kipling, desires to pay a tribute of homage to the literature of England, so rich in manifold glories, and to the greatest genius in the realm of narrative that that country has produced in our times.
To "book-end" this achievement came the publication of two connected poetry and story collections: "Puck of Pook's Hill" (1906), and "Rewards and Fairies" (1910). The latter contained the poem "If—." In a 1995 BBC opinion poll, it was voted the UK's favourite poem. This exhortation to self-control and stoicism is arguably Kipling's most famous poem.
Such was Kipling's popularity that he was asked by his friend Max Aitken to intervene in the 1911 Canadian election on behalf of the Conservatives. In 1911, the major issue in Canada was a reciprocity treaty with the United States signed by the Liberal Prime Minister Sir Wilfrid Laurier and vigorously opposed by the Conservatives under Sir Robert Borden. On 7 September 1911, the "Montreal Daily Star" newspaper published a front-page appeal against the agreement by Kipling, who wrote: "It is her own soul that Canada risks today. Once that soul is pawned for any consideration, Canada must inevitably conform to the commercial, legal, financial, social, and ethical standards which will be imposed on her by the sheer admitted weight of the United States." At the time, the "Montreal Daily Star" was Canada's most read newspaper. Over the next week, Kipling's appeal was reprinted in every English newspaper in Canada and is credited with helping to turn Canadian public opinion against the Liberal government.
Kipling sympathised with the anti-Home Rule stance of Irish Unionists, who opposed Irish autonomy. He was friends with Edward Carson, the Dublin-born leader of Ulster Unionism, who raised the Ulster Volunteers to prevent Home Rule in Ireland. Kipling wrote in a letter to a friend that Ireland was not a nation, and that before the English arrived in 1169, the Irish were a gang of cattle thieves living in savagery and killing each other while "writing dreary poems" about it all. In his view, it was only British rule that allowed Ireland to advance. A visit to Ireland in 1911 confirmed Kipling's prejudices. He wrote that the Irish countryside was beautiful but spoiled by what he called the ugly homes of Irish farmers, adding that God had made the Irish into poets, having "deprived them of love of line or knowledge of colour." In contrast, Kipling had nothing but praise for the "decent folk" of the Protestant minority and Unionist Ulster, free from the threat of "constant mob violence".
Kipling wrote the poem "Ulster" in 1912, reflecting his Unionist politics. He often referred to the Irish Unionists as "our party." Kipling had no sympathy or understanding for Irish nationalism, seeing Home Rule as an act of treason by the government of the Liberal Prime Minister H. H. Asquith that would plunge Ireland into the Dark Ages and allow the Irish Catholic majority to oppress the Protestant minority. The scholar David Gilmour wrote that Kipling's lack of understanding of Ireland could be seen in his attack on John Redmond – the Anglophile leader of the Irish Parliamentary Party, who wanted Home Rule because he believed it was the best way of keeping the United Kingdom together – as a traitor working to break up the United Kingdom. "Ulster" was first publicly read at a Unionist rally in Belfast, where the largest Union Jack ever made was unfolded. Kipling admitted it was meant to strike a "hard blow" against the Asquith government's Home Rule bill: "Rebellion, rapine, hate, Oppression, wrong and greed, Are loosed to rule our fate, By England's act and deed." The poem generated much controversy; the Conservative MP Sir Mark Sykes – who as a Unionist was himself opposed to the Home Rule bill – condemned "Ulster" in "The Morning Post" as a "direct appeal to ignorance and a deliberate attempt to foster religious hate."
Kipling was a staunch opponent of Bolshevism, a position which he shared with his friend Henry Rider Haggard. The two had bonded on Kipling's arrival in London in 1889 largely due to their shared opinions, and remained lifelong friends.
According to the English magazine "Masonic Illustrated", Kipling became a Freemason in about 1885, before the usual minimum age of 21, being initiated into Hope and Perseverance Lodge No. 782 in Lahore. He later wrote to "The Times", "I was Secretary for some years of the Lodge... which included Brethren of at least four creeds. I was entered [as an Apprentice] by a member from Brahmo Somaj, a Hindu, passed [to the degree of Fellow Craft] by a Mohammedan, and raised [to the degree of Master Mason] by an Englishman. Our Tyler was an Indian Jew." Kipling received not only the three degrees of Craft Masonry but also the side degrees of Mark Master Mason and Royal Ark Mariner.
Kipling so loved his Masonic experience that he memorialised its ideals in his poem "The Mother Lodge," and used the fraternity and its symbols as vital plot devices in his novella "The Man Who Would Be King".
At the beginning of the First World War, like many other writers, Kipling wrote pamphlets and poems enthusiastically supporting the UK war aims of restoring Belgium, after it had been occupied by Germany, together with generalised statements that Britain was standing up for the cause of good. In September 1914, Kipling was asked by the government to write propaganda, an offer that he accepted. Kipling's pamphlets and stories were popular with the British people during the war, his major themes being to glorify the British military as "the" place for heroic men to be, while citing German atrocities against Belgian civilians and the stories of women brutalised by a horrific war unleashed by Germany, yet surviving and triumphing in spite of their suffering.
Kipling was enraged by reports of the Rape of Belgium, together with the sinking of the "Lusitania" in 1915, which he saw as a deeply inhumane act; these led him to see the war as a crusade for civilisation against barbarism. In a 1915 speech, Kipling declared, "There was no crime, no cruelty, no abomination that the mind of men can conceive of which the German has not perpetrated, is not perpetrating, and will not perpetrate if he is allowed to go on... Today, there are only two divisions in the world... human beings and Germans."
Alongside his passionate antipathy towards Germany, Kipling was privately deeply critical of how the war was being fought by the British Army, complaining as early as October 1914 that Germany should have been defeated by now, and something must be wrong with the British Army. Kipling, who was shocked by the heavy losses that the British Expeditionary Force had taken by the autumn of 1914, blamed the entire pre-war generation of British politicians who, he argued, had failed to learn the lessons of the Boer War. Thus thousands of British soldiers were now paying with their lives for their failure in the fields of France and Belgium.
Kipling had scorn for men who shirked duty in the First World War. In "The New Army in Training" (1915), Kipling concluded by saying:
This much we can realise, even though we are so close to it, the old safe instinct saves us from triumph and exultation. But what will be the position in years to come of the young man who has deliberately elected to outcaste himself from this all-embracing brotherhood? What of his family, and, above all, what of his descendants, when the books have been closed and the last balance struck of sacrifice and sorrow in every hamlet, village, parish, suburb, city, shire, district, province, and Dominion throughout the Empire?
In 1914, Kipling was one of fifty-three leading British authors – among them H. G. Wells, Arthur Conan Doyle and Thomas Hardy – who signed their names to the "Authors' Declaration." This manifesto declared that the German invasion of Belgium had been a brutal crime, and that Britain "could not without dishonour have refused to take part in the present war."
Kipling's son John was killed in action at the Battle of Loos in September 1915, at age 18. John had initially wanted to join the Royal Navy, but having had his application turned down after a failed medical examination due to poor eyesight, he opted to apply for military service as an army officer. Again, his eyesight was an issue during the medical examination. In fact, he tried twice to enlist, but was rejected. His father had been lifelong friends with Lord Roberts, former commander-in-chief of the British Army, and colonel of the Irish Guards, and at Rudyard's request, John was accepted into the Irish Guards.
John Kipling was sent to Loos two days into the battle in a reinforcement contingent. He was last seen stumbling blindly through the mud, with a possible facial injury. A body identified as his was found in 1992, although that identification has been challenged. In 2015, the Commonwealth War Graves Commission confirmed that it had correctly identified the burial place of John Kipling; it records his date of death as 27 September 1915, and that he is buried at St Mary's A.D.S. Cemetery, Haisnes.
After his son's death, in a poem entitled "Epitaphs of the War," Kipling wrote, "If any question why we died / Tell them, because our fathers lied." Critics have speculated that these words may express Kipling's guilt over his role in arranging John's commission. Professor Tracy Bilsing contends that the line refers to Kipling's disgust that British leaders failed to learn the lessons of the Boer War, and were unprepared for the struggle with Germany in 1914, with the "lie" of the "fathers" being that the British Army was prepared for any war when it was not.
John's death has been linked to Kipling's 1916 poem "My Boy Jack," notably in the play "My Boy Jack", its subsequent television adaptation, and a related documentary. However, the poem was originally published at the head of a story about the Battle of Jutland and appears to refer to a death at sea; the "Jack" referred to is probably a generic "Jack Tar." In the Kipling family, Jack was the name of the family dog, while John Kipling was always John, making the identification of the protagonist of "My Boy Jack" with John Kipling somewhat questionable. Kipling was nonetheless emotionally devastated by the death of his son. He is said to have assuaged his grief by reading the novels of Jane Austen aloud to his wife and daughter. During the war, he wrote a booklet, "The Fringes of the Fleet", containing essays and poems on various nautical subjects of the war. Some of these were set to music by the English composer Edward Elgar.
Kipling became friends with a French soldier named Maurice Hammoneau, whose life had been saved in the First World War when his copy of "Kim", which he had in his left breast pocket, stopped a bullet. Hammoneau presented Kipling with the book, with bullet still embedded, and his Croix de Guerre as a token of gratitude. They continued to correspond, and when Hammoneau had a son, Kipling insisted on returning the book and medal.
On 1 August 1918, a poem, "The Old Volunteer," appeared under his name in "The Times". The next day, he wrote to the newspaper to disclaim authorship and a correction appeared. Although "The Times" employed a private detective to investigate, the detective appears to have suspected Kipling himself of being the author, and the identity of the hoaxer was never established.
Partly in response to John's death, Kipling joined Sir Fabian Ware's Imperial War Graves Commission (now the Commonwealth War Graves Commission), the group responsible for the garden-like British war graves that can be found to this day dotted along the former Western Front and the other places in the world where British Empire troops lie buried. His main contributions to the project were his selection of the biblical phrase, "Their Name Liveth For Evermore" (Ecclesiasticus 44.14, KJV), found on the Stones of Remembrance in larger war cemeteries, and his suggestion of the phrase "Known unto God" for the gravestones of unidentified servicemen. He also chose the inscription "The Glorious Dead" on the Cenotaph, Whitehall, London. Additionally, he wrote a two-volume history of the Irish Guards, his son's regiment, published in 1923 and seen as one of the finest examples of regimental history.
Kipling's short story "The Gardener" depicts visits to the war cemeteries, and the poem "The King's Pilgrimage" (1922) a journey which King George V made, touring the cemeteries and memorials under construction by the Imperial War Graves Commission. With the increasing popularity of the automobile, Kipling became a motoring correspondent for the British press, writing enthusiastically of trips around England and abroad, though he was usually driven by a chauffeur.
After the war, Kipling was sceptical of the Fourteen Points and the League of Nations, but had hopes that the United States would abandon isolationism and the post-war world be dominated by an Anglo-French-American alliance. He hoped the United States would take on a League of Nations mandate for Armenia as the best way of preventing isolationism, and hoped that Theodore Roosevelt, whom Kipling admired, would again become president. Kipling was saddened by Roosevelt's death in 1919, believing him to be the only American politician capable of keeping the United States in the "game" of world politics.
Kipling was hostile towards communism, writing of the Bolshevik take-over in 1917 that one sixth of the world had "passed bodily out of civilization." In a 1918 poem, Kipling wrote of Soviet Russia that everything good in Russia had been destroyed by the Bolsheviks – all that was left was "the sound of weeping and the sight of burning fire, and the shadow of a people trampled into the mire."
In 1920, Kipling co-founded the Liberty League with Haggard and Lord Sydenham. This short-lived enterprise focused on promoting classic liberal ideals as a response to the rising power of communist tendencies within Great Britain, or as Kipling put it, "to combat the advance of Bolshevism."
In 1922, Kipling, having referred to the work of engineers in some of his poems, such as "The Sons of Martha," "Sappers," and "McAndrew's Hymn," and in other writings, including short-story anthologies such as "The Day's Work", was asked by a University of Toronto civil engineering professor, Herbert E. T. Haultain, for assistance in developing a dignified obligation and ceremony for graduating engineering students. Kipling was enthusiastic in his response and shortly produced both, formally entitled "The Ritual of the Calling of an Engineer." Today engineering graduates all across Canada are presented with an iron ring at a ceremony to remind them of their obligation to society. In 1922 Kipling became Lord Rector of St Andrews University in Scotland, a three-year position.
Kipling, as a Francophile, argued strongly for an Anglo-French alliance to uphold the peace, calling Britain and France in 1920 the "twin fortresses of European civilization." Similarly, Kipling repeatedly warned against revising the Treaty of Versailles in Germany's favour, which he predicted would lead to a new world war. An admirer of Raymond Poincaré, Kipling was one of the few British intellectuals who supported the French occupation of the Ruhr in 1923, at a time when the British government and most public opinion were against the French position. In contrast to the popular British view of Poincaré as a cruel bully intent on impoverishing Germany with unreasonable reparations, Kipling argued that Poincaré was rightfully trying to preserve France as a great power in the face of an unfavourable situation. Kipling argued that even before 1914, Germany's larger economy and higher birth rate had made that country stronger than France; with much of France devastated by the war and its heavy losses compounding an already low birth rate, France would struggle, while Germany remained mostly undamaged and still had a higher birth rate. He reasoned that the future would bring German domination if Versailles were revised in Germany's favour, and that it was madness for Britain to press France to do so.
In 1924, Kipling was opposed to the Labour government of Ramsay MacDonald as "Bolshevism without bullets." He believed that Labour was a communist front organisation, and that "excited orders and instructions from Moscow" would expose it as such to the British people. Kipling's views were on the right. Though he admired Benito Mussolini to some extent in the 1920s, he was against fascism, calling Oswald Mosley "a bounder and an "arriviste"." In 1933 he wrote, "The Hitlerites are out for blood," and by 1935 he was calling Mussolini a deranged and dangerous egomaniac.
Despite his anti-communism, the first major translations of Kipling into Russian took place under Lenin's rule in the early 1920s, and Kipling was popular with Russian readers in the interwar period. Many younger Russian poets and writers, such as Konstantin Simonov, were influenced by him. Kipling's clarity of style, use of colloquial language and employment of rhythm and rhyme were seen as major innovations in poetry that appealed to many younger Russian poets.
Though it was obligatory for Soviet journals to begin translations of Kipling with an attack on him as a "fascist" and an "imperialist," such was Kipling's popularity with Russian readers that his works were not banned in the Soviet Union until 1939, with the signing of the Molotov–Ribbentrop Pact. The ban was lifted in 1941 after Operation Barbarossa, when Britain became a Soviet ally, but was reimposed for good with the onset of the Cold War in 1946.
Many older editions of Rudyard Kipling's books have a swastika printed on the cover, associated with a picture of an elephant carrying a lotus flower, reflecting the influence of Indian culture. Kipling's use of the swastika was based on the Indian sun symbol conferring good luck and the Sanskrit word meaning "fortunate" or "well-being." He used the swastika symbol in both right and left-facing forms, and it was in general use by others at the time.
In a note to Edward Bok after the death of Lockwood Kipling in 1911, Rudyard said: "I am sending with this for your acceptance, as some little memory of my father to whom you were so kind, the original of one of the plaques that he used to make for me. I thought it being the Swastika would be appropriate for your Swastika. May it bring you even more good fortune." Once the Nazis came to power and usurped the swastika, Kipling ordered that it should no longer adorn his books. Less than a year before his death, Kipling gave a speech (titled "An Undefended Island") to the Royal Society of St George on 6 May 1935, warning of the danger which Nazi Germany posed to Britain.
Kipling scripted the first Royal Christmas Message, delivered via the BBC's Empire Service by George V in 1932. In 1934, he published a short story in "The Strand Magazine", "Proofs of Holy Writ," postulating that William Shakespeare had helped to polish the prose of the King James Bible.
Kipling kept writing until the early 1930s, but at a slower pace and with less success than before. On the night of 12 January 1936 he suffered a haemorrhage in his small intestine. He underwent surgery but died less than a week later, on 18 January 1936, of a perforated duodenal ulcer, at the age of 70. His death had previously been incorrectly announced in a magazine, to which he wrote, "I've just read that I am dead. Don't forget to delete me from your list of subscribers."
The pallbearers at the funeral included Kipling's cousin, Prime Minister Stanley Baldwin, and the marble casket was covered by a Union Jack. Kipling was cremated at Golders Green Crematorium in north-west London, and his ashes interred at Poets' Corner, part of the South Transept of Westminster Abbey, next to the graves of Charles Dickens and Thomas Hardy. Kipling's will was proven on 6 April, with his estate valued at £168,141 2s. 11d.
In 2010, the International Astronomical Union approved that a crater on the planet Mercury should be named after Kipling – one of ten newly discovered impact craters observed by the MESSENGER spacecraft in 2008–2009. In 2012, an extinct species of crocodile, "Goniopholis kiplingi", was named in his honour "in recognition for his enthusiasm for natural sciences."
More than 50 unpublished poems by Kipling, discovered by the American scholar Thomas Pinney, were released for the first time in March 2013.
Kipling's writing has strongly influenced that of others. His stories for adults remain in print and have garnered high praise from writers as different as Poul Anderson, Jorge Luis Borges, and Randall Jarrell, who wrote, "After you have read Kipling's fifty or seventy-five best stories you realize that few men have written this many stories of this much merit, and that very few have written more and better stories."
His children's stories remain popular, and his "Jungle Books" have been made into several films. The first was made by producer Alexander Korda. Other films have been produced by The Walt Disney Company. A number of his poems were set to music by Percy Grainger. A series of short films based on some of his stories was broadcast by the BBC in 1964. Kipling's work is still popular today.
The poet T. S. Eliot edited "A Choice of Kipling's Verse" (1941) with an introductory essay. Eliot was aware of the complaints that had been levelled against Kipling and dismissed them one by one: that Kipling is "a Tory" using his verse to transmit right-wing political views, or "a journalist" pandering to popular taste; Eliot writes, "I cannot find any justification for the charge that he held a doctrine of race superiority." Eliot finds instead,
Of Kipling's verse, such as his "Barrack-Room Ballads", Eliot writes "of a number of poets who have written great poetry, only... a very few whom I should call great verse writers. And unless I am mistaken, Kipling's position in this class is not only high, but unique."
In response to Eliot, George Orwell wrote a long consideration of Kipling's work for "Horizon" in 1942, noting that although as a "jingo imperialist" Kipling was "morally insensitive and aesthetically disgusting," his work had many qualities which ensured that while "every enlightened person has despised him... nine-tenths of those enlightened persons are forgotten and Kipling is in some sense still there."
In 1939, the poet W.H. Auden celebrated Kipling in a similarly ambiguous way in his elegy for William Butler Yeats. Auden deleted this section from more recent editions of his poems.
Time, that is intolerant
Of the brave and innocent,
And indifferent in a week
To a beautiful physique,
Worships language, and forgives
Everyone by whom it lives;
Pardons cowardice, conceit,
Lays its honours at his feet.
Time, that with this strange excuse,
Pardons Kipling and his views,
And will pardon Paul Claudel,
Pardons him for writing well.
The poet Alison Brackenbury writes, "Kipling is poetry's Dickens, an outsider and journalist with an unrivalled ear for sound and speech."
The English folk singer Peter Bellamy was a lover of Kipling's poetry, much of which he believed to have been influenced by English traditional folk forms. He recorded several albums of Kipling's verse set to traditional airs, or to tunes of his own composition written in traditional style. However, the bawdy folk song "The Bastard King of England," commonly credited to Kipling, is believed to be misattributed.
Kipling is often quoted in discussions of contemporary British political and social issues. In 1911, Kipling wrote the poem "The Reeds of Runnymede" that celebrated Magna Carta, and summoned up a vision of the "stubborn Englishry" determined to defend their rights. In 1996, the following verses of the poem were quoted by former Prime Minister Margaret Thatcher warning against the encroachment of the European Union on national sovereignty:
At Runnymede, at Runnymede,
Oh, hear the reeds at Runnymede:
‘You mustn’t sell, delay, deny,
A freeman’s right or liberty.
It wakes the stubborn Englishry,
We saw ’em roused at Runnymede!
… And still when Mob or Monarch lays
Too rude a hand on English ways,
The whisper wakes, the shudder plays,
Across the reeds at Runnymede.
And Thames, that knows the mood of kings,
And crowds and priests and suchlike things,
Rolls deep and dreadful as he brings
Their warning down from Runnymede!
Political singer-songwriter Billy Bragg, who attempts to build a left-wing English nationalism in contrast with the more common right-wing English nationalism, has attempted to 'reclaim' Kipling for an inclusive sense of Englishness. Kipling's enduring relevance has been noted in the United States, as that country has become involved in Afghanistan and other areas about which he wrote.
In 1903, Kipling gave permission to Elizabeth Ford Holt to borrow themes from the "Jungle Books" to establish Camp Mowglis, a summer camp for boys on the shores of Newfound Lake in New Hampshire. Throughout their lives, Kipling and his wife Carrie maintained an active interest in Camp Mowglis, which still continues the traditions that Kipling inspired. Buildings at Mowglis have names such as Akela, Toomai, Baloo, and Panther. The campers are referred to as "the Pack," from the youngest "Cubs" to the oldest living in "Den."
Kipling's links with the Scouting movements were also strong. Robert Baden-Powell, founder of Scouting, used many themes from "Jungle Book" stories and "Kim" in setting up his junior Wolf Cubs. These ties still exist, such as the popularity of "Kim's Game." The movement is named after Mowgli's adopted wolf family, and adult helpers of Wolf Cub Packs take names from "The Jungle Book", especially the adult leader called "Akela" after the leader of the Seeonee wolf pack.
After the death of Kipling's wife in 1939, his house, Bateman's in Burwash, East Sussex, where he had lived from 1902 until 1936, was bequeathed to the National Trust. It is now a public museum dedicated to the author. Elsie Bambridge, his only child who lived to maturity, died childless in 1976, and bequeathed her copyrights to the National Trust, which in turn donated them to the University of Sussex to ensure better public access.
Novelist and poet Sir Kingsley Amis wrote a poem, "Kipling at Bateman's," after visiting Burwash (where Amis's father lived briefly in the 1960s) as part of a BBC television series on writers and their houses.
In 2003, actor Ralph Fiennes read excerpts from Kipling's works from the study in Bateman's, including, "The Jungle Book", "Something of Myself", "Kim", and "The Just So Stories", and poems, including "If ..." and "My Boy Jack," for a CD published by the National Trust.
In modern-day India, whence he drew much of his material, Kipling's reputation remains controversial, especially amongst modern nationalists and some post-colonial critics. Rudyard Kipling was a prominent supporter of Colonel Reginald Dyer, who was responsible for the Jallianwala Bagh massacre in Amritsar (in the province of Punjab). Kipling called Dyer "the man who saved India" and initiated collections for the latter's homecoming prize. However, Subhash Chopra writes in his book "Kipling Sahib – the Raj Patriot" that the benefit fund was started by "The Morning Post" newspaper, not by Kipling, and that Kipling made no contribution to the Dyer fund. While Kipling's name was conspicuously absent from the list of donors as published in "The Morning Post", he clearly admired Dyer.
Other contemporary Indian intellectuals such as Ashis Nandy have taken a more nuanced view. Jawaharlal Nehru, the first Prime Minister of independent India, often described Kipling's novel "Kim" as one of his favourite books.
G. V. Desani, an Indian writer of fiction, had a more negative opinion of Kipling. He alludes to Kipling in his novel, "All About H. Hatterr":
Indian writer Khushwant Singh wrote in 2001 that he considers Kipling's "If—" "the essence of the message of The Gita in English," referring to the Bhagavad Gita, an ancient Indian scripture. Indian writer R. K. Narayan said, "Kipling, the supposed expert writer on India, showed a better understanding of the mind of the animals in the jungle than of the men in an Indian home or the marketplace." The Indian politician and writer Shashi Tharoor commented, "Kipling, that flatulent voice of Victorian imperialism, would wax eloquent on the noble duty to bring law to those without it".
In November 2007, it was announced that Kipling's birth home in the campus of the J. J. School of Art in Mumbai would be turned into a museum celebrating the author and his works.
Kipling's bibliography includes fiction (including novels and short stories), non-fiction, and poetry. Several of his works were collaborations. | https://en.wikipedia.org/wiki?curid=26308 |
Regency dance
Regency dance is the term for historical dances of the period ranging roughly from 1790 to 1825. Some feel that the popular use of the term "Regency dance" is not technically correct, as the actual English Regency (the future George IV ruling on behalf of mad King George III) lasted only from 1811 until 1820. However, the term "Regency" has been used to refer to a much broader period than the historical Regency for a very long time, particularly in areas such as the history of art and architecture, literature, and clothing. This is because there are consistencies of style over this period which make having a single term useful.
Most popular exposure to this era of dance comes through the works of Jane Austen. Balls occur in her novels and are discussed in her letters, but specifics are few. Films based on her works tend to incorporate modern revival English Country Dance; however, they rarely use dances actually of the period, and they omit the footwork and social style that would make them accurate. Dances of this era were lively and bouncy, not the smooth and stately style seen in films. Steps ranging from simple skipping to elaborate ballet-style movements were used.
In the early part of this period, up to the early 1810s, the ballroom was dominated by the country dance, the cotillion, and the scotch reel.
In the longways Country Dance, a line of couples perform figures with each other, progressing up and down the line. Regency country dances were often preceded by a brief March by the couples, then begun by the top lady in the set and her partner, who would dance down the set to the bottom. Each couple in turn as they reached the top would likewise dance down until the entire set had returned to its original positions. This could be a lengthy process, easily taking an hour in a long set. An important social element was the calling of the dance by the leading lady (a position of honor), who would determine the figures, steps, and music to be danced. The rest of the set would listen to the call or pick up the dance by observing the leading couple. Austen mentions in her letters instances in which she and her partner called the dance.
The cotillion was a French import, performed in a square using more elaborate footwork. It consisted of a "chorus" figure unique to each dance which was danced alternately with a standard series of up to ten "changes", which were simple figures such as a right hand moulinet (star) common to cotillions in general.
The scotch reel of the era consisted of alternate heying (interlacing) and setting (fancy steps danced in place) by a line of three or four dancers. More complex reels appear in manuals as well, but it is unclear if they ever actually caught on. A sixsome reel is mentioned in a description of Scottish customs in the early 1820s, and eightsome reels (danced in squares like cotillions) occur in some dance manuscripts of the era.
In the 1810s, the era of the Regency proper, English dance began an important transition with the introduction of the quadrille and the waltz.
The Waltz was first imported to England around 1810, but it was not considered socially acceptable until continental visitors at the post-Napoleonic-Wars celebrations danced it in London—and even then it remained the subject of anti-waltz diatribes, caricatures, and jokes. Even the decadent Lord Byron was scandalized by the prospect of people "embracing" on the dance floor. The Regency version is relatively slow, and done up on the balls of the feet with the arms in a variety of graceful positions. The Sauteuse is a leaping waltz commonly done in 2/4 rather than 3/4 time, similar in pattern (leap-glide-close) to the Redowa and Waltz Galop of the later nineteenth century.
First imported from France by Lady Jersey in 1815, the Quadrille was a shorter version of the earlier cotillions. Figures from individual cotillions were assembled into sets of five or six figures, and the changes were left out, producing much shorter dances. By the late 1810s, it was not uncommon to dance a series of quadrilles during the evening, generally consisting of the same first three figures combined with a variety of different fourth and fifth figures. Jane Austen's niece Fanny danced quadrilles, and in their correspondence Jane mentioned that she found them much inferior to the cotillions of her own youth.
By the late 1810s, under siege from the Quadrille, dancing masters began to invent "new" forms of country dance, often with figures borrowed from the Quadrille, and giving them exotic names such as the Danse Ecossoise and Danse Espagnuole which suggested entire new dances but actually covered very minor variations in the classic form. A few of these dances became sufficiently popular that they survived through the entire 19th century. One example of this is the "Spanish dance" popular in vintage dance circles, which is a solitary survivor of its entire genre of Regency-era dances.
"La Boulangere", the only dance mentioned by name in Jane Austen's writings, is a simple circle dance for a group of couples. Sir Roger de Coverly, mentioned by Charles Dickens, is the ancestor of America's Virginia Reel.
Numerous instruction manuals survive from the Regency era. Several by Thomas Wilson are in the US Library of Congress online collection. The Scotch Reel is described by Francis Peacock, whose manual is also available in the LC collection.
The first major revival of English Country Dance, one of the major types of Regency dance, was by Englishman Cecil Sharp in the early 20th century. Various other revivals have followed, most using at least some of Sharp's research. Today, there are many groups around the world which perform a variety of English period dances, including many of the types of dance which were popular during the English Regency.
Regency dance has gained popularity at science fiction conventions, in part due to the efforts of John Hertz. Reconstructed dances from the era are taught to newcomers and experienced dancers alike. Some authors—notably, Larry Niven—have added their personal enthusiasm to the trend.
In Silicon Valley, the Bay Area English Regency Society sponsors local dance classes and formal balls in churches, community centers, and other venues. In Pasadena, California, the Valley Area English Regency Society hosts teas and Regency dance parties in a local church. Both societies were founded by Laura Brodian Freas Beraha.
Some enthusiasts go to extremes: Cisco Systems founders Sandra Lerner and Len Bosack created a foundation that bought a Regency-era country house once owned by Jane Austen's brother. In Australia, Earthly Delights Historic Dance Academy and John Gardiner-Garden run a Regency Dance School in conjunction with Jane Austen Festival Australia every April. | https://en.wikipedia.org/wiki?curid=26309 |
Reproduction
Reproduction (or procreation or breeding) is the biological process by which new individual organisms – "offspring" – are produced from their "parents". Reproduction is a fundamental feature of all known life; each individual organism exists as the result of reproduction. There are two forms of reproduction: asexual and sexual.
In asexual reproduction, an organism can reproduce without the involvement of another organism. Asexual reproduction is not limited to single-celled organisms. The cloning of an organism is a form of asexual reproduction. By asexual reproduction, an organism creates a genetically similar or identical copy of itself. The evolution of sexual reproduction is a major puzzle for biologists: its two-fold cost is that only 50% of the population (the females) can bear offspring, and each parent passes on only 50% of its genes to each offspring.
Sexual reproduction typically requires the interaction of two specialized reproductive cells, called gametes, which contain half the number of chromosomes of normal cells and are created by meiosis; typically a male gamete fertilizes a female gamete of the same species to create a fertilized zygote. This produces offspring organisms whose genetic characteristics are derived from those of the two parental organisms.
Asexual reproduction is a process by which organisms create genetically similar or identical copies of themselves without the contribution of genetic material from another organism. Bacteria divide asexually via binary fission; viruses take control of host cells to produce more viruses; Hydras (invertebrates of the order "Hydroidea") and yeasts are able to reproduce by budding. These organisms often do not possess different sexes, and they are capable of "splitting" themselves into two or more copies of themselves. Most plants have the ability to reproduce asexually and the ant species Mycocepurus smithii is thought to reproduce entirely by asexual means.
Some species that are capable of reproducing asexually, like hydra, yeast (See Mating of yeasts) and jellyfish, may also reproduce sexually. For instance, most plants are capable of vegetative reproduction—reproduction without seeds or spores—but can also reproduce sexually. Likewise, bacteria may exchange genetic information by conjugation.
Other ways of asexual reproduction include parthenogenesis, fragmentation and spore formation that involves only mitosis. Parthenogenesis is the growth and development of an embryo or seed without fertilization by a male. Parthenogenesis occurs naturally in some species, including lower plants (where it is called apomixis), invertebrates (e.g. water fleas, aphids, some bees and parasitic wasps), and vertebrates (e.g. some reptiles, fish, and, very rarely, birds and sharks). It is sometimes also used to describe reproduction modes in hermaphroditic species which can self-fertilize.
Sexual reproduction is a biological process that creates a new organism by combining the genetic material of two organisms in a process that starts with meiosis, a specialized type of cell division. Each of two parent organisms contributes half of the offspring's genetic makeup by creating haploid gametes. Most organisms form two different types of gametes. In these anisogamous species, the two sexes are referred to as male (producing sperm or microspores) and female (producing ova or megaspores). In isogamous species, the gametes are similar or identical in form (isogametes), but may have separable properties and then may be given other different names (see isogamy). For example, in the green alga, "Chlamydomonas reinhardtii", there are so-called "plus" and "minus" gametes. A few types of organisms, such as many fungi and the ciliate "Paramecium aurelia", have more than two "sexes", called syngens.
Most animals (including humans) and plants reproduce sexually. Sexually reproducing organisms have different sets of genes for every trait (called alleles). Offspring inherit one allele for each trait from each parent. Thus, offspring have a combination of the parents' genes. It is believed that "the masking of deleterious alleles favors the evolution of a dominant diploid phase in organisms that alternate between haploid and diploid phases" where recombination occurs freely.
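The allele-inheritance rule described above (one allele per trait from each parent) can be sketched as a toy simulation; the function and trait names here are illustrative, not drawn from any biology library:

```python
import random

def offspring_genotype(parent1, parent2):
    # Each parent contributes one randomly chosen allele for every trait,
    # so the offspring carries a combination of both parents' genes.
    return {trait: (random.choice(parent1[trait]), random.choice(parent2[trait]))
            for trait in parent1}

p1 = {"flower_color": ("P", "p")}   # heterozygous parent
p2 = {"flower_color": ("p", "p")}   # homozygous recessive parent
child = offspring_genotype(p1, p2)
print(child)  # e.g. {'flower_color': ('P', 'p')} -- one allele from each parent
```

Running this repeatedly shows the familiar Mendelian pattern: the first allele of the pair is drawn from the first parent and the second from the second parent.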
Bryophytes reproduce sexually, but the larger and commonly-seen organisms are haploid and produce gametes. The gametes fuse to form a zygote which develops into a sporangium, which in turn produces haploid spores. The diploid stage is relatively small and short-lived compared to the haploid stage, i.e. "haploid dominance". The advantage of diploidy, heterosis, only exists in the diploid life generation. Bryophytes retain sexual reproduction despite the fact that the haploid stage does not benefit from heterosis. This may be an indication that the sexual reproduction has advantages other than heterosis, such as genetic recombination between members of the species, allowing the expression of a wider range of traits and thus making the population more able to survive environmental variation.
Allogamy is the fertilization of the combination of gametes from two parents, generally the ovum from one individual with the spermatozoa of another. (In isogamous species, the two gametes will not be defined as either sperm or ovum.)
Self-fertilization, also known as autogamy, occurs in hermaphroditic organisms where the two gametes fused in fertilization come from the same individual, e.g., many vascular plants, some foraminiferans, some ciliates. The term "autogamy" is sometimes substituted for autogamous pollination (not necessarily leading to successful fertilization) and describes self-pollination within the same flower, distinguished from geitonogamous pollination, transfer of pollen to a different flower on the same flowering plant, or within a single monoecious Gymnosperm plant.
Mitosis and meiosis are types of cell division. Mitosis occurs in somatic cells, while meiosis occurs in gametes.
Mitosis
The resultant number of cells in mitosis is twice the number of original cells. The number of chromosomes in the offspring cells is the same as that of the parent cell.
Meiosis
The resultant number of cells is four times the number of original cells. This results in cells with half the number of chromosomes present in the parent cell. A diploid cell duplicates itself, then undergoes two divisions (tetraploid to diploid to haploid), in the process forming four haploid cells. This process occurs in two phases, meiosis I and meiosis II.
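The cell and chromosome counts of the two processes can be summarized in a minimal numeric sketch (a toy model, not biological software):

```python
def mitosis(chromosomes):
    # One somatic cell yields two daughter cells, each with
    # the same chromosome count as the parent.
    return [chromosomes] * 2

def meiosis(chromosomes):
    # One diploid cell duplicates its DNA, then divides twice
    # (tetraploid -> diploid -> haploid), yielding four cells
    # with half the parent's chromosome count.
    assert chromosomes % 2 == 0, "diploid count must be even"
    return [chromosomes // 2] * 4

# Human cells carry 46 chromosomes.
print(mitosis(46))  # [46, 46]
print(meiosis(46))  # [23, 23, 23, 23]
```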
In recent decades, developmental biologists have been researching and developing techniques to facilitate same-sex reproduction. The obvious approaches, subject to a growing amount of activity, are female sperm and male eggs, with female sperm closer to being a reality for humans, given that Japanese scientists have already created female sperm for chickens. "However, the ratio of produced W chromosome-bearing (W-bearing) spermatozoa fell substantially below expectations. It is therefore concluded that most of the W-bearing PGC could not differentiate into spermatozoa because of restricted spermatogenesis." In 2004, by altering the function of a few genes involved with imprinting, other Japanese scientists combined two mouse eggs to produce daughter mice and in 2018 Chinese scientists created 29 female mice from two female mice mothers but were unable to produce viable offspring from two father mice.
There are a wide range of reproductive strategies employed by different species. Some animals, such as the human and northern gannet, do not reach sexual maturity for many years after birth and even then produce few offspring. Others reproduce quickly; but, under normal circumstances, most offspring do not survive to adulthood. For example, a rabbit (mature after 8 months) can produce 10–30 offspring per year, and a fruit fly (mature after 10–14 days) can produce up to 900 offspring per year. These two main strategies are known as K-selection (few offspring) and r-selection (many offspring). Which strategy is favoured by evolution depends on a variety of circumstances. Animals with few offspring can devote more resources to the nurturing and protection of each individual offspring, thus reducing the need for many offspring. On the other hand, animals with many offspring may devote fewer resources to each individual offspring; for these types of animals it is common for many offspring to die soon after birth, but enough individuals typically survive to maintain the population. Some organisms such as honey bees and fruit flies retain sperm in a process called sperm storage thereby increasing the duration of their fertility.
Organisms that reproduce through asexual reproduction tend to grow in number exponentially. However, because they rely on mutation for variations in their DNA, all members of the species have similar vulnerabilities. Organisms that reproduce sexually yield a smaller number of offspring, but the large amount of variation in their genes makes them less susceptible to disease.
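The exponential growth of asexual reproducers mentioned above can be illustrated with a simple doubling model, assuming idealized binary fission with no deaths and unlimited resources:

```python
def asexual_population(initial, generations):
    # Under binary fission each individual splits in two,
    # so the population doubles every generation.
    return initial * 2 ** generations

print(asexual_population(1, 10))   # 1024
print(asexual_population(1, 20))   # 1048576
```

A single bacterium dividing every 20 minutes would, under these idealized assumptions, exceed a million cells in under seven hours; in practice resource limits and death rates cap the growth.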
Many organisms can reproduce sexually as well as asexually. Aphids, slime molds, sea anemones, some species of starfish (by fragmentation), and many plants are examples. When environmental factors are favorable, asexual reproduction is employed to exploit suitable conditions for survival such as an abundant food supply, adequate shelter, favorable climate, absence of disease, optimum pH or a proper mix of other lifestyle requirements. Populations of these organisms increase exponentially via asexual reproductive strategies to take full advantage of the rich supply of resources.
When food sources have been depleted, the climate becomes hostile, or individual survival is jeopardized by some other adverse change in living conditions, these organisms switch to sexual forms of reproduction. Sexual reproduction ensures a mixing of the gene pool of the species. The variations found in offspring of sexual reproduction allow some individuals to be better suited for survival and provide a mechanism for selective adaptation to occur. The meiosis stage of the sexual cycle also allows especially effective repair of DNA damages (see Meiosis). In addition, sexual reproduction usually results in the formation of a life stage that is able to endure the conditions that threaten the offspring of an asexual parent. Thus, seeds, spores, eggs, pupae, cysts or other "over-wintering" stages of sexual reproduction ensure the survival during unfavorable times and the organism can "wait out" adverse situations until a swing back to suitability occurs.
The existence of life without reproduction is the subject of some speculation. The biological study of how the origin of life produced reproducing organisms from non-reproducing elements is called abiogenesis. Whether or not there were several independent abiogenetic events, biologists believe that the last universal ancestor to all present life on Earth lived about 3.5 billion years ago.
Scientists have speculated about the possibility of creating life non-reproductively in the laboratory. Several scientists have succeeded in producing simple viruses from entirely non-living materials. However, viruses are often regarded as not alive. Being nothing more than a bit of RNA or DNA in a protein capsule, they have no metabolism and can only replicate with the assistance of a hijacked cell's metabolic machinery.
The production of a truly living organism (e.g. a simple bacterium) with no ancestors would be a much more complex task, but may well be possible to some degree according to current biological knowledge. A synthetic genome has been transferred into an existing bacterium where it replaced the native DNA, resulting in the artificial production of a new "M. mycoides" organism.
There is some debate within the scientific community over whether this cell can be considered completely synthetic on the grounds that the chemically synthesized genome was an almost 1:1 copy of a naturally occurring genome and the recipient cell was a naturally occurring bacterium. The Craig Venter Institute maintains the term "synthetic bacterial cell" but also clarifies "...we do not consider this to be "creating life from scratch" but rather we are creating new life out of already existing life using synthetic DNA". Venter plans to patent his experimental cells, stating that "they are pretty clearly human inventions". Its creators suggest that building 'synthetic life' would allow researchers to learn about life by building it, rather than by tearing it apart. They also propose to stretch the boundaries between life and machines until the two overlap to yield "truly programmable organisms". Researchers involved stated that the creation of "true synthetic biochemical life" is relatively close in reach with current technology and cheap compared to the effort needed to place man on the Moon.
Sexual reproduction has many drawbacks, since it requires far more energy than asexual reproduction and diverts the organisms from other pursuits, and there is some argument about why so many species use it. George C. Williams used lottery tickets as an analogy in one explanation for the widespread use of sexual reproduction. He argued that asexual reproduction, which produces little or no genetic variety in offspring, was like buying many tickets that all have the same number, limiting the chance of "winning" – that is, producing surviving offspring. Sexual reproduction, he argued, was like purchasing fewer tickets but with a greater variety of numbers and therefore a greater chance of success. The point of this analogy is that since asexual reproduction does not produce genetic variations, there is little ability to quickly adapt to a changing environment. The lottery principle is less accepted these days because of evidence that asexual reproduction is more prevalent in unstable environments, the opposite of what it predicts. | https://en.wikipedia.org/wiki?curid=26310 |
Racial segregation
Racial segregation is the systemic separation of people into racial or other ethnic groups in daily life. Segregation can involve the spatial separation of the races, and mandatory use of different institutions, such as schools and hospitals by people of different races. Specifically, it may be applied to activities such as eating in restaurants, drinking from water fountains, using public toilets, attending schools, going to movies, riding buses, renting or purchasing homes or renting hotel rooms. In addition, segregation often allows close contact between members of different racial or ethnic groups in hierarchical situations, such as allowing a person of one race to work as a servant for a member of another race.
Segregation is defined by the European Commission against Racism and Intolerance as "the act by which a (natural or legal) person separates other persons on the basis of one of the enumerated grounds without an objective and reasonable justification, in conformity with the proposed definition of discrimination. As a result, the voluntary act of separating oneself from other people on the basis of one of the enumerated grounds does not constitute segregation". According to the UN Forum on Minority Issues, "The creation and development of classes and schools providing education in minority languages should not be considered impermissible segregation, if the assignment to such classes and schools is of a voluntary nature".
Racial segregation has generally been outlawed worldwide. In the United States, racial segregation was mandated by law in some states (see Jim Crow laws) and enforced along with anti-miscegenation laws (prohibitions against interracial marriage), until the U.S. Supreme Court led by Chief Justice Earl Warren struck down racial segregationist laws throughout the United States. However, racial segregation may exist "de facto" through social norms, even when there is no strong individual preference for it, as suggested by Thomas Schelling's models of segregation and subsequent work. Segregation may be maintained by means ranging from discrimination in hiring and in the rental and sale of housing to certain races to vigilante violence (such as lynchings). Generally, a situation that arises when members of different races mutually prefer to associate and do business with members of their own race would usually be described as "separation" or "de facto separation" of the races rather than "segregation".
Wherever multiracial communities have existed, racial segregation has also been practiced. Only areas with extensive miscegenation, or mixing, such as Hawaii and Brazil, seem to be exempt from it, despite some social stratification within them.
Several laws which enforced racial segregation of foreigners from Chinese were passed by the Han Chinese during the Tang dynasty. In 779 the Tang dynasty issued an edict which forced Uyghurs to wear their ethnic dress, stopped them from marrying Chinese females, and banned them from pretending to be Chinese. In 836, when Lu Chun was appointed as governor of Canton, he was disgusted to find Chinese living with foreigners and intermarrying with them. Lu enforced separation, banned interracial marriage, and made it illegal for foreigners to own property. Lu Chun believed his principles were just and upright. The 836 law specifically banned Chinese from forming relationships with "Dark peoples" or "People of colour", terms which were used to describe foreigners such as "Iranians, Sogdians, Arabs, Indians, Malays, Sumatrans", among others.
The Qing Dynasty was founded not by the Han Chinese, who form the majority of the Chinese population, but by the Manchus, who are today an ethnic minority of China. The Manchus were keenly aware of their minority status; however, it was only later in the dynasty that they banned intermarriage.
Han defectors played a massive role in the Qing conquest of China. Han Chinese generals who defected to the Manchu were often given women from the Imperial Aisin Gioro family in marriage, while the ordinary soldiers who defected were given non-royal Manchu women as wives. The Manchu leader Nurhaci married one of his granddaughters to the Ming general Li Yongfang after he surrendered Fushun in Liaoning to the Manchu in 1618. Jurchen (Manchu) women married most of the Han Chinese defectors in Liaodong. Aisin Gioro women were married to the sons of the Han Chinese generals Sun Sike (Sun Ssu-k'o), Geng Jimao (Keng Chi-mao), Shang Kexi (Shang K'o-hsi), and Wu Sangui (Wu San-kuei).
A mass marriage of Han Chinese officers and officials to Manchu women numbering 1,000 couples was arranged by Prince Yoto and Hongtaiji in 1632 to promote harmony between the two ethnic groups.
Geng Zhongming, a Han bannerman, was awarded the title of Prince Jingnan, and his son Geng Jingmao managed to have both his sons Geng Jingzhong and Geng Zhaozhong become court attendants under Shunzhi and marry Aisin Gioro women, with Haoge's (a son of Hong Taiji) daughter marrying Geng Jingzhong and Prince Abatai's (Hong Taiji) granddaughter marrying Geng Zhaozhong.
The Qing differentiated between Han Bannermen and ordinary Han civilians. Han Bannermen were made up of Han Chinese who defected to the Qing up to 1644 and joined the Eight Banners, giving them social and legal privileges in addition to being acculturated to Manchu culture. So many Han defected to the Qing and swelled the ranks of the Eight Banners that ethnic Manchus became a minority within the Banners, making up only 16% in 1648, with Han Bannermen dominating at 75%. It was this multi-ethnic force, in which Manchus were only a minority, which conquered China for the Qing.
It was Han Chinese Bannermen who were responsible for the successful Qing conquest of China; they made up the majority of governors in the early Qing and were the ones who governed and administered China after the conquest, stabilizing Qing rule. Han Bannermen dominated the post of governor-general in the time of the Shunzhi and Kangxi Emperors, as well as the post of governor, largely excluding ordinary Han civilians from these posts.
To promote ethnic harmony, a 1648 decree from the Manchu Shunzhi Emperor allowed Han Chinese civilian men to marry Manchu women from the Banners with the permission of the Board of Revenue if they were registered daughters of officials or commoners, or with the permission of their banner company captain if they were unregistered commoners; it was only later in the dynasty that these policies allowing intermarriage were done away with.
The Qing implemented a policy of segregation between the Bannermen of the Eight Banners (Manchu Bannermen, Mongol Bannermen, Han Bannermen) and Han Chinese civilians. This ethnic segregation had cultural and economic reasons: intermarriage was forbidden to keep up the Manchu heritage and minimize sinicization. Han Chinese civilians and Mongol civilians were banned from settling in Manchuria. Han civilians and Mongol civilians were banned from crossing into each other's lands. Ordinary Mongol civilians in Inner Mongolia were banned from even crossing into other Mongol Banners. (A banner in Inner Mongolia was an administrative division and not related to the Mongol Bannermen in the Eight Banners)
These restrictions did not apply to Han Bannermen, who were settled in Manchuria by the Qing. Han Bannermen were differentiated from Han civilians by the Qing and treated differently.
The Qing Dynasty started colonizing Manchuria with Han Chinese later on in the dynasty's rule, but the Manchu area was still separated from modern-day Inner Mongolia by the Outer Willow Palisade, which kept the Manchu and the Mongols in the area separate.
The policy of segregation applied directly to the banner garrisons, most of which occupied a separate walled zone within the cities in which they were stationed. Manchu Bannermen, Han Bannermen, and Mongol Bannermen were separated from the Han civilian population. While the Manchus followed the governmental structure of the preceding Ming dynasty, their ethnic policy dictated that appointments were split between Manchu noblemen and Han Chinese civilian officials who had passed the highest levels of the state examinations, and because of the small number of Manchus, this ensured that a large fraction of them would be government officials.
Spanish colonists created caste systems in Latin American countries based on classification by race and race mixture. An extensive nomenclature developed, including the terms "mulatto", "mestizo", and "zambo" (the latter the origin of "sambo"). The Spanish had practiced a form of caste system in Hispania before their expulsion of the Jews and Muslims. While many Latin American countries have long since rendered the system officially illegal through legislation, usually at the time of independence, prejudice based on degrees of perceived racial distance from European ancestry, combined with one's socioeconomic status, remains, an echo of the colonial caste system.
The Land Apportionment Act of 1930 passed in Southern Rhodesia (now known as Zimbabwe) was a segregationist measure that governed land allocation and acquisition in rural areas, making distinctions between blacks and whites.
One highly publicized legal battle occurred in 1960 involving the opening of a new theatre that was to be open to all races; the proposed unsegregated public toilets at the newly built Reps Theatre in 1959 caused an argument called "The Battle of the Toilets".
Following its conquest of Ottoman controlled Algeria in 1830, for well over a century France maintained colonial rule in the territory which has been described as "quasi-apartheid". The colonial law of 1865 allowed Arab and Berber Algerians to apply for French citizenship only if they abandoned their Muslim identity; Azzedine Haddour argues that this established "the formal structures of a political apartheid". Camille Bonora-Waisman writes that, "in contrast with the Moroccan and Tunisian protectorates", this "colonial apartheid society" was unique to Algeria.
This "internal system of apartheid" met with considerable resistance from the Muslims affected by it, and is cited as one of the causes of the 1954 insurrection and ensuing independence war.
Though there were no specific laws imposing racial segregation and barring blacks from establishments frequented by whites, "de facto" segregation operated in most areas. For example, the city centers were initially reserved for the white population only, while the black population was organized in "cités indigènes" (indigenous neighbourhoods called 'le belge'). Hospitals, department stores and other facilities were often reserved for either whites or blacks.
The black population in the cities could not leave their houses from 9 pm to 4 am. This type of segregation began to disappear gradually only in the 1950s, but even then the Congolese remained or felt treated in many respects as second-rate citizens (for instance in political and legal terms).
From 1953, and even more so after the triumphant visit of King Baudouin to the colony in 1955, Governor-General Léon Pétillon (1952–1958) worked to create a "Belgian-Congolese community", in which blacks and whites were to be treated as equals. Regardless, anti-miscegenation laws remained in place, and between 1959 and 1962 thousands of mixed-race Congolese children were forcibly deported from the Congo by the Belgian government and the Catholic Church and taken to Belgium.
Jews in Europe were generally forced, by decree or informal pressure, to live in highly segregated ghettos and shtetls. In 1204 the papacy required Jews to segregate themselves from Christians and wear distinctive clothing. Forced segregation of Jews spread throughout Europe during the 14th and 15th centuries. In the Russian Empire, Jews were restricted to the so-called Pale of Settlement, the Western frontier of the Russian Empire which roughly corresponds to the modern-day countries of Poland, Lithuania, Belarus, Moldova and Ukraine. By the early 20th century, the majority of Europe's Jews lived in the Pale of Settlement.
From the beginning of the 15th century, Jewish populations in Morocco were confined to mellahs. In cities, a "mellah" was surrounded by a wall with a fortified gateway. In contrast, rural "mellahs" were separate villages whose sole inhabitants were Jews.
In the middle of the 19th century, J. J. Benjamin wrote about the lives of Persian Jews:
On 16 May 1940, the "Administrasjonsrådet" asked the Rikskommisariatet why radio receivers had been confiscated from Jews in Norway. Tor Bomann-Larsen has claimed that the "Administrasjonsrådet" thereafter "quietly" accepted racial segregation between Norwegian citizens, and that this segregation "created a precedent": two years later (with "NS-styret" in the ministries of Norway), Norwegian police arrested citizens at the addresses where radios had previously been confiscated from Jews.
German praise for America's system of institutional racism, which was previously found in Adolf Hitler's "Mein Kampf", was continuous throughout the early 1930s, and radical Nazi lawyers were advocates of the use of American models. Race based U.S. citizenship laws and anti-miscegenation laws directly inspired the two principal Nuremberg Laws—the Citizenship Law and the Blood Law. The ban on interracial marriage (anti-miscegenation) prohibited sexual relations and marriages between people classified as "Aryan" and "non-Aryan". Such relationships were called "Rassenschande" (race defilement). At first the laws were aimed primarily at Jews but were later extended to "Gypsies, Negroes and their bastard offspring". Aryans found guilty could face incarceration in a Nazi concentration camp, while non-Aryans could face the death penalty. To preserve the so-called purity of the German blood, after the war began, the Nazis extended the race defilement law to include all foreigners (non-Germans).
Under the General Government of occupied Poland in 1940, the Nazis divided the population into different groups, each with different rights, food rations, allowed housing strips in the cities, public transportation, etc. In an effort to split Polish identity they attempted to establish ethnic divisions of Kashubians and Gorals (Goralenvolk), based on these groups' alleged "Germanic component".
During the 1930s and 1940s, Jews in Nazi-controlled states were made to wear something that identified them as Jewish, such as a yellow ribbon or star of David, and were, along with Romas (Gypsies), discriminated against by the racial laws. Jewish doctors were not allowed to treat Aryan patients nor were Jewish professors permitted to teach Aryan pupils. In addition, Jews were not allowed to use any public transportation, besides the ferry, and were able to shop only from 3–5 pm in Jewish stores. After "Kristallnacht" ("The Night of Broken Glass"), the Jews were fined 1,000,000 marks for damages done by the Nazi troops and SS members.
Jews and Roma were subjected to genocide as "undesirable" racial groups in the Holocaust. The Nazis established ghettos to confine Jews and sometimes Romas into tightly packed areas of the cities of Eastern Europe, turning them into "de facto" concentration camps. The Warsaw Ghetto was the largest of these ghettos, with 400,000 people. The Łódź Ghetto was the second largest, holding about 160,000.
Between 1939 and 1945, at least 1.5 million Polish citizens were transported to the Reich for forced labour (in all, about 12 million forced laborers were employed in the German war economy inside Nazi Germany). Although Nazi Germany also used forced laborers from Western Europe, Poles, along with other Eastern Europeans viewed as racially inferior, were subject to deeper discriminatory measures. They were forced to wear a cloth identifying tag, yellow with a purple border and the letter "P" (for Polen/Polish), sewn to their clothing; they were also subjected to a curfew and banned from public transportation.
While the treatment of factory workers or farm hands often varied depending on the individual employer, Polish laborers as a rule were compelled to work longer hours for lower wages than Western Europeans – in many cities, they were forced to live in segregated barracks behind barbed wire. Social relations with Germans outside work were forbidden, and sexual relations ("Rassenschande" or "racial defilement") were punishable by death.
In 1938, the fascist regime led by Benito Mussolini, under pressure from the Nazis, introduced a series of racial laws which instituted an official segregationist policy in the Italian Empire, directed especially against Italian Jews. This policy enforced various segregationist norms, such as laws banning Jews from teaching or studying in ordinary schools and universities, owning industries considered very important to the nation, working as journalists, entering the military, and marrying non-Jews.
Some of the immediate consequences of the introduction of the 'provvedimenti per la difesa della razza' (norms for the defence of the race) included many of the best Italian scientists leaving their jobs, or even leaving Italy. Amongst these were the world-renowned physicists Emilio Segrè, Enrico Fermi (whose wife was Jewish), Bruno Pontecorvo, Bruno Rossi, Tullio Levi-Civita, mathematicians Federigo Enriques and Guido Fubini and even the fascist propaganda director, art critic and journalist Margherita Sarfatti, who was one of Mussolini's mistresses. Rita Levi-Montalcini, who would successively win the Nobel Prize for Medicine, was forbidden to work at the university. Albert Einstein, upon passage of the racial law, resigned from his honorary membership in the Accademia dei Lincei.
After 1943, when Northern Italy was occupied by the Nazis, Italian Jews were rounded up and became victims of the Holocaust.
In fifteenth-century north-east Germany, people of Wendish, i.e. Slavic, origin were not allowed to join some guilds. According to Wilhelm Raabe, "down into the eighteenth century no German guild accepted a Wend."
After the passage of the Jim Crow laws, which segregated African Americans and whites, the people who were negatively affected by those laws saw no progress in their quest for equality. Racial segregation was not a new phenomenon, as illustrated by the fact that almost four million blacks had been enslaved before the Civil War. The laws segregated African Americans from whites in order to enforce a system of white supremacy. Signs were used to show non-whites where they could legally walk, talk, drink, rest, or eat. In those places that were racially mixed, blacks had to wait until all white customers had been served. Rules were also enforced that restricted African Americans from entering white stores. Segregated facilities extended from whites-only schools to whites-only graveyards.
After the Thirteenth Amendment abolished slavery in America, racial discrimination became regulated by the so-called Jim Crow laws, which mandated strict segregation of the races. Though many such laws were instituted shortly after fighting ended, they only became formalized after the 1877 end of the Reconstruction period. The period that followed is known as the nadir of American race relations. The legislation (or in some states, such as Florida, the state constitutions) that mandated segregation lasted at least until a 1968 ruling by the Supreme Court outlawing all forms of segregation.
While the U.S. Supreme Court majority in the 1896 "Plessy v. Ferguson" case explicitly permitted "separate but equal" facilities (specifically, transportation facilities), Justice John Marshall Harlan, in his dissent, protested that the decision was an expression of white supremacy; he predicted that segregation would "stimulate aggressions … upon the admitted rights of colored citizens", "arouse race hate", and "perpetuate a feeling of distrust between [the] races". Feelings between whites and blacks were so tense that even the jails were segregated.
Elected in 1912, President Woodrow Wilson tolerated the extension of segregation throughout the federal government that was already underway. In World War I, blacks were drafted and served in the United States Army in segregated units. Black combat soldiers were often poorly trained and equipped, and new draftees were put on the front lines in dangerous missions. The U.S. military was still heavily segregated in World War II. The air force and the marines had no blacks enlisted in their ranks, though there were blacks in the Navy Seabees. The army had only five African-American officers. In addition, no African American received the Medal of Honor during the war, and black soldiers' tasks in the war were largely reserved to noncombat units. Black soldiers sometimes had to give up their seats in trains to Nazi prisoners of war.
A club central to the Harlem Renaissance in the 1920s, the Cotton Club in Harlem, New York City was a whites-only establishment, with blacks (such as Duke Ellington) allowed to perform, but to a white audience. In the reception to honor his 1936 Olympic success, Jesse Owens was not permitted to enter through the main doors of the Waldorf Astoria New York and instead forced to travel up to the event in a freight elevator. The first black Oscar recipient Hattie McDaniel was not permitted to attend the premiere of "Gone with the Wind" with Georgia being racially segregated, and at the Oscars ceremony at the Ambassador Hotel in Los Angeles she was required to sit at a segregated table at the far wall of the room; the hotel had a no-blacks policy, but allowed McDaniel in as a favor. Her final wish to be buried in Hollywood Cemetery was denied because the graveyard was restricted to whites only.
On September 11, 1964, John Lennon announced The Beatles would not play to a segregated audience in Jacksonville, Florida. City officials relented following this announcement. A contract for a 1965 Beatles concert at the Cow Palace in California specifies that the band "not be required to perform in front of a segregated audience".
American sports were racially segregated until the mid-twentieth century. In baseball, the "Negro leagues" were established by Rube Foster for non-white players; Negro league baseball ran through the early 1950s. In basketball, the Black Fives (all-black teams) were established in 1904, and emerged in New York City, Washington, D.C., Chicago, Pittsburgh, Philadelphia, and other cities. Racial segregation in basketball lasted until 1950, when the NBA became racially integrated.
Many U.S. states banned interracial marriage. While opposed to slavery in the U.S, in a speech in Charleston, Illinois in 1858, Abraham Lincoln stated, "I am not, nor ever have been in favor of bringing about in any way the social and political equality of the white and black races, that I am not, nor ever have been in favor of making voters or jurors of negroes, nor of qualifying them to hold office, nor to intermarry with white people. I as much as any man am in favor of the superior position assigned to the white race". In 1967, Mildred Loving, a black woman, and Richard Loving, a white man, were sentenced to a year in prison in Virginia for marrying each other. Their marriage violated the state's anti-miscegenation statute, the Racial Integrity Act of 1924, which prohibited marriage between people classified as white and people classified as "colored" (persons of non-white ancestry). In the "Loving v. Virginia" case in 1967, the Supreme Court invalidated laws prohibiting interracial marriage in the U.S.
Institutionalized racial segregation was ended as an official practice during the civil rights movement by the efforts of such civil rights activists as Clarence M. Mitchell Jr., Rosa Parks, Martin Luther King Jr. and James Farmer working for social and political freedom during the period from the end of World War II through the Interstate Commerce Commission desegregation order of 1961, the passage of the Civil Rights Act in 1964 and the Voting Rights Act in 1965 supported by President Lyndon B. Johnson. Many of their efforts were acts of non-violent civil disobedience aimed at disrupting the enforcement of racial segregation rules and laws, such as refusing to give up a seat in the black part of the bus to a white person (Rosa Parks), or holding sit-ins at all-white diners.
By 1968, all forms of segregation had been declared unconstitutional by the Supreme Court under Chief Justice Earl Warren, and by 1970 support for formal legal segregation had dissolved. The Warren Court's decision on landmark case "Brown v. Board of Education" of Topeka, Kansas in 1954 outlawed segregation in public schools, and its decision on "Heart of Atlanta Motel, Inc. v. United States" in 1964 prohibits racial segregation and discrimination in public institutions and public accommodations. The Fair Housing Act of 1968, administered and enforced by the Office of Fair Housing and Equal Opportunity, prohibited discrimination in the sale and rental of housing on the basis of race, color, national origin, religion, sex, familial status, and disability. Formal racial discrimination became illegal in school systems, businesses, the American military, other civil services and the government.
The apartheid system of "separate development" was enacted as a nationwide social policy by the Afrikaner minority government following the National Party victory in 1948. It built on the discriminatory "colour bar" legislation dating back to the beginning of the Union of South Africa and the Boer republics before it, which, while repressive to black South Africans and other minorities, had not gone nearly so far.
Apartheid laws can generally be divided into the following acts. Firstly, the Population Registration Act of 1950 classified residents of South Africa into four racial groups: "black", "white", "Coloured", and "Indian", and noted their racial identities on their identity documents. Secondly, the Group Areas Act of 1950 assigned different regions to different races. People were forced to live in their corresponding regions, and passing the boundaries without a permit was made illegal, extending pass laws that had already curtailed black movement. Thirdly, under the Reservation of Separate Amenities Act of 1953, amenities in public areas, like hospitals, universities and parks, were labeled separately according to particular races. In addition, the Bantu Education Act of 1953 segregated national education in South Africa as well. Additionally, the government of the time enforced the pass laws, which deprived black South Africans of their right to travel freely within their own country. Under this system black people were severely restricted from urban areas, requiring authorisation from a white employer to enter.
Uprisings and protests against apartheid appeared as soon as apartheid arose. As early as 1949, the youth wing of the African National Congress (ANC) advocated the ending of apartheid and suggested fighting racial segregation by various methods. During the following decades, hundreds of anti-apartheid actions occurred, including those of the Black Consciousness Movement, students' protests, labor strikes, and church group activism. In 1991, the Abolition of Racially Based Land Measures Act was passed, repealing laws enforcing racial segregation, including the Group Areas Act. In 1994, Nelson Mandela won the first multiracial democratic election in South Africa. His victory marked the end of apartheid in South African history.
On 28 April 2007, the lower house of Bahraini Parliament passed a law banning unmarried migrant workers from living in residential areas. To justify the law MP Nasser Fadhala, a close ally of the government said "bachelors also use these houses to make alcohol, run prostitute rings or to rape children and housemaids".
Sadiq Rahma, technical committee head, who is a member of Al Wefaq said: "The rules we are drawing up are designed to protect the rights of both the families and the Asian bachelors (..) these labourers often have habits which are difficult for families living nearby to tolerate (..) they come out of their homes half dressed, brew alcohol illegally in their homes, use prostitutes and make the neighbourhood dirty (..) these are poor people who often live in groups of 50 or more, crammed into one house or apartment," said Mr Rahma. "The rules also state that there must be at least one bathroom for every five people (..) there have also been cases in which young children have been sexually molested."
Bahrain Centre for Human Rights issued a press release condemning this decision as discriminatory and promoting negative racist attitudes towards migrant workers. Nabeel Rajab, then BCHR vice president, said: "It is appalling that Bahrain is willing to rest on the benefits of these people's hard work, and often their suffering, but that they refuse to live with them in equality and dignity. The solution is not to force migrant workers into ghettos, but to urge companies to improve living conditions for workers – and not to accommodate large numbers of workers in inadequate space, and to improve the standard of living for them."
Since the 1970s, there has been a concern expressed by some academics that major Canadian cities are becoming more segregated along income and ethnic lines. Reports have indicated that the inner suburbs of post-merger Toronto and the southern bedroom communities of Greater Vancouver have become steadily more immigrant- and visible-minority-dominated communities and have lagged behind other neighbourhoods in average income. A CBC panel in Vancouver in 2012 discussed the growing public fear that the proliferation of ethnic enclaves in Greater Vancouver (such as Han Chinese in Richmond and Punjabis in Surrey) amounted to a type of self-segregation. In response to these fears, many minority activists have pointed out that most Canadian neighbourhoods remain predominantly white, and yet whites are never accused of "self-segregation".
The Mohawk tribe of Kahnawake has been criticized for evicting non-Mohawks from the Mohawk reserve. Mohawks who marry outside of their tribal nation lose their right to live in their homelands. The Mohawk government claims that its policy of nationally exclusive membership is for the preservation of its identity, but there is no exemption for those who adopt Mohawk language or culture. The policy is based on a 1981 moratorium which was made law in 1984. All interracial couples are sent eviction notices regardless of how long they have lived on the reserve. The only exemption is for mixed national couples married before the 1981 moratorium.
Although some concerned Mohawk citizens contested the nationally exclusive membership policy, the Canadian Human Rights Tribunal ruled that the Mohawk government may adopt policies it deems necessary to ensure the survival of its people.
A long-standing practice of national segregation has also been imposed upon the commercial salmon fishery in British Columbia since 1992, when separate commercial fisheries were created for select aboriginal groups on three B.C. river systems. Canadians of other nations who fish in the separate fisheries have been arrested, jailed and prosecuted. Although the fishermen who were prosecuted were successful at trial (see the decision in R. v. Kapp), the decision was overturned on appeal. On final appeal, the Supreme Court of Canada ruled in favour of the program on the grounds that segregation of this workplace is a step towards equality in Canada. Affirmative action programs in Canada are protected from equality rights challenges by s. 15(2) of the Canadian Charter of Rights and Freedoms. Segregation continues today; more than 35% of the fishermen in the BC commercial fishery are of aboriginal ancestry, even though Canadians of aboriginal ancestry comprise less than 4% of BC's population.
Two military coups in Fiji in 1987 removed a democratically elected government led by an Indo-Fijian. The coups were supported principally by the ethnic Fijian population. A new constitution was promulgated in 1990, establishing Fiji as a republic, with the offices of President, Prime Minister, two-thirds of the Senate, and a clear majority of the House of Representatives reserved for ethnic Fijians; ethnic Fijian ownership of the land was also entrenched in the constitution. Most of these provisions were ended with the promulgation of the 1997 Constitution, although the President and 14 of the 32 Senators were still selected by the all-indigenous Great Council of Chiefs. The last of these distinctions were removed by the 2013 Constitution.
Fiji's case is one of de facto ethnic segregation. Fiji has a long, complex history, with more than 3,500 years as a divided tribal nation. Unification under British rule as a colony for 96 years brought other racial groups, particularly immigrants from the Indian subcontinent.
The Israeli Declaration of Independence proclaims equal rights for all citizens regardless of ethnicity, denomination or race. Israel has a substantial list of laws that demand racial equality (such as prohibitions on discrimination, requirements for equality in employment, and laws against libel based on race or ethnicity). There is, however, in practice, significant institutional, legal, and societal discrimination against the country's Arab citizens.
In 2010, the Israeli Supreme Court sent a message against racial segregation in a case involving the Slonim Hasidic sect of Ashkenazi Jews, ruling that segregation between Ashkenazi and Sephardi students in a school is illegal. The sect argued that it sought "to maintain an equal level of religiosity, not from racism". Responding to the charges, the Slonim Haredim invited Sephardi girls to the school, and added in a statement: "All along, we said it's not about race, but the High Court went out against our rabbis, and therefore we went to prison."
Due to many cultural differences, and animosity towards a minority perceived to wish to annihilate Israel, a system of passively co-existing communities, segregated along ethnic lines, has emerged in Israel, with Arab-Israeli minority communities being left "marooned outside the mainstream". This de facto segregation also exists between different Jewish ethnic groups ("edot") such as Sephardim, Ashkenazim and Beta Israel (Jews of Ethiopian descent), which leads to de facto segregated schools, housing and public policy. The government has embarked on a program to shut down such schools in order to force integration, but some in the Ethiopian community complained that not all such schools have been closed. In a 2007 poll commissioned by the Center Against Racism and conducted by the GeoCartographia Institute, 75% of Israeli Jews said they would not agree to live in a building with Arab residents, 60% would not accept any Arab visitors at their homes, 40% believed that Arabs should be stripped of their right to vote, and 59% believed that the culture of Arabs is primitive. In a 2012 public opinion poll, 53% of the Israeli Jews polled said they would not object to an Arab living in their building, while 42% said they would. Asked whether they would object to Arab children being in their child's class in school, 49% said they would not and 42% said they would. The secular Israeli public was found to be the most tolerant, while the religious and Haredi respondents were the most discriminatory.
The end of British colonial rule in Kenya in 1964 led to an inadvertent increase in ethnic segregation. Through private purchases and government schemes, farmland previously held by European farmers was transferred to African owners. These farms were further subdivided into smaller localities, and, due to joint migration, many adjacent localities were occupied by members of different ethnic groups. Separation along these boundaries persists today. Kimuli Kasara, in a study of ethnic violence in the wake of the disputed 2007/2008 Kenyan elections, used these post-colonial boundaries as an instrument for the degree of ethnic segregation. Through a two-stage least squares regression analysis, Kasara showed that increased ethnic segregation in Kenya's Rift Valley Province is associated with an increase in ethnic violence.
The Liberian Constitution limits Liberian nationality to Negro people (see also Liberian nationality law).
For example, Lebanese and Indian nationals are active in trading, as well as in the retail and service sectors, while Europeans and Americans work in the mining and agricultural sectors. These minority groups have long been resident in the Republic, but many are precluded from becoming citizens as a result of their race.
Malaysia has an article in its constitution that distinguishes the bumiputra (ethnic Malays and other indigenous peoples) from non-bumiputra such as ethnic Chinese and Indians under the social contract, which by law guarantees the former certain special rights and privileges. Questioning these rights and privileges, however, is strictly prohibited under the Internal Security Act, as legalised by Article 10(IV) of the Constitution of Malaysia. The privileges cover, among other things, the economic and educational spheres, e.g. the Malaysian New Economic Policy, an economic policy criticised by Thierry Rommel, who headed a European Commission delegation to Malaysia, as an excuse for "significant protectionism", and a quota maintaining higher access of Malays to public universities.
While legal racial segregation in daily life is not practiced, self-segregation does exist.
Slavery in Mauritania was finally criminalized in August 2007. It had already been abolished in 1980, but it continued to affect black Africans. The exact number of slaves in the country was not known, but it was estimated at up to 600,000 men, women and children, or 20% of the population.
For centuries, the so-called Haratin lower class, mostly poor black Africans living in rural areas, have been considered natural slaves by white Moors of Arab/Berber ancestry. Many descendants of the Arab and Berber tribes today still adhere to the supremacist ideology of their ancestors. This ideology has led to oppression, discrimination and even enslavement of other groups in the region of Sudan and Western Sahara.
The United Kingdom has no legally sanctioned system of racial segregation and has a substantial list of laws that demand racial equality. However, due to many cultural differences, a system of passively co-existing communities segregated along racial lines has emerged in parts of the United Kingdom, with minority communities being left "marooned outside the mainstream".
The affected and "ghettoised" communities are often largely composed of Pakistanis, Indians and other South Asians, and this segregation is thought to be a basis of ethnic tensions and of a deterioration in the standard of living and in levels of education and employment among ethnic minorities in poorer areas. These factors are considered by some to have been a cause of the 2001 race riots in Bradford, Oldham and Burnley, towns in the north of England with large Asian communities.
There is some indication that such segregation, particularly residential segregation, results from the unilateral "steering" of ethnic groups into particular areas, as well as from a culture of vendor discrimination and distrust of ethnic-minority clients among some estate agents and other property professionals. This may reflect a market preference among the more wealthy to reside in areas of less ethnic mixture, with less ethnic mixture being perceived as increasing the value and desirability of a residential area. Other theories, such as "ethnic self-segregation", have sometimes been shown to be baseless, and a majority of ethnic-minority respondents to several surveys on the matter have been in favour of wider social and residential integration.
De facto segregation in the United States has increased since the civil rights movement. The Supreme Court ruled in Milliken v. Bradley (1974) that de facto racial segregation was acceptable, as long as schools were not actively making policies for racial exclusion; since then, schools have been segregated due to myriad indirect factors.
Redlining is part of how white communities maintained racial segregation. It is the practice of denying, or increasing the cost of, services such as mortgages, banking, insurance, access to jobs, access to health care, or even supermarkets to residents of certain, often racially determined, areas. The most devastating form of redlining, and the most common use of the term, refers to mortgage discrimination. In the decades that followed, a succession of court decisions and federal laws, including the "Home Mortgage Disclosure Act" and measures to end mortgage discrimination in 1975, would completely invalidate "de jure" racial segregation and discrimination in the U.S., although "de facto" segregation and discrimination have proven more resilient. According to the Civil Rights Project at Harvard University, the actual desegregation of U.S. public schools peaked in the late 1980s; since that time, schools have, in fact, become more segregated, mainly due to the ethnic segregation of the nation, with whites dominating the suburbs and minorities the urban centers. According to Rajiv Sethi, an economist at Columbia University, black-white segregation in housing is slowly declining for most metropolitan areas in the US. Racial segregation or separation can lead to social, economic and political tensions. Thirty years after the civil rights era, in 2000, the United States remained in many areas a residentially segregated society, in which blacks, whites and Hispanics inhabit different neighborhoods of vastly different quality.
Dan Immergluck writes that in 2002 small businesses in black neighborhoods still received fewer loans, even after accounting for business density, business size, industrial mix, neighborhood income, and the credit quality of local businesses. Gregory D. Squires wrote in 2003 that it is clear that race has long affected and continues to affect the policies and practices of the insurance industry. Workers living in American inner cities have a harder time finding jobs than suburban workers.
The desire of many whites to avoid having their children attend integrated schools has been a factor in white flight to the suburbs. A 2007 study in San Francisco showed that groups of homeowners of all races tended to self-segregate in order to be with people of the same education level and race. By 1990, the legal barriers enforcing segregation had mostly been replaced by decentralized racism, where white people pay more than black people to live in predominantly white areas. Today, many whites are willing to pay a premium to live in a predominantly white neighborhood. Equivalent housing in white areas commands a higher rent. These higher rents are largely attributable to exclusionary zoning policies that restrict the supply of housing. Regulations ensure that all housing units are expensive enough to prevent access by undesirable groups. By bidding up the price of housing, many white neighborhoods effectively shut out black people, because they may be unwilling, or unable, to pay the premium to buy entry into these expensive neighborhoods. Conversely, equivalent housing in black neighborhoods is far more affordable to those who are unable or unwilling to pay a premium to live in white neighborhoods. Through the 1990s, residential segregation remained extreme, a condition some sociologists have called "hypersegregation" or "American apartheid".
In February 2005, the U.S. Supreme Court ruled in "Johnson v. California" that the California Department of Corrections' unwritten practice of racially segregating prisoners in its prison reception centers—which California claimed was for inmate safety (gangs in California, as throughout the U.S., usually organize on racial lines)—is to be subject to strict scrutiny, the highest level of constitutional review.
In Yemen, the Arab elite practices a form of discrimination against the lower-class Al-Akhdam people based on their racial characteristics.
https://en.wikipedia.org/wiki?curid=26316
Roslagen
Roslagen is the name of the coastal areas of Uppland province in Sweden, which also constitute the northern part of the Stockholm archipelago.
Historically, it was the name for all the coastal areas of the Baltic Sea, including the eastern parts of lake Mälaren, belonging to Svealand. The name was first mentioned in the year 1493 as "Rodzlagen". Before that the area was known as "Roden", which is the coastal equivalent to inland Hundreds. When the king would issue a call to leidang, the Viking Age equivalent of military conscript service, Roden districts were responsible for raising a number of ships for the leidang navy.
The name comes from "rodslag", an old coastal Uppland word for a rowing crew of warrior oarsmen. Etymologically, Roden, or Roslagen, is the source of the Finnish and Estonian names for Sweden: "Ruotsi" and "Rootsi".
A person from Roslagen is called a "Rospigg" which means "inhabitant of Ros". Swedes from the Roslagen area, that is "the people of Ros", gave their name to the Rus' people and thus to the states of Russia and Belarus (see Rus' (name)).
The area also gives its name to the endangered domesticated Roslag sheep, which originated in the area centuries ago. It is served by the Roslagsbanan, a narrow-gauge railway network from Stockholm.
https://en.wikipedia.org/wiki?curid=26318
Ramjet
A ramjet, sometimes referred to as a flying stovepipe or an athodyd (aero thermodynamic duct), is a form of airbreathing jet engine that uses the engine's forward motion to compress incoming air without an axial compressor or a centrifugal compressor. Because ramjets cannot produce thrust at zero airspeed, they cannot move an aircraft from a standstill. A ramjet-powered vehicle, therefore, requires an assisted take-off, such as a rocket assist, to accelerate it to a speed where it begins to produce thrust. Ramjets work most efficiently at supersonic speeds around Mach 3. This type of engine can operate up to speeds of Mach 6.
Ramjets can be particularly useful in applications requiring a small and simple mechanism for high-speed use, such as missiles. The US, Canada, and UK deployed widespread ramjet-powered missile defenses from the 1960s onward, such as the CIM-10 Bomarc and Bloodhound. Weapon designers are looking to use ramjet technology in artillery shells to give added range; a 120 mm mortar shell, if assisted by a ramjet, is thought to be able to attain a range of . Ramjets have also been used successfully, though not efficiently, as tip jets on the ends of helicopter rotors.
Ramjets differ from pulsejets, which use an intermittent combustion; ramjets employ a continuous combustion process.
As speed increases, the efficiency of a ramjet starts to drop as the air temperature in the inlet increases due to compression. As the inlet temperature gets closer to the exhaust temperature, less energy can be extracted in the form of thrust. To produce a usable amount of thrust at yet higher speeds, the ramjet must be modified so that the incoming air is not compressed (and therefore heated) nearly as much. This means that the air flowing through the combustion chamber is still moving very fast (relative to the engine), in fact it will be supersonic—hence the name supersonic-combustion ramjet, or scramjet.
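The inlet-temperature effect described above follows directly from the stagnation-temperature relation of compressible flow. The sketch below uses that standard textbook formula (it is not specific to any particular engine, and the 220 K static temperature is an assumed stratospheric value) to show how the recovered air temperature grows with Mach number, approaching typical combustion temperatures:

```python
def stagnation_temperature(static_temp_k: float, mach: float, gamma: float = 1.4) -> float:
    """Total (stagnation) temperature of air brought to rest adiabatically."""
    return static_temp_k * (1.0 + 0.5 * (gamma - 1.0) * mach**2)

# At high supersonic speeds the recovered air temperature alone approaches
# typical combustion temperatures, leaving little room to add heat by burning fuel:
for mach in (2.0, 4.0, 6.0, 8.0):
    t0 = stagnation_temperature(220.0, mach)  # 220 K: assumed stratospheric static temperature
    print(f"Mach {mach}: inlet stagnation temperature ~ {t0:.0f} K")
```

At Mach 6 the inlet stagnation temperature already exceeds 1800 K, which is why the scramjet avoids decelerating the flow to subsonic speed.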
"L'Autre Monde: ou les États et Empires de la Lune (Comical History of the States and Empires of the Moon)" (1657) was the first of three satirical novels written by Cyrano de Bergerac that are considered among the first science fiction stories. Arthur C. Clarke credited this book with inventing the ramjet and with being the first example of rocket-powered space flight.
The ramjet was conceived in 1913 by French inventor René Lorin, who was granted a patent for his device. Attempts to build a prototype failed due to inadequate materials.
In 1915, Hungarian inventor Albert Fonó devised a solution for increasing the range of artillery, comprising a gun-launched projectile which was to be united with a ramjet propulsion unit, thus giving a long range from relatively low muzzle velocities, allowing heavy shells to be fired from relatively lightweight guns. Fonó submitted his invention to the Austro-Hungarian Army, but the proposal was rejected. After World War I, Fonó returned to the subject of jet propulsion, in May 1928 describing an "air-jet engine" which he described as being suitable for high-altitude supersonic aircraft, in a German patent application. In an additional patent application, he adapted the engine for subsonic speed. The patent was granted in 1932 after four years of examination (German Patent No. 554,906, 1932-11-02).
In the Soviet Union, a theory of supersonic ramjet engines was presented in 1928 by Boris Stechkin. Yuri Pobedonostsev, chief of GIRD's 3rd Brigade, carried out a great deal of research into ramjet engines. The first engine, the GIRD-04, was designed by I.A. Merkulov and tested in April 1933. To simulate supersonic flight, it was fed by air compressed to , and was fueled with hydrogen. The GIRD-08 phosphorus-fueled ramjet was tested by firing it from an artillery cannon. These shells may have been the first jet-powered projectiles to break the speed of sound.
In 1939, Merkulov did further ramjet tests using a two-stage rocket, the R-3. That August, he developed the first ramjet engine for use as an auxiliary motor of an aircraft, the DM-1. The world's first ramjet-powered airplane flight took place in December 1940, using two DM-2 engines on a modified Polikarpov I-15. Merkulov designed a ramjet fighter "Samolet D" in 1941, which was never completed. Two of his DM-4 engines were installed on the Yak-7 PVRD fighter, during World War II. In 1940, the Kostikov-302 experimental plane was designed, powered by a liquid fuel rocket for take-off and ramjet engines for flight. That project was cancelled in 1944.
In 1947, Mstislav Keldysh proposed a long-range antipodal bomber, similar to the Sänger-Bredt bomber, but powered by ramjet instead of rocket. In 1954, NPO Lavochkin and the Keldysh Institute began development of a Mach 3 ramjet-powered cruise missile, "Burya". This project competed with the R-7 ICBM being developed by Sergei Korolev, and was cancelled in 1957.
On March 1, 2018, President Vladimir Putin announced that Russia had developed a (presumed) nuclear-powered ramjet cruise missile capable of extended long-range flight.
In 1936, Hellmuth Walter constructed a test engine powered by natural gas. Theoretical work was carried out at BMW and Junkers, as well as DFL. In 1941, Eugen Sänger of DFL proposed a ramjet engine with a very high combustion chamber temperature. He constructed very large ramjet pipes with and diameter and carried out combustion tests on lorries and on a special test rig on a Dornier Do 17Z at flight speeds of up to . Later, with petrol becoming scarce in Germany due to wartime conditions, tests were carried out with blocks of pressed coal dust as a fuel, which were not successful due to slow combustion.
The US Navy developed a series of air-to-air missiles under the name of "Gorgon" using different propulsion mechanisms, including ramjet propulsion on the Gorgon IV. The ramjet Gorgon IVs, made by Glenn Martin, were tested in 1948 and 1949 at Naval Air Station Point Mugu. The ramjet engine itself was designed at the University of Southern California and manufactured by the Marquardt Aircraft Company. The engine was long and in diameter and was positioned below the missile.
In the early 1950s the US developed a Mach 4+ ramjet under the Lockheed X-7 program. This was developed into the Lockheed AQM-60 Kingfisher. Further development resulted in the Lockheed D-21 spy drone.
In the late 1950s the US Navy introduced the RIM-8 Talos, a long-range surface-to-air missile fired from ships. It successfully shot down several enemy fighters during the Vietnam War and was the first ship-launched missile ever to destroy an enemy aircraft in combat. On May 23, 1968, a Talos fired from USS Long Beach shot down a Vietnamese MiG at a range of about 65 miles. It was also used as a surface-to-surface weapon and was successfully modified to destroy land-based radar systems.
Using the technology proven by the AQM-60, in the late 1950s and early 1960s the US produced a widespread defense system called the CIM-10 Bomarc, which was equipped with hundreds of nuclear-armed ramjet missiles with a range of several hundred miles. It was powered by the same engines as the AQM-60, but with improved materials to withstand the longer flight times. The system was withdrawn in the 1970s as the threat from bombers was reduced.
In the late 1950s and early 1960s the UK developed several ramjet missiles.
A project called Blue Envoy was supposed to equip the country with a long range ramjet powered air defense against bombers, but the system was eventually cancelled.
It was replaced by a much shorter range ramjet missile system called the Bloodhound. The system was designed as a second line of defense in case attackers were able to bypass the fleet of defending English Electric Lightning fighters.
In the 1960s the Royal Navy developed and deployed the Sea Dart, a ramjet-powered surface-to-air missile for ships. It had a range of 40-80 miles and a speed of Mach 3. It was used successfully in combat against multiple types of aircraft during the Falklands War.
Eminent Swiss astrophysicist Fritz Zwicky was research director at Aerojet and holds many patents in jet propulsion. U.S. patents 5121670 and 4722261 are for ram accelerators. The U.S. Navy would not allow Fritz Zwicky to publicly discuss his own invention, U.S. Patent 2,461,797 for the Underwater Jet, a ram jet that performs in a fluid medium. "Time" magazine reported Fritz Zwicky's work in the articles "Missed Swiss" on July 11, 1955 and "Underwater Jet" in the March 14, 1949 issue.
In France, the work of René Leduc was notable. His Leduc 0.10 was one of the first ramjet-powered aircraft to fly, in 1949.
The Nord 1500 Griffon reached in 1958.
The Brayton cycle is a thermodynamic cycle that describes the workings of the gas turbine engine, the basis of the airbreathing jet engine and others. It is named after George Brayton, the American engineer who developed it, although it was originally proposed and patented by Englishman John Barber in 1791. It is also sometimes known as the Joule cycle.
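As an illustration, the ideal Brayton-cycle efficiency depends only on the compression ratio, which in a ramjet is set by ram compression rather than by turbomachinery. A minimal sketch under ideal-gas, isentropic assumptions (the Mach numbers chosen are illustrative):

```python
def ram_pressure_ratio(mach: float, gamma: float = 1.4) -> float:
    """Ideal (isentropic) stagnation-to-static pressure ratio from ram compression."""
    return (1.0 + 0.5 * (gamma - 1.0) * mach**2) ** (gamma / (gamma - 1.0))

def brayton_efficiency(pressure_ratio: float, gamma: float = 1.4) -> float:
    """Thermal efficiency of the ideal Brayton cycle at a given pressure ratio."""
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

# Ram compression is weak at low speed and strong at high speed, which is why
# ramjet cycle efficiency is poor near Mach 0.5 but good near Mach 3:
for mach in (0.5, 1.0, 3.0):
    pr = ram_pressure_ratio(mach)
    print(f"Mach {mach}: pressure ratio {pr:6.1f}, ideal efficiency {brayton_efficiency(pr):.2f}")
```

For the ideal cycle these two formulas combine to an efficiency of (γ-1)M²/2 divided by (1 + (γ-1)M²/2), roughly 5% at Mach 0.5 but about 64% at Mach 3.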
A ramjet is designed around its inlet. An object moving at high speed through air generates a high pressure region upstream. A ramjet uses this high pressure in front of the engine to force air through the tube, where it is heated by combusting some of it with fuel. It is then passed through a nozzle to accelerate it to supersonic speeds. This acceleration gives the ramjet forward thrust.
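In the simplest one-dimensional view, the forward thrust produced by this process is just the momentum difference between the exhaust and intake streams. A hedged sketch (it assumes a fully expanded nozzle, so pressure thrust is ignored; the flow numbers are illustrative, not taken from any real engine):

```python
def ramjet_thrust(mdot_air: float, v_flight: float, v_exhaust: float,
                  fuel_air_ratio: float = 0.0) -> float:
    """Net thrust (N) from the momentum change of the air stream.

    Assumes a fully expanded nozzle; the added fuel mass flow is optionally
    included via fuel_air_ratio.
    """
    return mdot_air * ((1.0 + fuel_air_ratio) * v_exhaust - v_flight)

# e.g. 50 kg/s of air entering at 900 m/s and leaving at 1800 m/s:
print(ramjet_thrust(50.0, 900.0, 1800.0))  # 45000.0 N
```

The same expression makes clear why a ramjet produces no static thrust: with no flight speed there is no ram compression, hence no exhaust velocity gain to multiply.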
A ramjet is sometimes referred to as a "flying stovepipe", a very simple device comprising an air intake, a combustor, and a nozzle. Normally, the only moving parts are those within the turbopump, which pumps the fuel to the combustor in a liquid-fuel ramjet. Solid-fuel ramjets are even simpler.
By way of comparison, a turbojet uses a gas turbine-driven fan to compress the air further. This gives greater compression and efficiency and far more power at low speeds (where the ram effect is weak), but is more complex, heavier, expensive, and the temperature limits of the turbine section limit the top speed and thrust at high speed.
Ramjets try to exploit the very high dynamic pressure within the air approaching the intake lip. An efficient intake will recover much of the freestream stagnation pressure, which is used to support the combustion and expansion process in the nozzle.
Most ramjets operate at supersonic flight speeds and use one or more conical (or oblique) shock waves, terminated by a strong normal shock, to slow down the airflow to a subsonic velocity at the exit of the intake. Further diffusion is then required to get the air velocity down to a suitable level for the combustor.
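The stagnation-pressure penalty of the terminating normal shock can be computed from the standard normal-shock relations; it is the reason intakes decelerate the flow through several weak oblique shocks before a weak normal shock rather than through one strong normal shock. A sketch using the textbook relations for a calorically perfect gas (γ = 1.4 assumed for air):

```python
def normal_shock(m1: float, gamma: float = 1.4):
    """Post-shock Mach number and stagnation-pressure ratio across a normal shock."""
    g = gamma
    m2 = ((1 + 0.5 * (g - 1) * m1**2) / (g * m1**2 - 0.5 * (g - 1))) ** 0.5
    p0_ratio = (
        ((g + 1) * m1**2 / 2 / (1 + 0.5 * (g - 1) * m1**2)) ** (g / (g - 1))
        * ((g + 1) / (2 * g * m1**2 - (g - 1))) ** (1 / (g - 1))
    )
    return m2, p0_ratio

# Stronger shocks destroy more stagnation pressure, which is why the flow is
# slowed down in stages before the final normal shock:
for m1 in (1.5, 2.0, 3.0):
    m2, p0 = normal_shock(m1)
    print(f"M1 = {m1}: post-shock M2 = {m2:.3f}, stagnation pressure retained = {p0:.1%}")
```

A single normal shock at Mach 2 already loses about 28% of the freestream stagnation pressure, and the loss grows rapidly with Mach number.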
Subsonic ramjets do not need such a sophisticated inlet, since the airflow is already subsonic, and a simple hole is usually used. This would also work at slightly supersonic speeds, but as the air will choke at the inlet, this is inefficient.
The inlet is divergent, to provide a constant inlet speed of .
As with other jet engines, the combustor's job is to create hot air, by burning a fuel with the air at essentially constant pressure. The airflow through the jet engine is usually quite high, so sheltered combustion zones are produced by using "flame holders" to stop the flames from blowing out.
Since there is no downstream turbine, a ramjet combustor can safely operate at stoichiometric fuel:air ratios, which implies a combustor exit stagnation temperature of the order of for kerosene. Normally, the combustor must be capable of operating over a wide range of throttle settings, for a range of flight speeds/altitudes. Usually, a sheltered pilot region enables combustion to continue when the vehicle intake undergoes high yaw/pitch during turns. Other flame stabilization techniques make use of flame holders, which vary in design from combustor cans to simple flat plates, to shelter the flame and improve fuel mixing. Overfuelling the combustor can cause the normal shock within a supersonic intake system to be pushed forward beyond the intake lip, resulting in a substantial drop in engine airflow and net thrust.
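The stoichiometric fuel:air ratio mentioned above can be estimated from the combustion balance of a kerosene-like hydrocarbon. A sketch in which modelling kerosene as C12H26 and the O2 mass fraction of air are simplifying assumptions:

```python
def stoich_fuel_air_ratio(c_atoms: int, h_atoms: int) -> float:
    """Stoichiometric fuel:air mass ratio for a CxHy hydrocarbon burned in air."""
    fuel_mass = 12.011 * c_atoms + 1.008 * h_atoms   # g per mole of fuel
    o2_moles = c_atoms + h_atoms / 4.0               # CxHy + (x + y/4) O2 -> x CO2 + (y/2) H2O
    air_mass = o2_moles * 31.998 / 0.232             # air is ~23.2% O2 by mass (assumed)
    return fuel_mass / air_mass

print(f"kerosene (as C12H26): f = {stoich_fuel_air_ratio(12, 26):.3f}")
```

The result, roughly 0.067, matches the commonly quoted stoichiometric ratio for kerosene of about 1:15 by mass.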
The propelling nozzle is a critical part of a ramjet design, since it accelerates exhaust flow to produce thrust.
For a ramjet operating at a subsonic-flight Mach number, exhaust flow is accelerated through a converging nozzle. For a supersonic-flight Mach number, acceleration is typically achieved by a convergent–divergent nozzle.
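Under the usual isentropic, ideal-gas assumptions, the fully expanded exhaust velocity follows from the combustor stagnation temperature and the nozzle pressure ratio. A sketch (the γ and R values for hot combustion products, and the chamber conditions, are rough assumptions for illustration):

```python
def exit_velocity(t0_k: float, p_ratio: float, gamma: float = 1.33, r: float = 287.0) -> float:
    """Ideal fully expanded exhaust velocity (m/s).

    t0_k:    combustor exit stagnation temperature
    p_ratio: exit static pressure / chamber stagnation pressure
    """
    cp = gamma * r / (gamma - 1.0)  # specific heat at constant pressure
    return (2.0 * cp * t0_k * (1.0 - p_ratio ** ((gamma - 1.0) / gamma))) ** 0.5

# Assumed chamber temperature of 2200 K and an overall pressure ratio of 30:
v = exit_velocity(2200.0, 1.0 / 30.0)
print(f"ideal exhaust velocity ~ {v:.0f} m/s")
```

With these assumed numbers the ideal exhaust velocity comes out around 1700 m/s; real nozzles recover somewhat less because of friction and incomplete expansion.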
Although ramjets have been run as slow as , below about they give little thrust and are highly inefficient due to their low pressure ratios.
Above this speed, given sufficient initial flight velocity, a ramjet will be self-sustaining. Indeed, unless the vehicle drag is extremely high, the engine/airframe combination will tend to accelerate to higher and higher flight speeds, substantially increasing the air intake temperature. As this could have a detrimental effect on the integrity of the engine and/or airframe, the fuel control system must reduce engine fuel flow to stabilize the flight Mach number and, thereby, air intake temperature to reasonable levels.
Due to the stoichiometric combustion temperature, efficiency is usually good at high speeds (around ), whereas at low speeds the relatively poor pressure ratio means the ramjets are outperformed by turbojets, or even rockets.
Ramjets can be classified according to the type of fuel, liquid or solid; and the booster.
In a liquid fuel ramjet (LFRJ), hydrocarbon fuel (typically) is injected into the combustor ahead of a flameholder which stabilises the flame resulting from the combustion of the fuel with the compressed air from the intake(s). A means of pressurizing and supplying the fuel to the ramcombustor is required, which can be complicated and expensive. Aérospatiale-Celerg designed an LFRJ where the fuel is forced into the injectors by an elastomer bladder which inflates progressively along the length of the fuel tank. Initially, the bladder forms a close-fitting sheath around the compressed air bottle from which it is inflated, which is mounted lengthwise in the tank. This offers a lower-cost approach than a regulated LFRJ requiring a turbopump and associated hardware to supply the fuel.
A ramjet generates no static thrust and needs a booster to achieve a forward velocity high enough for efficient operation of the intake system. The first ramjet-powered missiles used external boosters, usually solid-propellant rockets, either in tandem, where the booster is mounted immediately aft of the ramjet, e.g. Sea Dart, or wraparound where multiple boosters are attached alongside the outside of the ramjet, e.g. 2K11 Krug. The choice of booster arrangement is usually driven by the size of the launch platform. A tandem booster increases the overall length of the system, whereas wraparound boosters increase the overall diameter. Wraparound boosters will usually generate higher drag than a tandem arrangement.
Integrated boosters provide a more efficient packaging option, since the booster propellant is cast inside the otherwise empty combustor. This approach has been used on solid, for example 2K12 Kub, liquid, for example ASMP, and ducted rocket, for example Meteor, designs. Integrated designs are complicated by the different nozzle requirements of the boost and ramjet phases of flight. Due to the higher thrust levels of the booster, a differently shaped nozzle is required for optimum thrust compared to that required for the lower thrust ramjet sustainer. This is usually achieved via a separate nozzle, which is ejected after booster burnout. However, designs such as Meteor feature nozzleless boosters. This offers the advantages of elimination of the hazard to launch aircraft from the ejected boost nozzle debris, simplicity, reliability, and reduced mass and cost, although this must be traded against the reduction in performance compared with that provided by a dedicated booster nozzle.
A slight variation on the ramjet uses the supersonic exhaust from a rocket combustion process to compress and react with the incoming air in the main combustion chamber. This has the advantage of giving thrust even at zero speed.
In a solid fuel integrated rocket ramjet (SFIRR), the solid fuel is cast along the outer wall of the ramcombustor. In this case, fuel injection is through ablation of the propellant by the hot compressed air from the intake(s). An aft mixer may be used to improve combustion efficiency. SFIRRs are preferred over LFRJs for some applications because of the simplicity of the fuel supply, but only when the throttling requirements are minimal, i.e. when variations in altitude or Mach number are limited.
In a ducted rocket, a solid fuel gas generator produces a hot fuel-rich gas which is burnt in the ramcombustor with the compressed air supplied by the intake(s). The flow of gas improves the mixing of the fuel and air and increases total pressure recovery. In a throttleable ducted rocket, also known as a variable flow ducted rocket, a valve allows the gas generator exhaust to be throttled allowing control of the thrust. Unlike an LFRJ, solid propellant ramjets cannot flame out. The ducted rocket sits somewhere between the simplicity of the SFRJ and the unlimited throttleability of the LFRJ.
Ramjets generally give little or no thrust below about half the speed of sound, and they are inefficient (specific impulse of less than 600 seconds) until the airspeed exceeds due to low compression ratios.
Even above the minimum speed, a wide flight envelope (range of flight conditions), such as low to high speeds and low to high altitudes, can force significant design compromises, and they tend to work best optimised for one designed speed and altitude (point designs). However, ramjets generally outperform gas turbine-based jet engine designs and work best at supersonic speeds (Mach 2–4). Although inefficient at slower speeds, they are more fuel-efficient than rockets over their entire useful working range up to at least .
The performance of conventional ramjets falls off above Mach 6 due to dissociation and pressure loss caused by shock as the incoming air is slowed to subsonic velocities for combustion. In addition, the combustion chamber's inlet temperature increases to very high values, approaching the dissociation limit at some limiting Mach number.
An air turboramjet has a compressor powered by a gas heated via a heat exchanger within the combustion chamber.
Ramjets always slow the incoming air to a subsonic velocity within the combustor. Scramjets are similar to ramjets, but the airflow remains supersonic throughout the engine. This increases the stagnation pressure recovered from the freestream and improves net thrust. Thermal choking of the exhaust is avoided by having a relatively high supersonic air velocity at combustor entry. Fuel injection is often into a sheltered region below a step in the combustor wall. Although scramjet engines have been studied for many decades, only recently have small experimental units been flight tested and then only very briefly (e.g. the Boeing X-43).
As of May 2010, this engine had been flight-tested for 200 seconds on the X-51A Waverider.
A variant of the pure ramjet is the 'combined cycle' engine, intended to overcome the limitations of the pure ramjet. One example of this is the SABRE engine; this uses a precooler, behind which is the ramjet and turbine machinery.
The ATREX engine developed in Japan is an experimental implementation of this concept. It uses liquid hydrogen fuel in a fairly exotic single-fan arrangement. The liquid hydrogen fuel is pumped through a heat exchanger in the air intake, simultaneously heating the liquid hydrogen and cooling the incoming air. This cooling of the incoming air is critical to achieving a reasonable efficiency. The hydrogen then continues through a second heat exchanger positioned after the combustion section, where the hot exhaust is used to further heat the hydrogen, turning it into a very high pressure gas. This gas is then passed through the tips of the fan to provide driving power to the fan at subsonic speeds. After mixing with the air, it is burned in the combustion chamber.
The Reaction Engines Scimitar has been proposed for the LAPCAT hypersonic airliner, and the Reaction Engines SABRE for the Reaction Engines Skylon spaceplane.
During the Cold War, the United States designed and ground-tested a nuclear-powered ramjet called Project Pluto. This system, intended for use in a cruise missile, used no combustion; a high-temperature, unshielded nuclear reactor heated the air instead. The ramjet was predicted to be able to fly at supersonic speeds for months. Because the reactor was unshielded, it was dangerous to anyone in or around the flight path of the low-flying vehicle (although the exhaust itself wasn't radioactive). The project was ultimately cancelled because ICBMs seemed to serve the purpose better.
The upper atmosphere contains monatomic oxygen produced by the sun through photochemistry. NASA created a concept for recombining this thin gas back into diatomic molecules at orbital speeds to power a ramjet.
The Bussard ramjet is a spacecraft propulsion concept intended to collect and fuse diffuse interstellar hydrogen and exhaust it at high speed from the rear of the vehicle.
An afterburning turbojet or bypass engine can be described as transitioning from turbo to ramjet mode if it can attain a flight speed at which the engine pressure ratio (epr) has fallen to one. The turbo afterburner then acts as a ramburner. The intake ram pressure is present at entry to the afterburner but is no longer augmented with a pressure rise from the turbomachinery. Further increase in speed introduces a pressure loss due to the presence of the turbomachinery as the epr drops below one.
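A toy bookkeeping of stagnation-pressure ratios makes the transition concrete. The component values below are invented for illustration (they are not data for any real engine, though they are chosen to land near an epr just below one):

```python
def engine_pressure_ratio(compressor_pr: float, burner_pr: float,
                          turbine_pr: float) -> float:
    """Overall stagnation-pressure ratio across the gas generator
    (compressor x burner x turbine). Below 1.0, the turbomachinery is a
    net pressure loss and the afterburner operates as a ramburner."""
    return compressor_pr * burner_pr * turbine_pr

# At high flight Mach the compressor runs at low corrected speed, so its
# pressure rise shrinks while burner and turbine pressure losses remain:
epr = engine_pressure_ratio(compressor_pr=1.2, burner_pr=0.95, turbine_pr=0.79)
print(f"epr = {epr:.2f}")  # below one: intake ram pressure does all the work
```

With these illustrative values the product comes out at about 0.90, i.e. the gas generator subtracts pressure and the afterburner is fed essentially by intake ram compression alone.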
A notable example was the propulsion system for the Lockheed SR-71 Blackbird, with an epr of 0.9 at Mach 3.2. The thrust required to reach this speed, in terms of airflow and exhaust temperature, came from a standard method for increasing airflow through a compressor running at low corrected speeds (compressor bleed), and from being able to increase the afterburner temperature because the duct and nozzle were cooled with air taken from the compressor rather than the usual, much hotter, turbine exhaust gas. | https://en.wikipedia.org/wiki?curid=26321 |
Ranma ½
"Ranma ½" has a comedic formula and a sex-changing main character, who often willfully transforms into a girl to advance his goals. The series also contains many other characters, whose intricate relationships with each other, unusual characteristics, and eccentric personalities drive most of the stories. Although the characters and their relationships are complicated, they rarely change once they are firmly introduced and settled into the series.
The manga has been adapted into two anime series created by Studio Deen: "Ranma ½" and , which together were broadcast on Fuji Television from 1989 to 1992. In addition, they developed 12 original video animations and three films. In 2011, a live-action television special was produced and aired on Nippon Television. The manga and anime series were licensed by Viz Media for English-language releases in North America. Madman Entertainment released the manga, part of the anime series and the first two movies in Australasia, while MVM Films released the first two movies in the United Kingdom. The "Ranma ½" manga has over 53 million copies in print in Japan. Both the manga and anime are cited as some of the first of their mediums to have become popular in the United States.
On a training journey in the Bayankala Mountain Range in the Qinghai Province of China, Ranma Saotome and his father Genma fall into the cursed springs at . When someone falls into a cursed spring, they take the physical form of whatever drowned there hundreds or thousands of years ago whenever they come into contact with cold water; exposure to hot water reverses the transformation, but only until the next contact with cold water. Genma fell into the spring of a drowned panda, while Ranma fell into the spring of a drowned girl.
Soun Tendo is a fellow practitioner of the "Anything-Goes School" of martial arts and owner of a dojo. Genma and Soun agreed years ago that their children would marry and carry on the Tendo Dojo. Soun has three teenaged daughters: the polite and easygoing Kasumi, the greedy and indifferent Nabiki, and the short-tempered, martial arts practicing Akane. Akane, who is Ranma's age, is volunteered for the engagement by her sisters, who reason that as the elder siblings they can pass the duty to her, and that since all three dislike the arranged engagement, Akane's open dislike of men is the best way to convey that to the fathers. At the appointed time they are surprised when a panda comes in and puts a girl in front of their father, to the laughter of the Tendo girls; it takes several more pages for the situation to be explained to Soun and his daughters. Both Ranma and Akane initially refuse the engagement, having not been consulted on the decision, but the fathers are insistent, and the two are generally treated as betrothed, ending up helping or saving each other on some occasions. They are frequently found in each other's company and are constantly arguing in the trademark awkward love-hate manner that is a focus of the franchise.
Ranma goes to school with Akane at , where he meets his recurring opponent Tatewaki Kuno, the conceited kendo team captain who aggressively pursues Akane but also falls in love with Ranma's female form, without ever discovering his curse (despite most other characters eventually knowing it). Nerima serves as a backdrop for more martial arts mayhem with the introduction of Ranma's regular rivals, such as the eternally lost Ryoga Hibiki, who once traveled halfway across Japan just getting from the front of his house to the back, where Ranma spent three days waiting for him. Ryoga, seeking revenge on Ranma, followed him to Jusenkyo, where he ultimately fell into the Spring of the Drowned Piglet; now, when splashed with cold water, he takes the form of a little black pig. Not knowing this, Akane takes the piglet as a pet and names it P-chan, but Ranma knows, and hates Ryoga for keeping the secret and taking advantage of the situation. Another rival is the nearsighted Mousse, who also fell into a cursed spring and becomes a duck when he gets wet, and finally there is Genma and Soun's impish grandmaster, Happosai, who spends his time stealing the underwear of schoolgirls.
Ranma's prospective paramours include the martial arts rhythmic gymnastics champion Kodachi Kuno, and his second fiancée and childhood friend Ukyo Kuonji the okonomiyaki vendor, along with the Chinese Amazon Shampoo, supported by her great-grandmother Cologne. As the series progresses, the school becomes more eccentric with the return of the demented, Hawaii-obsessed Principal Kuno and the placement of the power-leeching alternating child/adult Hinako Ninomiya as Ranma's English teacher. Ranma's indecision to choose his true love causes chaos in his romantic and school life.
Rumiko Takahashi stated that "Ranma ½" was conceived to be a martial arts manga that connects all aspects of everyday life to martial arts. Because her previous series had female protagonists, the author decided that she wanted a male this time. However, she was worried about writing a male main character, and therefore decided to make him half-female. Before deciding on water for initiating his changes, she considered Ranma changing every time he was punched. It was after deciding this that she felt Jusenkyo had to be set in China, as it is the only place that could have such mysterious springs. She drew inspiration for "Ranma ½" from a variety of real-world objects. Some of the places frequently seen in the series are modeled after actual locations in Nerima, Tokyo (both the home of Takahashi and the setting of "Ranma ½").
In a 1990 interview with "Amazing Heroes", Takahashi stated that she had four assistants that draw the backgrounds, panel lines and tone, while she creates the story and layout, and pencils and inks the characters. All her assistants are female; Takahashi stated that "I don't use male assistants so that the girls will work more seriously if they aren't worried about boys." In 1992, she explained her process as beginning with laying out the chapter in the evening so as to finish it by dawn, and resting for a day before calling her assistants. They finish it in two or three nights, usually utilizing five days for a chapter.
Takahashi purposefully aimed the series to be popular with women and children. In 1993, an "Animerica" interviewer talking with Takahashi asked her if she intended the sex-changing theme "as an effort to enlighten a male-dominated society." Takahashi said that she does not think in terms of societal agendas and that she created the "Ranma ½" concept from simply wanting "a simple, fun idea". She added that she, as a woman and while recalling what manga she liked to read as a child, felt that "humans turning into animals might also be fun and ... you know, like a fairy tale." In 2013, she revealed that at the start of "Ranma" her editor told her to make it more dramatic, but she felt that was something she could not do. However, she admitted that drama did start to appear at the end. She also sat in on the voice actor auditions for the anime, where she insisted that male and female Ranma be voiced by different actors whose gender corresponded to that of the part.
Written and illustrated by Rumiko Takahashi, "Ranma ½" began publication in "Weekly Shōnen Sunday" issue #36 published on August 19, 1987, following the ending of her series "Urusei Yatsura". From August 1987 until March 1996, the manga was published on a near weekly basis with the occasional colored page to spruce up the usually black and white stories. After nearly a decade of storylines, the final chapter was published in "Weekly Shōnen Sunday" issue #12 on March 6, 1996. The 407 chapters were periodically collected and published by Shogakukan into a total of 38 black and white "tankōbon" volumes from 1988 to 1996. They were reassembled into 38 "shinsōban" from April 2002 to October 2003.
North American publisher Viz Media originally released "Ranma ½" in a monthly comic book format containing two chapters each from 1992 to 2003, with the images "flipped" to read left-to-right, causing the art to be mirrored. These were periodically collected into graphic novels. On March 18, 2004, after releasing 21 volumes, Viz announced that it would reprint a number of its graphic novels. The content remained the same, but the novels moved to a smaller format with different covers and a price drop. Each volume covers roughly the same amount of material as the Japanese volumes, but retains the left-to-right format and has minor differences in grouping, so that the series spans 36 volumes rather than the original 38. The final volume was released in stores on November 14, 2006, making "Ranma ½" Viz's longest-running manga, spanning over 13 years. At Anime Expo on July 7, 2013, Viz Media announced a re-release of the manga in a format that combines two individual volumes into a single large one and restores the original right-to-left reading order (a first in North America for this series). The first 2-in-1 book (volumes 1–2) was published on March 11, 2014; the final (volumes 35–36) in January 2017. Madman Entertainment publishes the two-in-one version in Australasia.
Together with "Spriggan", it was the first manga published in Portugal, by Texto Editora in 1995.
An anime television series was created by Studio Deen and aired weekly between April 15, 1989, and September 16, 1989, on Fuji TV for 18 episodes, before being canceled due to low ratings. The series was then reworked by most of the same staff, retitled, and launched in a different time slot, running for 143 episodes from October 20, 1989, to September 25, 1992. The anime stays true to the original manga but differs by keeping Ranma's sex transformation a secret from the high school students, at least throughout most of its length. It also does not introduce Hikaru Gosunkugi until very late in the series; instead, Sasuke Sarugakure, the diminutive ninja retainer of the Kuno family, fills a number of Gosunkugi's roles in early storylines and is a major character in his own right. The anime also alters the placement of many story arcs and contains numerous original episodes and characters not adapted from the manga.
Viz Media licensed both anime series in 1993, making "Ranma ½" one of the very first anime titles licensed by Viz. The English dub produced for the series was recorded by The Ocean Group in Vancouver, British Columbia. They released the series on VHS from their own "Viz Video" label, and on DVD a few years later in association with Pioneer Home Entertainment. Their releases collected both anime series as one, separated episodes into what they call "seasons", and changed the ordering of many of the episodes. Viz themselves re-released it on DVD in 2007 using their own DVD production company. At Otakon 2013, Viz announced that they had re-acquired the TV series for Blu-ray and DVD release in 2014. The show has been streamed on their anime channel service Neon Alley since autumn 2013. Madman Entertainment licensed some of the series for release in Australasia, although their rights expired after releasing only the first four "seasons" as one series.
Studio Deen also created three theatrical films; "The Battle of Nekonron, China! A Battle to Defy the Rules!" on November 2, 1991; "Battle at Togenkyo! Get Back the Brides" on August 1, 1992; and "Super Indiscriminate Decisive Battle! Team Ranma vs. the Legendary Phoenix" on August 20, 1994. The first two movies are feature length, but the third was originally shown in theaters with two other movies: "Ghost Sweeper Mikami" and "Heisei Dog Stories: Bow".
Following the ending of the TV series, 11 original video animations were released directly to home video, the earliest on December 7, 1993, and the eleventh on June 4, 1996. All but one are based on stories originally in the manga. Twelve years later, a "Ranma" animation was created for the "It's a Rumic World" exhibition of Rumiko Takahashi's artwork. Based on the "Nightmare! Incense of Deep Sleep" manga story from volume 34, it was shown on odd numbered days at the exhibition in Tokyo from July 30 to August 11, 2008. But it was not released until January 29, 2010, when it was put in a DVD box set with the "Urusei Yatsura" and "Inuyasha" specials that premiered at the same exhibit. It was then released on DVD and Blu-ray by itself on October 20, 2010. Viz Media also licensed all three movies, and the original 11 OVAs for distribution in North America (however they released the third movie as an OVA). MVM Films has released the first two movies in the United Kingdom, while Madman Entertainment released them in Australasia.
There have been fifteen video games based on the "Ranma ½" franchise. While most are fighting games, there have been several RPGs and puzzle games. Only two have been released in Western countries. "Ranma ½: Chōnai Gekitōhen" was released in the US as "Street Combat"; the characters were Americanized, having their appearances completely changed, and the music was changed as well. However, "" was released in both North America and Europe unaltered.
A live-action television adaptation of "Ranma ½" aired on Nippon TV, in a two-hour time-slot, on December 9, 2011. Although it was initially reported that the special would contain an original story, the movie takes its main plot from one of the manga's early stories, with several other early scenes mixed in. The special stars Yui Aragaki as Akane, with Kento Kaku and Natsuna Watanabe playing male and female Ranma respectively. Ryōsei Tayama was cast as the antagonist, the new original character Okamada. The all-girl pop group 9nine contributed "Chikutaku☆2Nite" as the theme song. It was released on both DVD and Blu-ray on March 21, 2012.
"The Ranma ½ Memorial Book" was published just as the manga ended in 1996. Acting as an end-cap to the series, it collects various illustrations from the series, features an interview with Takahashi, and includes tidbits about Ranma: summaries of his battles, his daily schedule, trivia, and a few exclusive illustrations.
A "Movie + OVA Visual Comic" was released to illustrate the theatrical film "Super Indiscriminate Decisive Battle! Team Ranma vs. the Legendary Phoenix" and the OVA episodes "The One to Carry On" (both parts). It also included information on the voice actors, character designs, and a layout of the Tendo dojo.
Additionally, guidebooks were released for three of the "Ranma ½" video games; these included not only strategies, but also interviews. Two books including interviews with the cast of the live-action TV drama, and some select stories, were released in 2011.
The music from the "Ranma ½" TV series, films, and OVAs has been released on various CDs: four from the TV series, two from the first movie, one from the second, one from the third movie and the OVAs, and three compiling the music by DoCo used in the OVAs. DoCo is a pop group composed of the voice actresses of the anime's main female characters. Several compilation albums were also released, some composed of the opening and closing theme songs and others of image songs. Many of the image songs were first released as singles.
Rumiko Takahashi said that after "Urusei Yatsura", which was popular with high school and college students, she purposefully aimed "Ranma ½" to be popular among women and children. Both series' peak readership figures were with 15-year-olds, but the distribution of "Ranma ½" readers was skewed towards younger females. By November 2006, it was reported that the series had sold over 49 million manga volumes in Japan; Shogakukan had printed 53 million copies as of November 2011. Although Lum from Takahashi's first series "Urusei Yatsura" is often cited as the first "tsundere" character in anime and manga, Theron Martin of Anime News Network stated that "Ranma ½"'s Akane Tendo is closer to how the type would later typically be portrayed in the 2000s. He also suggested that one could argue "Ranma" is an early example of a harem or reverse harem series, due to the main character attracting suitors of both genders. The series' publication in North America proved highly successful as well, being many Americans' first introduction to manga, and its anime adaptation was one of the first Japanese animation shows to achieve popularity in the US. Western comic book artists who have cited "Ranma ½" as an influence include Canadian Bryan Lee O'Malley, on his series "Scott Pilgrim", and American Colleen Coover, on her erotic series "Small Favors". Matt Bozon, creator of the "Shantae" video game series, cited "Ranma ½" as a big influence on his work; the title of the fourth game, "", is also a tribute to the series.
In an overview of the series, Jason Thompson called "Ranma ½" "the direct ancestor of all comedy-action manga, like "Sumomomo Momomo" and "History's Strongest Disciple Kenichi"", though he noted that it was not the first, merely the one that spanned the period when manga and anime sales were at their height. Relating it to Takahashi's other works, he summed the series up as: "At the start, the fighting is minimal and it's almost a semi-serious relationship comedy, like "Maison Ikkoku"; then it turns completely ridiculous; and by the climax, when Ranma fights the evil bird-people of Phoenix Mountain in an excessively long and un-funny shonen fight scene, it's like a warmup for "Inuyasha"." He states that "Eventually Takahashi adds too many characters, and the manga starts repeating itself. Because of the lack of a strong story arc, a lot of people stop reading "Ranma ½" at some point in the middle". Reviewing Viz Media's final English volume of the manga, Anime News Network remarked that "Every dimension of Rumiko Takahashi's storytelling skills come into play here: comedy, romance and introspection, and of course, high-flying fantasy martial-arts action." However, they felt some of the action scenes were hard to follow and noted that the mirroring to left-to-right format caused errors with the art.
The "Ranma ½" anime was ranked number 17 on "Anime Insider"'s 2001 list of the Top 50 Anime, although the list was limited to series that were released in North America. It ranked 36th on TV Asahi's 2006 list of Japan's 100 favorite animated TV series, which is based on an online poll of the Japanese people, up from the previous year's list where it ranked 45th. In November 2006, the New York Comic Con announced that it would host the first-ever American Anime Awards. Fans had the chance to vote for their favorite anime online during the month of January 2007. Only the five nominees receiving the most votes for each category were announced on February 5. Among the 12 different categories, "Ranma ½" was voted into the "Best Comedy Anime" category, and the "Ranma ½" OVAs were voted into the "Best Short Series" category. In their review of Viz Media's season five DVD box set, Anime News Network praised the Japanese cast's performance and the animation, but criticized the English version's slight script changes and minor voice actors while praising its main cast. They also remarked that while "Ranma ½" is a classic, after a hundred episodes the same jokes are just not funny anymore. THEM Anime Reviews' Raphael See called the television series and the OVAs "one of the funniest things [he's] ever seen, anime or otherwise" and also praised the English dub as some of the best. However, he was much more critical of the first two movies, particularly for both using the same damsel-in-distress plot. Mike Toole of ANN included "Big Trouble in Nekonron, China" at number 83 on The Other 100 Best Anime Movies of All Time, a list of "lesser-known, lesser-loved classics," calling it "a solid action-comedy and a good, well-rounded example of the appeal of "Ranma ½"". | https://en.wikipedia.org/wiki?curid=26324 |
Royal Australian Navy
The Royal Australian Navy (RAN) is the naval branch of the Australian Defence Force. Following the Federation of Australia in 1901, the ships and resources of the separate colonial navies were integrated into a national force, called the Commonwealth Naval Forces. Originally intended for local defence, the navy was granted the title of 'Royal Australian Navy' in 1911, and became increasingly responsible for defence of the region.
The British Royal Navy's Australian Squadron was assigned to the Australia Station and provided support to the RAN. The Australian and New Zealand governments helped to fund the squadron until 1913, while the Admiralty committed itself to keeping it at a constant strength. The Australian Squadron ceased to exist on 4 October 1913, when RAN ships entered Sydney Harbour for the first time.
The Royal Navy continued to provide blue-water defence capability in the Pacific up to the early years of the Second World War. Then, rapid wartime expansion saw the acquisition of large surface vessels and the building of many smaller warships. In the decade following the war, the RAN acquired a small number of aircraft carriers, the last of which was decommissioned in 1982.
Today, the RAN consists of 46 commissioned vessels, 3 non-commissioned vessels and over 16,000 personnel. The navy is one of the largest and most sophisticated naval forces in the South Pacific region, with a significant presence in the Indian Ocean and worldwide operations in support of military campaigns and peacekeeping missions. The current Chief of Navy is Vice Admiral Michael Noonan.
The Commonwealth Naval Forces were established on 1 March 1901, two months after the federation of Australia, when the naval forces of the separate Australian colonies were amalgamated. A period of uncertainty followed as the policy makers sought to determine the newly established force's requirements and purpose, with the debate focusing upon whether Australia's naval force would be structured mainly for local defence or whether it would be designed to serve as a fleet unit within a larger imperial force, controlled centrally by the British Admiralty. In 1908–09, the decision was made to pursue a compromise solution, and the Australian government agreed to establish a force that would be used for local defence but which would be capable of forming a fleet unit within the imperial naval strategy, albeit without central control. As a result, the navy's force structure was set at "one battlecruiser, three light cruisers, six destroyers and three submarines".
On 10 July 1911, King George V granted the service the title of "Royal Australian Navy". The first of the RAN's new vessels, the destroyer "Yarra", was completed in September 1910, and by the outbreak of the First World War the majority of the RAN's planned new fleet had been realised. The Australian Squadron was placed under control of the British Admiralty, and initially it was tasked with capturing many of Germany's South Pacific colonies and protecting Australian shipping from the German East Asia Squadron. Later in the war, most of the RAN's major ships operated as part of Royal Navy forces in the Mediterranean and North Seas, then in the Adriatic, and finally in the Black Sea following the surrender of the Ottoman Empire.
In 1919, the RAN received a force of six destroyers, three sloops and six submarines from the Royal Navy, but throughout the 1920s and early 1930s, the RAN was drastically reduced in size due to a variety of factors including political apathy and economic hardship as a result of the Great Depression. In this time the focus of Australia's naval policy shifted from defence against invasion to trade protection, and several fleet units were sunk as targets or scrapped. By 1923, the size of the navy had fallen to eight vessels, and by the end of the decade it had fallen further to five, with just 3,500 personnel. In the late 1930s, as international tensions increased, the RAN was modernised and expanded, with the service receiving primacy of funding over the Army and Air Force during this time as Australia began to prepare for war.
Early in the Second World War, RAN ships again operated as part of Royal Navy formations, many serving with distinction in the Mediterranean, the Red Sea, the Persian Gulf, the Indian Ocean, and off the West African coast. Following the outbreak of the Pacific War and the virtual destruction of Allied naval forces in south-east Asia, the RAN operated more independently, or as part of United States Navy formations. As the navy took on an even greater role, it was expanded significantly and at its height the RAN was the fourth-largest navy in the world, with 39,650 personnel operating 337 warships. A total of 34 vessels were lost during the war, including three cruisers and four destroyers.
After the Second World War, the size of the RAN was again reduced, but it gained new capabilities with the acquisition of two aircraft carriers, "Sydney" and "Melbourne". The RAN saw action in many Cold War–era conflicts in the Asia-Pacific region and operated alongside the Royal Navy and United States Navy off Korea, Malaysia, and Vietnam. Since the end of the Cold War, the RAN has been part of Coalition forces in the Persian Gulf and Indian Ocean, operating in support of Operation Slipper and undertaking counter piracy operations. It was also deployed in support of Australian peacekeeping operations in East Timor and the Solomon Islands.
The high demand for personnel in the Second World War led to the establishment of the Women's Royal Australian Naval Service (WRANS) in 1942, in which over 3,000 women served in shore-based positions. The WRANS was disbanded in 1947 but re-established in 1951 during the Cold War, and was given permanent status in 1959. The RAN was the last branch of the Australian military to integrate women, doing so in 1985.
The strategic command structure of the RAN was overhauled during the New Generation Navy changes. The RAN is commanded through Naval Headquarters (NHQ) in Canberra. The professional head is the Chief of Navy (CN), who holds the rank of vice admiral. NHQ is responsible for implementing policy decisions handed down from the Department of Defence and for overseeing tactical and operational issues that are the purview of the subordinate commands.
Beneath NHQ are two subordinate commands:
Fleet Command was previously made up of seven Force Element Groups, but after the New Generation Navy changes, this was restructured into four Force Commands:
As of October 2018, the RAN fleet consisted of 48 warships, including destroyers, frigates, submarines, patrol boats and auxiliary ships. Ships commissioned into the RAN are given the prefix HMAS (His/Her Majesty's Australian Ship).
The RAN has two primary bases for its fleet:
In addition, three other bases are home to the majority of the RAN's minor war vessels:
The RAN currently operates 46 commissioned vessels, made up of eight ship classes and three individual ships, plus three non-commissioned vessels. In addition, DMS Maritime operates a large number of civilian-crewed vessels under contract to the Australian Defence Force.
The Fleet Air Arm (previously known as the Australian Navy Aviation Group) provides the RAN's aviation capability. As of 2018, the FAA consists of two front line helicopter squadrons (one focused on anti-submarine and anti-shipping warfare and the other a transport unit), two training squadrons and a trials squadron.
In addition to the helicopter squadrons of the Fleet Air Arm, the RAN operated an additional flying unit that came under the operational responsibility of the Australian Hydrographic Service. The Laser Airborne Depth Sounder (LADS) Flight was the sole remaining fixed-wing aircraft operated by the RAN, and was based at in Cairns, Queensland. The final LADS flight was conducted in November 2019. The capability will be replaced by commercial hydrographic companies through the HydroScheme Industry Partnership Program (HIPP).
The Clearance Diving Branch is composed of two "Clearance Diving Teams" (CDT) that serve as parent units for naval clearance divers:
When clearance divers are sent into combat, Clearance Diving Team Three (AUSCDT THREE) is formed.
The CDTs have two primary roles:
There are currently several major projects underway that will see upgrades to RAN capabilities:
The RAN currently has forces deployed on four major operations:
As of June 2011, the RAN had 14,215 permanent full-time personnel, 161 gap year personnel, and 2,150 reserve personnel. The permanent full-time force consisted of 3,357 commissioned officers and 10,697 enlisted personnel. In June 2010, male personnel made up 82% of the permanent full-time force, while female personnel made up 18%. The RAN has the highest percentage of women in the ADF, compared to the RAAF's 17.8% and the Army's 9.7%.
The following are the current senior Royal Australian Navy officers:
The uniforms of the Royal Australian Navy are very similar in cut, colour and insignia to their British Royal Navy forerunners. However, beginning with the Second World War, all RAN personnel began wearing shoulder flashes reading "Australia", a practice continuing today. These are cloth arcs at shoulder height on uniforms, metallic gold on officers' shoulder boards, and embroidered on shoulder slip-ons.
Commissioned officers of the Australian Navy have pay grades ranging from S-1 to O-11. The only O-11 position in the navy is honorary and has only ever been held by royalty, currently being held by The Duke of Edinburgh. The highest position occupied in the current Royal Australian Navy structure is O-9, a vice admiral who serves as the Chief of the Navy. O-8 (rear admiral) to O-11 (admiral of the fleet) are referred to as flag officers, O-5 (commander) and above are referred to as senior officers, while S-1 (midshipman) to O-4 (lieutenant commander) are referred to as junior officers. All officers of the navy receive a commission from Her Majesty Queen Elizabeth II, Queen of Australia. The commissioning scroll issued in recognition of the commission is signed by the Governor General of Australia as Commander-in-Chief and the serving Minister for Defence.
Naval officers are trained at the Royal Australian Naval College (HMAS "Creswell") in Jervis Bay and the Australian Defence Force Academy in Canberra.
Chaplains in the Royal Australian Navy are commissioned officers who complete the same training as other officers in the RAN at the Royal Australian Naval College, HMAS Creswell. RAN regulations group RAN chaplains with commanders for purposes of protocol such as marks of respect (saluting); however, RAN chaplains have no other rank other than "chaplain", and their rank emblem is identifiable by a Maltese cross with gold anchor. Senior chaplains are grouped with captains, and principal chaplains are grouped with commodores, but their chaplain rank slide remains the same. Principal chaplains, however, have gold braid on the peak of their white service cap.
Royal Australian Navy Other Ranks wear "right arm rates" insignia, called "Category Insignia", to indicate speciality training qualifications. The usage mirrors that of the Royal Navy, and has done so since the RAN's formation. Stars or a crown are added to these to indicate higher qualifications.
The Warrant Officer of the Navy (WO-N) is an appointment held by the most senior sailor in the RAN, who holds the rank of warrant officer (WO). However, the WO-N does not wear the WO rank insignia; instead, they wear the special insignia of the appointment. The WO-N appointment has equivalent appointments in the other services, each held by the most senior sailor/soldier/airman in that service at the rank of warrant officer, and each wearing their own special insignia rather than their rank insignia. The Australian Army equivalent is the Regimental Sergeant Major of the Army (RSM-A) and the Royal Australian Air Force equivalent is the Warrant Officer of the Air Force (WOFF-AF).
Royal Australian Air Force
The Royal Australian Air Force (RAAF), formed in March 1921, is the aerial warfare branch of the Australian Defence Force (ADF). It operates the majority of the ADF's fixed wing aircraft, although both the Australian Army and Royal Australian Navy also operate aircraft in various roles. It directly continues the traditions of the Australian Flying Corps (AFC), formed on 22 October 1912. The RAAF provides support across a spectrum of operations such as air superiority, precision strikes, intelligence, surveillance and reconnaissance, air mobility, space surveillance, and humanitarian support.
The RAAF took part in many of the 20th century's major conflicts. During the early years of the Second World War a number of RAAF bomber, fighter, reconnaissance and other squadrons served in Britain, and with the Desert Air Force located in North Africa and the Mediterranean. From 1942, many RAAF units were formed in Australia, and fought in the South West Pacific Area. Thousands of Australians also served with other Commonwealth air forces in Europe, including during the bomber offensive against Germany. By the time the war ended, a total of 216,900 men and women had served in the RAAF, of whom 10,562 were killed in action.
Later the RAAF served in the Berlin Airlift, Korean War, Malayan Emergency, Indonesia–Malaysia Confrontation and Vietnam War. More recently, the RAAF has participated in operations in East Timor, the Iraq War, the War in Afghanistan, and the military intervention against the Islamic State of Iraq and the Levant (ISIL).
The RAAF has 259 aircraft, of which 110 are combat aircraft.
The RAAF traces its history back to the Imperial Conference held in London in 1911, where it was decided aviation should be developed within the armed forces of the British Empire. Australia implemented this decision, the first dominion to do so, by approving the establishment of the "Australian Aviation Corps". This initially consisted of the Central Flying School at Point Cook, Victoria, opening on 22 October 1912. By 1914 the corps was known as the "Australian Flying Corps".
Soon after the outbreak of war in 1914, the Australian Flying Corps sent aircraft to assist in capturing German colonies in what is now north-east New Guinea. However, these colonies surrendered quickly, before the planes were even unpacked. The first operational flights did not occur until 27 May 1915, when the Mesopotamian Half Flight was called upon to assist the Indian Army in protecting British oil interests in what is now Iraq.
The corps later saw action in Egypt, Palestine and on the Western Front throughout the remainder of the First World War. By the end of the war, four squadrons—Nos. 1, 2, 3 and 4—had seen operational service, while another four training squadrons—Nos. 5, 6, 7 and 8—had also been established. A total of 460 officers and 2,234 other ranks served in the AFC, whilst another 200 men served as aircrew in the British flying services. Casualties included 175 dead, 111 wounded, 6 gassed and 40 captured.
The Australian Flying Corps remained part of the Australian Army until 1919, when it was disbanded along with the First Australian Imperial Force (AIF). Although the Central Flying School continued to operate at Point Cook, military flying virtually ceased until 1920, when the Australian Air Corps (AAC) was formed. The Australian Air Force was formed on 31 March 1921. King George V approved the prefix "Royal" in June 1921, and it became effective on 31 August 1921. The RAAF thus became the second Royal air arm to be formed in the British Commonwealth, following the British Royal Air Force. When formed, the RAAF had more aircraft than personnel, with 21 officers and 128 other ranks and 153 aircraft.
In September 1939, the Australian Air Board directly controlled the Air Force via RAAF Station Laverton, RAAF Station Richmond, RAAF Station Pearce, No. 1 Flying Training School RAAF at Point Cook, RAAF Station Rathmines and five smaller units.
In 1939, just after the outbreak of the Second World War, Australia joined the Empire Air Training Scheme, under which flight crews received basic training in Australia before travelling to Canada for advanced training. A total of 17 RAAF bomber, fighter, reconnaissance and other squadrons served initially in Britain and with the Desert Air Force located in North Africa and the Mediterranean. Thousands of Australians also served with other Commonwealth air forces in Europe during the Second World War. About nine percent of the personnel who served under British RAF commands in Europe and the Mediterranean were RAAF personnel.
With British manufacturing targeted by the German Luftwaffe, in 1941 the Australian government created the Department of Aircraft Production (DAP; later known as the Government Aircraft Factories) to supply Commonwealth air forces, and the RAAF was eventually provided with large numbers of locally built versions of British designs such as the DAP Beaufort torpedo bomber, Beaufighters and Mosquitos, as well as other types such as Wirraways, Boomerangs, and Mustangs.
In the European theatre of the war, RAAF personnel were especially notable in RAF Bomber Command: although they represented just two percent of all Australian enlistments during the war, they accounted for almost twenty percent of those killed in action. This statistic is further illustrated by the fact that No. 460 Squadron RAAF, mostly flying Avro Lancasters, had an official establishment of about 200 aircrew and yet had 1,018 combat deaths. The squadron was therefore effectively wiped out five times over. Total RAAF casualties in Europe were 5,488 killed or missing.
The beginning of the Pacific War—and the rapid advance of Japanese forces—threatened the Australian mainland for the first time in its history. The RAAF was quite unprepared for the emergency, and initially had negligible forces available for service in the Pacific. In 1941 and early 1942, many RAAF airmen, including Nos. 1, 8, 21 and 453 Squadrons, saw action with the RAF Far East Command in the Malayan, Singapore and Dutch East Indies campaigns. Equipped with aircraft such as the Brewster Buffalo, and Lockheed Hudsons, the Australian squadrons suffered heavily against Japanese Zeros.
During the fighting for Rabaul in early 1942, No. 24 Squadron RAAF fought a brief, but ultimately futile defence as the Japanese advanced south towards Australia. The devastating air raids on Darwin on 19 February 1942 increased concerns about the direct threat facing Australia. In response, some RAAF squadrons were transferred from the northern hemisphere—although a substantial number remained there until the end of the war. Shortages of fighter and ground attack planes led to the acquisition of US-built Curtiss P-40 Kittyhawks and the rapid design and manufacture of the first Australian fighter, the CAC Boomerang. RAAF Kittyhawks came to play a crucial role in the New Guinea and Solomon Islands campaigns, especially in operations like the Battle of Milne Bay. As a response to a possible Japanese chemical warfare threat the RAAF imported hundreds of thousands of chemical weapons into Australia.
In the Battle of the Bismarck Sea, imported Bristol Beaufighters proved to be highly effective ground attack and maritime strike aircraft. Beaufighters were later made locally by the DAP from 1944. Although it was much bigger than Japanese fighters, the Beaufighter had the speed to outrun them. The RAAF operated a number of Consolidated PBY Catalinas as long-range bombers and scouts. The RAAF's heavy bomber force was predominantly made up of 287 B-24 Liberators, equipping seven squadrons, which could bomb Japanese targets as far away as Borneo and the Philippines from airfields in Australia and New Guinea. By late 1945, the RAAF had received or ordered about 500 P-51 Mustangs for fighter/ground attack purposes. The Commonwealth Aircraft Corporation initially assembled US-made Mustangs, but later manufactured most of those used.
By mid-1945, the RAAF's main operational formation in the Pacific, the First Tactical Air Force (1st TAF), consisted of over 21,000 personnel, while the RAAF as a whole consisted of about 50 squadrons and 6,000 aircraft, of which over 3,000 were operational. The 1st TAF's final campaigns were fought in support of Australian ground forces in Borneo, but had the war continued some of its personnel and equipment would likely have been allocated to the invasion of the Japanese mainland, along with some of the RAAF bomber squadrons in Europe, which were to be grouped together with British and Canadian squadrons as part of the proposed Tiger Force. However, the war was brought to a sudden end by the US nuclear attacks on Japan. The RAAF's casualties in the Pacific were around 2,000 killed, wounded or captured.
By the time the war ended, a total of 216,900 men and women had served in the RAAF, of whom 10,562 were killed in action; a total of 76 squadrons were formed. With over 152,000 personnel operating nearly 6,000 aircraft, it was the world's fourth-largest air force.
During the Berlin Airlift, in 1948–49, the RAAF Squadron Berlin Air Lift aided the international effort to fly in supplies to the stricken city; two RAF Avro York aircraft were also crewed by RAAF personnel. Although a small part of the operation, the RAAF contribution was significant, flying 2,062 sorties and carrying 7,030 tons of freight and 6,964 passengers.
In the Korean War, from 1950 to 1953, North American Mustangs from No. 77 Squadron RAAF, stationed in Japan with the British Commonwealth Occupation Force, were among the first United Nations aircraft to be deployed, in ground support, combat air patrol, and escort missions. When the UN planes were confronted by North Korean Mikoyan-Gurevich MiG-15 jet fighters, 77 Sqn acquired Gloster Meteors; however, the MiGs remained superior, and the Meteors were relegated to ground support missions as the North Koreans gained experience. The air force also operated transport aircraft during the conflict. No. 77 Squadron flew 18,872 sorties, claiming the destruction of 3,700 buildings, 1,408 vehicles, 16 bridges, 98 railway carriages and an unknown number of enemy personnel. Three MiG-15s were confirmed destroyed, and two others probably destroyed. RAAF casualties included 41 killed and seven captured; 66 aircraft – 22 Mustangs and 44 Meteors – were lost.
In July 1952, No. 78 Wing RAAF was deployed to Malta in the Mediterranean where it formed part of a British force which sought to counter the Soviet Union's influence in the Middle East as part of Australia's Cold War commitments. Consisting of No. 75 and 76 Squadrons equipped with de Havilland Vampire jet fighters, the wing provided an air garrison for the island for the next two and half years, returning to Australia in late 1954.
In 1953, a Royal Air Force officer, Air Marshal Sir Donald Hardman, was brought out to Australia to become Chief of the Air Staff. He reorganised the RAAF into three commands: Home Command, Maintenance Command, and Training Command. Five years later, Home Command was renamed Operational Command, and Training Command and Maintenance Command were amalgamated to form Support Command.
In the Malayan Emergency, from 1950–60, six Avro Lincolns from No. 1 Squadron RAAF and a flight of Douglas Dakotas from No. 38 Squadron RAAF took part in operations against the communist guerrillas (labelled as "Communist Terrorists" by the British authorities) as part of the RAF Far East Air Force. The Dakotas were used on cargo runs, in troop movement and in paratroop and leaflet drops within Malaya. The Lincolns, operating from bases in Singapore and from Kuala Lumpur, formed the backbone of the air war against the CTs, conducting bombing missions against their jungle bases. Although results were often difficult to assess, they allowed the government to harass CT forces, attack their base camps when identified and keep them on the move. Later, in 1958, Canberra bombers from No. 2 Squadron RAAF were deployed to Malaya and took part in bombing missions against the CTs.
During the Vietnam War, from 1964–72, the RAAF contributed Caribou STOL transport aircraft as part of the RAAF Transport Flight Vietnam, later redesignated No. 35 Squadron RAAF, UH-1 Iroquois helicopters from No. 9 Squadron RAAF, and English Electric Canberra bombers from No. 2 Squadron RAAF. The Canberras flew 11,963 bombing sorties, and two aircraft were lost. One went missing during a bombing raid. The wreckage of the aircraft was recovered in April 2009, and the remains of Flying Officer Michael Herbert and Pilot Officer Robert Carver were found in late July 2009. The other was shot down by a surface-to-air missile, although both crew were rescued. They dropped 76,389 bombs and were credited with 786 enemy personnel confirmed killed and a further 3,390 estimated killed, 8,637 structures, 15,568 bunkers, 1,267 sampans and 74 bridges destroyed. RAAF transport aircraft also supported anti-communist ground forces. The UH-1 helicopters were used in many roles including medical evacuation and close air support. RAAF casualties in Vietnam included six killed in action, eight non-battle fatalities, 30 wounded in action and 30 injured. A small number of RAAF pilots also served in United States Air Force units, flying F-4 Phantom fighter-bombers or serving as forward air controllers.
Military airlifts were conducted for a number of purposes in the intervening decades, such as the peacekeeping operations in East Timor from 1999. Australia's combat aircraft were not used again in combat until the Iraq War in 2003, when 14 F/A-18s from No. 75 Squadron RAAF operated in the escort and ground attack roles, flying a total of 350 sorties and dropping 122 laser-guided bombs. A detachment of AP-3C Orion maritime patrol aircraft were deployed in the Middle East between 2003 and 2012. These aircraft conducted maritime surveillance patrols over the Persian Gulf and North Arabian Sea in support of Coalition warships and boarding parties, as well as conducting extensive overland flights of Iraq and Afghanistan on intelligence, surveillance and reconnaissance missions, and supporting counter-piracy operations in Somalia.
From 2007 to 2009, a detachment of No. 114 Mobile Control and Reporting Unit RAAF was on active service at Kandahar Airfield in southern Afghanistan.
Approximately 75 personnel deployed with the AN/TPS-77 radar, which was assigned responsibility for co-ordinating coalition air operations. A detachment of IAI Heron unmanned aerial vehicles has been deployed in Afghanistan since January 2010.
In late September 2014, an Air Task Group consisting of up to eight F/A-18F Super Hornets, a KC-30A Multi Role Tanker Transport, an E-7A Wedgetail Airborne Early Warning & Control aircraft and 400 personnel was deployed to Al Minhad Air Base in the United Arab Emirates as part of the coalition to combat Islamic State forces in Iraq. Operations began on 1 October. A number of C-17 and C-130J Super Hercules transport aircraft based in the Middle East have also been used to conduct airdrops of humanitarian aid and to airlift arms and munitions since August.
In June 2017 two RAAF AP-3C Orion maritime patrol aircraft were deployed to the southern Philippines in response to the Marawi crisis.
The RAAF established the Women's Auxiliary Australian Air Force (WAAAF) in March 1941, which then became the Women's Royal Australian Air Force (WRAAF) in 1951. The service merged with the RAAF in 1977; however, all women in the Australian military were barred from combat-related roles until 1990. Women have been eligible for flying roles in the RAAF since 1987, with the RAAF's first women pilots awarded their "wings" in 1988. In 2016, the remaining restrictions on women in frontline combat roles were removed, and the first two female RAAF fast jet fighter pilots graduated in December 2017.
The rank structure of the nascent RAAF was established to ensure that the service remained separate from the Army and Navy. The service's predecessors, the AFC and the AAC, had used the Army's rank structure. In November 1920 the Air Board decided that the RAAF would adopt the structure the RAF had adopted the previous year. As a result, the RAAF's rank structure came to be: Aircraftman, Leading Aircraftman, Corporal, Sergeant, Flight Sergeant, Warrant Officer, Officer Cadet, Pilot Officer, Flying Officer, Flight Lieutenant, Squadron Leader, Wing Commander, Group Captain, Air Commodore, Air Vice Marshal, Air Marshal, Air Chief Marshal, Marshal of the RAAF.
In 1922, the colour of the RAAF winter uniform was determined by Air Marshal Sir Richard Williams on a visit to the Geelong Wool Mill. He asked for one dye dip fewer than the RAN blue (three indigo dips rather than four). There was a change to a lighter blue when an all-seasons uniform was introduced in the 1970s. The original colour and style were re-adopted around 2005. Slip-on rank epaulettes, known as "Soft Rank Insignia" (SRI), displaying the word "AUSTRALIA" are worn on the shoulders of the service dress uniform. When not in the service dress or "ceremonial" uniform, RAAF personnel wear the General Purpose Uniform (GPU) as a working dress, which is a blue version of the Australian Multicam Pattern.
Originally, the air force used the red, white and blue roundel of the RAF. However, during the Second World War the inner red circle, which was visually similar to the Japanese "hinomaru", was removed after a No. 11 Squadron Catalina was mistaken for a Japanese aircraft and attacked by a Grumman Wildcat of VMF-212 of the United States Marine Corps on 27 June 1942.
After the war, a range of options for the RAAF roundel was proposed, including the Southern Cross, a boomerang, a sprig of wattle, and a red kangaroo. On 2 July 1956, the current version of the roundel was formally adopted. This consists of a white inner circle with a red kangaroo surrounded by a royal blue circle. The kangaroo faces left, except when used on aircraft or vehicles, when the kangaroo should always face forward. Low visibility versions of the roundel exist, with the white omitted and the red and blue replaced with light or dark grey.
The RAAF badge was accepted by the Chester Herald in 1939. The badge is composed of the imperial crown mounted on a circle featuring the words Royal Australian Air Force, beneath which scroll work displays the Latin motto "Per Ardua Ad Astra", which it shares with the Royal Air Force. Surmounting the badge is a wedge-tailed eagle. "Per Ardua Ad Astra" is attributed with the meaning "Through Adversity to the Stars" and is from Sir Henry Rider Haggard's novel "The People of the Mist".
As of June 2018, the RAAF had 14,313 permanent full-time personnel and 5,499 part-time active reserve personnel.
The Roulettes are the RAAF's formation aerobatic display team. They perform around Australia and South-east Asia, and are part of the RAAF Central Flying School (CFS) at RAAF Base East Sale, Victoria. The Roulettes fly the Pilatus PC-21, and displays are flown by a group of six aircraft. The pilots learn many manoeuvres, including loops, rolls, corkscrews, and ripple rolls. Most of the performances are flown at the low altitude of 500 feet (150 metres).
This list includes aircraft on order or a requirement which has been identified:
Lists:
Memorials and Museums:
Responsible government
Responsible government is a conception of a system of government that embodies the principle of parliamentary accountability, the foundation of the Westminster system of parliamentary democracy. Governments (the equivalent of the executive branch) in Westminster democracies are responsible to parliament rather than to the monarch, or, in a colonial context, to the imperial government, and in a republican context, to the president, either in full or in part. If the parliament is bicameral, then the government is responsible first to the parliament's lower house, which is more representative than the upper house, as it usually has more members and they are always directly elected.
The parliamentary accountability at the heart of responsible government manifests itself in several ways. First, ministers account to Parliament for their decisions and for the performance of their departments. This requirement to make announcements and to answer questions in Parliament means that ministers must have the privileges of the "floor", which are only granted to those who are members of either house of Parliament. Second, and most importantly, although ministers are officially appointed by the authority of the head of state and can theoretically be dismissed at the pleasure of the sovereign, they retain their office only so long as they hold the confidence of the lower house of Parliament. When the lower house has passed a motion of no confidence in the government, the government must immediately resign or submit itself to the electorate in a new general election.
Lastly, the head of state is in turn required to effectuate their executive power only through these responsible ministers. They must never attempt to set up a "shadow" government of executives or advisors and attempt to use them as instruments of government, or to rely upon their "unofficial" advice. They are bound to take no decision or action that is put into effect under the colour of their executive power without that action being as a result of the counsel and advisement of their responsible ministers. Their ministers are required to counsel them (i.e., explain to them and be sure they understand any issue that they will be called upon to decide) and to form and have recommendations for them (i.e., their advice or advisement) to choose from, which are the ministers' formal, reasoned, recommendations as to what course of action should be taken.
An exception to this is Israel, which operates under a simplified version of the Westminster system.
Responsible government was implemented in several colonies of British North America (present day Canada), between 1848 and 1850, with the executive council formulating policy with the assistance of the legislative branch, the legislature voting approval or disapproval, and the appointed governor enacting those policies that it had approved. This replaced the previous system whereby the governor took advice from an executive council, and used the legislature chiefly to raise money.
Responsible government was a major element of the gradual development of Canada towards independence. The concept of responsible government is associated in Canada more with self-government than with parliamentary accountability; hence there is the notion that the Dominion of Newfoundland "gave up responsible government" when it suspended its self-governing status in 1933, as a result of financial problems. It did not regain responsible government until it became a province of Canada in 1949.
After the formation of elected legislative assemblies starting with Nova Scotia in 1758, governors and their executive councils did not require the consent of elected legislators in order to carry out all their roles. It was only in the decades leading up to Canadian Confederation in 1867 that the governing councils of those British North American colonies became responsible to the elected representatives of the people.
In the aftermath of the American Revolution, based on the perceived shortcomings of virtual representation, the British government became more sensitive to unrest in its remaining colonies with large populations of European-descended colonists. Elected assemblies were introduced to both Upper Canada and Lower Canada with the Constitutional Act of 1791. Many reformers thought that these assemblies should have some control over the executive power, leading to political unrest between the governors and assemblies in both Upper and Lower Canada. The Lieutenant Governor of Upper Canada Sir Francis Bond Head wrote in one dispatch to London that if responsible government were implemented "Democracy, in the worst possible Form, will prevail in our Colonies."
After the Rebellions of 1837–1838 in the Canadas, Lord Durham was appointed governor general of British North America and had the task of examining the issues and determining how to defuse tensions. In his report, one of his recommendations was that colonies which were developed enough should be granted "responsible government". This term specifically meant the policy that British-appointed governors should bow to the will of elected colonial assemblies.
The first instance of responsible government in the British Empire outside of the United Kingdom itself was achieved by the colony of Nova Scotia in January–February 1848 through the efforts of Joseph Howe. Howe's push for responsible government was inspired by the work of Thomas McCulloch and Jotham Blanchard almost two decades earlier. The plaque in the Nova Scotia House of Assembly erected by the Historic Sites and Monuments Board of Canada reads:
First Responsible Government in the British Empire.
The first Executive Council chosen exclusively from the party having a majority in the representative branch of a colonial legislature was formed in Nova Scotia on 2 February 1848. Following a vote of want of confidence in the preceding Council, James Boyle Uniacke, who had moved the resolution, became Attorney General and leader of the Government. Joseph Howe, the long-time campaigner for this "Peaceable Revolution", became Provincial Secretary. Other members of the Council were Hugh Bell, Wm. F. Desbarres, Lawrence O.C. Doyle, Herbert Huntingdon, James McNab, Michael Tobin, and George R. Young.
The colony of New Brunswick soon followed in May 1848 when Lieutenant Governor Edmund Walker Head brought in a more balanced representation of Members of the Legislative Assembly to the Executive Council and ceded more powers to that body.
In the Province of Canada, responsible government was introduced with the ministry of Louis-Hippolyte LaFontaine and Robert Baldwin in spring 1848; it was put to the test in 1849, when Reformers in the legislature passed the Rebellion Losses Bill. This was a law that provided compensation to French-Canadians who suffered losses during the Rebellions of 1837–1838 in Lower-Canada.
The Governor General, Lord Elgin, had serious misgivings about the bill but nonetheless assented to it despite demands from the Tories that he refuse to do so. Elgin was physically assaulted by an English-speaking mob for this, and the Montreal Parliament building was burned to the ground in the ensuing riots. Nonetheless, the Rebellion Losses Bill helped entrench responsible government into Canadian politics.
In time, the granting of responsible government became the first step on the road to complete independence. Canada gradually gained greater and greater autonomy over a considerable period of time through inter-imperial and Commonwealth diplomacy, including the British North America Act of 1867, the Statute of Westminster of 1931, and even as late as the patriation of the Constitution Act in 1982 (see Constitution of Canada).
While the various colonies in Australia were either sparsely populated or penal settlements or both, executive power was in the hands of the Governors, who, because of the great distance from their superiors in London and the resulting very slow communication, necessarily exercised vast powers.
However, the early colonists, coming mostly from the United Kingdom, were familiar with the Westminster system and made efforts to reform local government in order to increase the opportunity for ordinary men to participate.
The Governors and London therefore set in motion a gradual process of establishing a Westminster system in the colonies, not so fast as to get ahead of population or economic growth, nor so slow as to provoke clamouring for revolutionary change as happened in the American Revolutionary War and threatened in the Rebellions of 1837–1838 in Canada. This first took the form of appointed or partially elected Legislative Councils. By the 1850s the Australian colonies (with the exception of Western Australia, which did not do so until 1890) and New Zealand had established both representative and responsible government.
The Cape Colony, in Southern Africa, was under responsible self-government from 1872 until 1910 when it became the Cape Province of the new Union of South Africa.
Under its previous system of representative government, the Ministers of the Cape Government reported directly to the British Imperial Governor, and not to the locally elected representatives in the Cape Parliament. Among Cape citizens of all races, growing anger at their powerlessness in influencing unpopular imperial decisions had repeatedly led to protests and rowdy political meetings – especially during the early "Convict Crisis" of the 1840s.
A popular political movement for responsible government soon emerged, under local leader John Molteno. A protracted struggle was then conducted over the ensuing years as the movement (known informally as "the responsibles") grew increasingly powerful, and used their parliamentary majority to put pressure on the British Governor, withholding public finances from him, and conducting public agitations. Not everyone favoured responsible government though, and pro-imperial press outlets even accused the movement of constituting "crafts and assaults of the devil".
Supporters believed that the most effective means of instituting responsible government was simply to change the section of the constitution which prevented government officials from being elected to parliament or members of parliament from serving in executive positions. The conflict therefore centred on the changing of this specific section. "Although responsible government merely required an amendment to s.79 of the constitution, it transpired only after nearly twenty years in 1872 when the so-called "responsibles" under Molteno were able to command sufficient support in both houses to secure the passage of the necessary bill." Finally, with a parliamentary majority and with the Colonial Office and new Governor Henry Barkly won over, Molteno instituted responsible government, making the Ministers directly responsible to the Cape Parliament, and becoming the Cape's first Prime Minister.
The ensuing period saw an economic recovery, a massive growth in exports and an expansion of the colony's frontiers. Despite political complications that arose from time to time (such as an ill-fated scheme by the British Colonial Office to enforce a confederation in Southern Africa in 1878, and tensions with the Afrikaner-dominated Government of Transvaal over trade and railroad construction), economic and social progress in the Cape Colony continued at a steady pace until a renewed attempt to extend British control over the hinterland caused the outbreak of the Anglo-Boer Wars in 1899.
An important feature of the Cape Colony under responsible government was that it was the only state in southern Africa (and one of very few in the world at the time) to have a non-racial system of voting.
Later however – following the South Africa Act 1909 to form the Union of South Africa – this multi-racial franchise was steadily eroded, and eventually abolished after the apartheid government came to power in 1948.
In the early 1860s, the Prussian Prime Minister Otto von Bismarck was involved in a bitter dispute with the Liberals, who sought to institute a system of responsible government modeled on that of Britain. Bismarck, who strongly opposed that demand, managed to deflect the pressure by embarking energetically and successfully on the unification of Germany. The Liberals, who were also strong German nationalists, backed Bismarck's unification efforts and tacitly accepted that the Constitution of Imperial Germany, crafted by Bismarck, did not include a responsible government – the Chancellor being accountable solely to the emperor and needing no parliamentary confidence. Germany gained a responsible government only with the Weimar Republic and more securely with the creation of the Federal Republic of Germany. Historians regard the lack of responsible government in the formative decades of united Germany as one of the factors contributing to the prolonged weakness of German democratic institutions, a weakness that persisted even after such a government was finally instituted.
Rural flight
Rural flight (or rural exodus) is the migratory pattern of people from rural areas into urban areas. It is urbanization seen from the rural perspective.
In modern times, it often occurs in a region following the industrialization of agriculture—when fewer people are needed to bring the same amount of agricultural output to market—and related agricultural services and industries are consolidated. Rural flight is exacerbated when the population decline leads to the loss of rural services (such as business enterprises and schools), which leads to greater loss of population as people leave to seek those features.
Prior to the Industrial Revolution, rural flight occurred in mostly localized regions. Pre-industrial societies did not experience large rural-urban migration flows primarily due to the inability of cities to support large populations. Lack of large employment industries, high urban mortality, and low food supplies all served as checks keeping pre-industrial cities much smaller than their modern counterparts. Ancient Athens and Rome, scholars estimate, had peak populations of 80,000 and 500,000, respectively.
The onset of the Industrial Revolution in Europe in the late 18th century removed many of these checks. As food supplies increased and stabilized and industrialized centers arose, cities began to support larger populations, sparking the start of rural flight on a massive scale. The United Kingdom went from having 20% of the population living in urban areas in 1800 to more than 70% by 1925. While the late 19th century and early 20th century saw much of rural flight focused in Western Europe and the United States, as industrialization spread throughout the world during the 20th century, rural flight and urbanization followed quickly behind. Today, rural flight is an especially distinctive phenomenon in some of the newer urbanized areas including China and more recently sub-Saharan Africa.
The shift from mixed subsistence farming to commodity crops and livestock began in the late 19th century. New capital market systems and the railroad network began the trend towards larger farms that employed fewer people per acre. These larger farms used more efficient technologies such as steel plows, mechanical reapers, and higher-yield seed stock, which reduced human input per unit of production. The other issue on the Great Plains was that people were using inappropriate farming techniques for the soil and weather conditions. Most homesteaders had family farms generally considered too small to survive (under 320 acres), and European-American subsistence farming could not continue as it was then practiced.
During the Dust Bowl and the Great Depression of the 1930s, large numbers of people fled rural areas of the Great Plains and the Midwest due to depressed commodity prices and high debt loads exacerbated by several years of drought and large dust storms. Rural flight from the Great Plains has been depicted in literature, such as John Steinbeck's novel "The Grapes of Wrath" (1939), in which a family from the Great Plains migrates to California during the Dust Bowl period of the 1930s.
Post-World War II rural flight has been caused primarily by the spread of industrialized agriculture. Small, labor-intensive family farms have grown into, or have been replaced by, heavily mechanized and specialized industrial farms. While a small family farm typically produced a wide range of crop, garden, and animal products—all requiring substantial labor—large industrial farms typically specialize in just a few crop or livestock varieties, using large machinery and high-density livestock containment systems that require a fraction of the labor per unit produced. For example, Iowa State University reports the number of hog farmers in Iowa dropped from 65,000 in 1980 to 10,000 in 2002, while the number of hogs per farm increased from 200 to 1,400.
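The Iowa figures above imply that total output stayed roughly constant while the number of farms collapsed; a small illustrative calculation (using only the numbers cited in the paragraph) makes the consolidation explicit:

```python
# Illustrative arithmetic from the Iowa hog-farming figures cited above
# (Iowa State University: 1980 vs 2002).
farms_1980, hogs_per_farm_1980 = 65_000, 200
farms_2002, hogs_per_farm_2002 = 10_000, 1_400

total_1980 = farms_1980 * hogs_per_farm_1980  # 13,000,000 hogs
total_2002 = farms_2002 * hogs_per_farm_2002  # 14,000,000 hogs

# Output was roughly unchanged, yet the number of farms fell by ~85%:
decline = 1 - farms_2002 / farms_1980
print(total_1980, total_2002, f"{decline:.0%}")  # 13000000 14000000 85%
```

The same herd, concentrated on far fewer operations, is the mechanism by which mechanized agriculture sheds rural labor.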
The consolidation of the feed, seed, processed grain, and livestock industries has meant that there are fewer small businesses in rural areas, which in turn exacerbates the declining demand for labor. Rural areas that once provided employment for all young adults willing to work in challenging conditions increasingly offer fewer opportunities. The situation is made worse by the loss of services such as schools, businesses, and cultural opportunities that accompanies the decline in population, and the increasing age of the remaining population further stresses the social service system of rural areas.
The rise of corporate agricultural structures directly affects small rural communities, resulting in decreased populations, decreased incomes for some segments, increased income inequality, decreased community participation, fewer retail outlets and less retail trade, and increased environmental pollution. "Human dehabitation" of rural settlements is a megatrend in aging societies across the globe, perhaps partially reversing a historic boom in land use for settlements that coincided with population growth that began in earnest alongside the spread of the industrial revolution and curative medicine. Since the 1990s, China has merged schools into more centralized village-, town-, or county-level schools in rural areas to address some of these very problems. Chernobyl is one example of how human abandonment of land can lead to the return of abundant animal life.
There are several determinants, push and pull, that contribute to rural flight: lower levels of (perceived) economic opportunity in rural communities versus urban ones, lower levels of government investment in rural communities, greater education opportunities in cities, marriages, increased social acceptance in urban areas, and higher levels of rural fertility.
Some migrants choose to leave rural communities out of the desire to pursue greater economic opportunity in urban areas. Greater economic opportunities can be real or perceived. According to the Harris-Todaro Model, migration to urban areas will continue as long as "expected urban real income at the margin exceeds real agricultural product" (127). However, sociologist Josef Gugler points out that while individual benefits of increased wages may outweigh the costs of migration, if enough individuals follow this rationale, it can produce harmful effects such as overcrowding and unemployment on a national level. This phenomenon, when the rate of urbanization outpaces the rate of economic growth, is known as overurbanization. Since the industrialization of agriculture, mechanization has reduced the number of jobs present in rural communities. Some scholars have also attributed rural flight to the effects of globalization as the demand for increased economic competitiveness leads people to choose capital over labor. At the same time, rural fertility rates have historically been higher than urban fertility rates. The combination of declining rural jobs and a persistently high rural fertility rate has led to rural-urban migration streams. Rural flight also contains a positive feedback loop where previous migrants from rural communities assist new migrants in adjusting to city life. Also known as chain migration, migrant networks lower barriers to rural flight. For example, an overwhelming majority of rural migrants in China located jobs in urban areas through migrant networks.
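The Harris-Todaro condition quoted above can be sketched as a simple decision rule: migration continues while the expected urban real income (the urban wage weighted by the probability of actually finding a job) exceeds rural income. The wage and probability values below are hypothetical, chosen only to illustrate the comparison:

```python
# A minimal sketch of the Harris-Todaro migration condition described above.
# All numeric inputs here are hypothetical illustrations, not empirical data.

def expected_urban_income(urban_wage: float, employment_prob: float) -> float:
    """Expected urban real income: formal-sector wage times the chance of employment."""
    return urban_wage * employment_prob

def migration_continues(urban_wage: float, employment_prob: float,
                        rural_income: float) -> bool:
    """Migration persists while expected urban income exceeds rural income."""
    return expected_urban_income(urban_wage, employment_prob) > rural_income

# Even a 40% chance of formal employment can make migration individually
# rational when the urban-rural wage gap is large enough:
print(migration_continues(urban_wage=100.0, employment_prob=0.4,
                          rural_income=30.0))  # True
```

This also shows Gugler's point: each migrant's calculation can be individually rational even when the aggregate result is urban overcrowding and unemployment, since the employment probability each newcomer faces is lowered by the arrival of the others.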
Some families choose to send their children to cities as a form of investment for the future. A study conducted by Bates and Bennett (1974) concluded that rural communities in Zambia that had other viable investment opportunities, like livestock for instance, had lower rates of rural-urban migration as compared to regions without viable investment opportunities. Sending children into cities can serve as a long-term investment, with the hope that they will be able to send remittances back home after getting a job in the city.
There are severe challenges faced by poorer people in the agriculture sector because of diminishing access to productive farmland. Foreign investors through Foreign Direct Investment (FDI) schemes have been encouraged to lease land in rural areas in Cambodia and Ethiopia. This has led to the loss of farmland, range land, woodlands and water sources from local communities. Large-scale agricultural projects funded by FDI only employed a few experts specialized in the relevant new technologies.
In other instances, rural flight may occur in response to social determinants. A study conducted in 2012 indicated that a significant proportion of rural flight in India occurred due to social factors such as migration with household, marriage, and education. Migration with households and marriage affect women in particular as most often they are the ones required to move with households and move for marriage, especially in developing regions.
Rural youth may choose to leave their rural communities as a method of transitioning into adulthood, seeking avenues to greater prosperity. With the stagnation of the rural economy and encouragement from their parents, rural youth may choose to migrate to cities in line with social norms that frame migration as demonstrating leadership and self-respect. With this societal encouragement combined with depressed rural economies, rural youth form a large proportion of the migrants moving to urban areas. In Sub-Saharan Africa, a study conducted by Touray in 2006 indicated that about 15% (26 million) of urban migrants were youth.
Lastly, natural disasters can often be single-point events that lead to temporarily massive rural-urban migration flows. The 1930s Dust Bowl in the United States, for example, led to the flight of 2.5 million people from the Plains by 1940, many to the new cities in the West. It is estimated that as many as one out of every four residents in the Plains States left during the 1930s. More recently, drought in Syria from 2006–2011 has prompted a rural exodus to major urban centers. Massive influxes in urban areas, combined with difficult living conditions, have prompted some scholars to link the drought to the arrival of the Arab Spring in Syria.
The terms are used in the United States and Canada to describe the flight of people from rural areas in the Great Plains and Midwest regions, and to a lesser extent rural areas of the northeast and southeast and Appalachia. It is also particularly noticeable in parts of Atlantic Canada (especially Newfoundland), since the collapse of the Atlantic cod fishery in 1992.
China, like many other currently industrializing countries, has had a relatively late start to rural flight. Until 1983, the Chinese government, through the hukou system, greatly restricted the ability of their citizens to internally migrate. Since 1983, the Chinese government has progressively lifted the restrictions on internal migration. This has led to a great increase in the number of people migrating to urban areas. However, even today, the hukou system limits the ability of rural migrants to receive full access to urban social services at the urban subsidized costs.
As with most examples of rural flight, several factors have led towards China's massive urbanization. Income disparity, family pressure, surplus labor in rural areas due to higher average fertility rates, and improved living conditions all play a role in contributing to the flows of migrants from rural to urban areas. Approximately 250 million rural migrants now live in cities, with 54% of the total Chinese population living in urban areas.
A focus by landowners on efficient production led to the enclosure of the commons in the 16th and 17th centuries. This created unrest in rural areas as tenants were then unable to graze their livestock. They sometimes resorted to illegal means to support their families. This was followed, in turn, by penal transportation, which sent offenders out of the country, often to Australia. Eventually, economic measures produced the British Agricultural Revolution.
Rural flight has been occurring to some degree in Germany since the 11th century. A corresponding principle of German law is "Stadtluft macht frei" ("city air makes you free"), in longer form "Stadtluft macht frei nach Jahr und Tag" ("city air makes you free after a year and a day"): by custom and, from 1231/32, by statute, a serf who had spent a year and a day in a city was free, and could not be reclaimed by their former master.
"Landflucht" ("flight from the land") refers to the mass migration of peasants into the cities that occurred in Germany (and throughout most of Europe) in the late 19th century.
In 1870 the rural population of Germany constituted 64% of the population; by 1907 it had shrunk to 33%. In 1900 alone, the Prussian provinces of East Prussia, West Prussia, Posen, Silesia, and Pomerania lost about 1,600,000 people to the cities, where these former agricultural workers were absorbed into the rapidly growing factory labor class. One of the causes of this mass migration was the decrease in rural income compared to the rates of pay in the cities.
Landflucht resulted in a major transformation of the German countryside and agriculture. Mechanized agriculture and migrant workers, particularly Poles from the east (Sachsengänger), became more common. This was especially true in the province of Posen, which Prussia had gained when Poland was partitioned. The Polish population of eastern Germany was one of the justifications for the creation of the "Polish corridor" after World War I and the absorption of the land east of the Oder-Neisse line into Poland after World War II. Also, some labor-intensive enterprises were replaced by much less labor-intensive ones such as game preserves.
The word "Landflucht" has negative connotations in German, as it was coined by agricultural employers, often of the German aristocracy, who were lamenting their labor shortages.
The rural exodus of Scotland followed that of England, but delayed by several centuries. Consolidation of farms and elimination of inefficient tenants occurred over about 110 years from the 18th to the 19th centuries. Samuel Johnson encountered this in 1773 and documented it in his work "A Journey to the Western Islands of Scotland." He deplored the exodus but did not have the information to analyze the problem.
Rural flight and out-migration in Sweden can be traced in two distinct waves. The first, beginning in the 1850s when 82% of the Swedish population lived in rural areas, and continuing till the late 1880s, was mostly due to push factors in the countryside related to poverty, unemployment, low agricultural wages, debt peonage, semi-feudalism, and religious oppression by the State church. Most of the migration was ad-hoc and directed towards emigration to the three big cities of Sweden, America, Denmark, or Germany. Many of these first emigrants were unskilled, barely literate laborers who sought farm work or daily wage labour in the cities.
The second wave started from the late 1890s and reached its peak between 1922 and 1967, with the highest rates of rural flight occurring in the 1920s and the 1950s. This wave was driven mostly by "pull factors": the economic boom and industrial prosperity in Sweden, wherein massive economic expansion and wage increases in the urban areas pulled young people to migrate for work while work opportunities in the countryside declined. Between 1925 and 1965, Sweden's GDP per capita increased from US$850 to US$6200. Simultaneously, the percentage of the population living in rural areas decreased drastically from 54% in 1925 to 21% in 1965.
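The Swedish GDP-per-capita figures above imply a compound annual growth rate over the forty-year span; a quick calculation using only the numbers given in the paragraph shows how strong the urban "pull" was:

```python
# Implied average annual growth from the figures cited above (1925-1965).
gdp_1925, gdp_1965, years = 850, 6200, 40

# Compound annual growth rate: (end/start)^(1/years) - 1
cagr = (gdp_1965 / gdp_1925) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 5.1% per year
```

A sustained ~5% annual rise in average income is consistent with the paragraph's claim that urban wage growth, rather than rural distress, drove this wave of migration.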
Rural flight began later for Russia and the former states of the USSR than in Western Europe. In 1926 only 18% of Russians lived in urban areas, compared to over 75% at the same time in the United Kingdom. Although the process began later, throughout World War II and the decades immediately following, rural flight proceeded at a rapid pace. By 1965, 53% of Russians lived in urban areas. Statistics compiled by M. Ya Sonin, a Soviet author, in 1959, demonstrate the rapid urbanization of the USSR. Between 1939 and 1959, the rural population declined by 21.3 million, while that of urban centers increased by 39.4 million. Of this dramatic shift in population, rural flight accounts for more than 60% of the change. Generally, most rural migrants tended to settle in cities and towns within their district. Rural flight persisted through the majority of the 20th century. However, with the end of the Soviet Union, rural flight reversed as political and economic instability in the cities prompted many urban dwellers to return to rural villages.
Rural flight did not occur uniformly throughout the USSR. Western Russia and Ukraine experienced the greatest declines in rural population, 30% and 17% respectively. Conversely, peripheral regions of the USSR, like Central Asia, experienced gains, contradicting the general pattern of rural-urban migration of this period. Increased diversification of crops and labor shortages were primary contributors to the gains in rural population in the periphery.
Rural flight in Russia and the former USSR had several major determinants. The industrialization of agriculture, which came later in Russia and the former USSR, led to declines in available rural jobs. Lower living standards and tough work also motivated some peasants to migrate to urban areas. In particular, the Soviet "kolkhoz" system (the collective farms in the Soviet Union) helped keep living standards low for Soviet peasants. Beginning around 1928, the kolkhoz system replaced family farms throughout the Soviet Union. Forced to work long hours for low pay at rates fixed by the government and often unadjusted to inflation, Russian peasants experienced quite low living conditions, especially compared to urban life. While Brezhnev's wage reforms in 1965 ameliorated the low wages received by peasants, rural life remained suffocating, especially for the skilled and the educated.
Although migrants came from all segments of society, several groups were more likely to migrate than others. Like other examples of rural flight, the young were more likely than the old to migrate to the cities. Young women under 20 were the most likely segment of the population to leave rural life. This exodus of young women further exacerbated the demographic transitions occurring in rural communities as the rate of natural increase dropped precipitously over the course of the 20th century. Lastly, the skilled and educated were also likely to migrate to urban areas.
Rural flight in Mexico occurred throughout the 1930s up until the present day. Like other developing nations, the beginning of industrialization in Mexico quickly accelerated the rate of rural flight.
In the 1930s, President Cardenas implemented a series of agricultural reforms that led to massive redistribution of agricultural land among the rural peasants. Some commentators have subsequently dubbed the period from 1940–1965 as the "Golden Era for Mexican Migration." During this period, Mexican agriculture grew at an average rate of 5.7% outpacing the natural increase of 3% of the rural population. Concurrently, government policies favoring industrialization led to a massive increase of industrial jobs in the cities. Statistics compiled in Mexico City demonstrate this trend with over 1.8 million jobs created over the course of the 1940s, 50s, and 60s. Young people with schooling were the segment of population most likely to migrate away from rural life to urban life, attracted by the promise of many jobs and a more modern lifestyle as compared to the conservative conditions in rural villages. Additionally, due to the large demand for new workers, many of these jobs had low entrance requirements that also provided on-site job training opening the avenue for migration to many rural residents. From 1940 to about 1965, rural flight occurred in a slow, yet steady pace with both agriculture and industry growing concurrently.
However, as government policies increasingly favored industry over agriculture, rural conditions began to deteriorate. In 1957, the Mexican government began to regulate the price of maize through massive imports in order to keep urban food costs low. This regulation severely undercut the market price of maize, lowering the profit margins of small farmers. At the same time, the Green Revolution had entered into Mexican agriculture. Inspired by the work of Norman Borlaug, farmers who employed hybrid seeds and fertilizer supplements were able to double or even triple their yields per acre. Unfortunately, these products came at a relatively high cost, out of the reach of many farmers struggling after the fall in the price of maize. The combined effect of the maize price regulation and the Green Revolution was the consolidation of small farms into larger estates. A 1974 study conducted by Osorio concluded that in 1960, about 50.3% of the individual land plots in Mexico contained less than 5 hectares of land. In contrast, the top 0.5% of estates by land spanned 28.3% of all arable land. As many small farmers lost land, they either migrated to the cities or became migrant workers roving from large estate to large estate. Between 1950 and 1970, the proportion of migrant workers increased from 36.7% to 54% of the total population. The centralized pattern of industrial development and government policies overwhelmingly favoring industrialization contributed to massive rural flight in Mexico beginning in the late 1960s until the present day.
Rural migrants to cities face several challenges that may hinder their quality of life upon moving into urbanized areas. Many migrants do not have the education or skills to acquire decent jobs in cities and are then forced into unstable, low paying jobs. The steady stream of new rural migrants worsens underemployment and unemployment, common among rural migrants. Employers offer lower wages and poorer labor conditions to rural migrants, who must compete with each other for limited jobs, often unaware of their labor rights. Rural migrants often experience poor living conditions as well. Many cities have exploded in population; services and infrastructure in these cities are unable to keep up with population growth. Massive influxes of rural migrants can lead to severe housing shortages, inadequate water and energy supply, and general slum-like conditions throughout cities.
Additionally, rural migrants often struggle to adjust to city life. In some instances, there are cultural differences between the rural and urban areas of a region. Adrift in urban regions, migrants can find it difficult to hold onto their cultural traditions. Urban residents may also look down upon these newcomers to the city, who are often unaware of city social norms. Both marginalized and separated from their home cultures, migrants face many social challenges when moving to cities.
Women, in particular, face a unique set of challenges. Some women undergo rural flight to escape domestic abuse or forced early marriages. Some parents choose to send women to cities to find jobs in order to send remittances back home. Once in the city, employers may attempt to take advantage of these women, preying on their unfamiliarity with labor laws and their lack of social networks to rely on. In the worst of cases, destitution may force women into prostitution, exposing them to social stigma and the risks of sexually transmitted diseases.
Paleolithic
The Paleolithic or Palaeolithic or Palæolithic, also called the Old Stone Age, is a period in human prehistory distinguished by the original development of stone tools, covering roughly 99% of the period of human technological prehistory. It extends from the earliest known use of stone tools by hominins 3.3 million years ago, to the end of the Pleistocene 11,650 cal BP.
The Paleolithic Age in Europe preceded the Mesolithic Age, although the date of the transition varies geographically by several thousand years. During the Paleolithic Age, hominins grouped together in small societies such as bands and subsisted by gathering plants, fishing, and hunting or scavenging wild animals. The Paleolithic Age is characterized by the use of knapped stone tools, although at the time humans also used wood and bone tools. Other organic commodities were adapted for use as tools, including leather and vegetable fibers; however, due to rapid decomposition, these have not survived to any great degree.
About 50,000 years ago a marked increase in the diversity of artifacts occurred. In Africa, bone artifacts and the first art appear in the archaeological record. The first evidence of human fishing is also noted, from artifacts in places such as Blombos cave in South Africa. Archaeologists classify artifacts of the last 50,000 years into many different categories, such as projectile points, engraving tools, knife blades, and drilling and piercing tools.
Humankind gradually evolved from early members of the genus "Homo"—such as "Homo habilis", who used simple stone tools—into anatomically modern humans as well as behaviourally modern humans by the Upper Paleolithic. During the end of the Paleolithic Age, specifically the Middle or Upper Paleolithic Age, humans began to produce the earliest works of art and to engage in religious or spiritual behavior such as burial and ritual. Conditions during the Paleolithic Age went through a set of glacial and interglacial periods in which the climate periodically fluctuated between warm and cool temperatures. Archaeological and genetic data suggest that the source populations of Paleolithic humans survived in sparsely-wooded areas and dispersed through areas of high primary productivity while avoiding dense forest-cover.
By BP, the first humans set foot in Australia. By BP, humans lived at 61°N latitude in Europe. By BP, Japan was reached, and by BP humans were present in Siberia, above the Arctic Circle. At the end of the Upper Paleolithic Age a group of humans crossed Beringia and quickly expanded throughout the Americas.
The term "Palaeolithic" was coined by archaeologist John Lubbock in 1865. It derives from Greek: παλαιός, "palaios", "old"; and λίθος, "lithos", "stone", meaning "old age of the stone" or "Old Stone Age".
The Paleolithic coincides almost exactly with the Pleistocene epoch of geologic time, which lasted from 2.6 million years ago to about 12,000 years ago. This epoch experienced important geographic and climatic changes that affected human societies.
During the preceding Pliocene, continents had continued to drift from possibly as far as from their present locations to positions only from their current location. South America became linked to North America through the Isthmus of Panama, bringing a nearly complete end to South America's distinctive marsupial fauna. The formation of the isthmus had major consequences on global temperatures, because warm equatorial ocean currents were cut off, and the cold Arctic and Antarctic waters lowered temperatures in the now-isolated Atlantic Ocean.
Most of Central America formed during the Pliocene to connect the continents of North and South America, allowing fauna from these continents to leave their native habitats and colonize new areas. Africa's collision with Asia created the Mediterranean, cutting off the remnants of the Tethys Ocean. During the Pleistocene, the modern continents were essentially at their present positions; the tectonic plates on which they sit have probably moved at most from each other since the beginning of the period.
Climates during the Pliocene became cooler and drier, and seasonal, similar to modern climates. Ice sheets grew on Antarctica. The formation of an Arctic ice cap around 3 million years ago is signaled by an abrupt shift in oxygen isotope ratios and ice-rafted cobbles in the North Atlantic and North Pacific Ocean beds. Mid-latitude glaciation probably began before the end of the epoch. The global cooling that occurred during the Pliocene may have spurred on the disappearance of forests and the spread of grasslands and savannas.
The Pleistocene climate was characterized by repeated glacial cycles during which continental glaciers pushed to the 40th parallel in some places. Four major glacial events have been identified, as well as many minor intervening events. A major event is a general glacial excursion, termed a "glacial". Glacials are separated by "interglacials". During a glacial, the glacier experiences minor advances and retreats. The minor excursion is a "stadial"; times between stadials are "interstadials". Each glacial advance tied up huge volumes of water in thick continental ice sheets, resulting in temporary drops in sea level over the entire surface of the Earth. During interglacial times, such as at present, drowned coastlines were common, mitigated by isostatic or other emergent motion of some regions.
The effects of glaciation were global. Antarctica was ice-bound throughout the Pleistocene and the preceding Pliocene. The Andes were covered in the south by the Patagonian ice cap. There were glaciers in New Zealand and Tasmania. The now decaying glaciers of Mount Kenya, Mount Kilimanjaro, and the Ruwenzori Range in east and central Africa were larger. Glaciers existed in the mountains of Ethiopia and to the west in the Atlas mountains. In the northern hemisphere, many glaciers fused into one. The Cordilleran Ice Sheet covered the North American northwest; the Laurentide covered the east. The Fenno-Scandian ice sheet covered northern Europe, including Great Britain; the Alpine ice sheet covered the Alps. Scattered domes stretched across Siberia and the Arctic shelf. The northern seas were frozen. During the late Upper Paleolithic (Latest Pleistocene), the Beringia land bridge between Asia and North America was blocked by ice, which may have prevented early Paleo-Indians such as the Clovis culture from directly crossing Beringia to reach the Americas.
According to data collected by Mark Lynas, the Pleistocene's overall climate could be characterized as a continuous El Niño, with trade winds in the south Pacific weakening or heading east, warm air rising near Peru, warm water spreading from the west Pacific and the Indian Ocean to the east Pacific, and other El Niño markers.
The Paleolithic is often held to end with the close of the last ice age (the end of the Pleistocene epoch), when Earth's climate became warmer. This may have caused or contributed to the extinction of the Pleistocene megafauna, although it is also possible that the late Pleistocene extinctions were (at least in part) caused by other factors such as disease and overhunting by humans. New research suggests that the extinction of the woolly mammoth may have been caused by the combined effect of climatic change and human hunting. Scientists suggest that climate change during the end of the Pleistocene caused the mammoths' habitat to shrink in size, resulting in a drop in population. The small populations were then hunted out by Paleolithic humans. The global warming that occurred during the end of the Pleistocene and the beginning of the Holocene may have made it easier for humans to reach mammoth habitats that were previously frozen and inaccessible. Small populations of woolly mammoths survived on isolated Arctic islands, Saint Paul Island and Wrangel Island, into the mid-Holocene. The Wrangel Island population became extinct around the same time the island was settled by prehistoric humans. There is no evidence of prehistoric human presence on Saint Paul Island (though early human settlements dating as far back as 6500 BP were found on the nearby Aleutian Islands).
Nearly all of our knowledge of Paleolithic human culture and way of life comes from archaeology and ethnographic comparisons to modern hunter-gatherer cultures such as the !Kung San, who live similarly to their Paleolithic predecessors. The economy of a typical Paleolithic society was a hunter-gatherer economy. Humans hunted wild animals for meat and gathered food, firewood, and materials for their tools, clothes, or shelters.
Human population density was very low, around only one person per square mile. This was most likely due to low body fat, infanticide, women regularly engaging in intense endurance exercise, late weaning of infants, and a nomadic lifestyle. Like contemporary hunter-gatherers, Paleolithic humans enjoyed an abundance of leisure time unparalleled in both Neolithic farming societies and modern industrial societies. At the end of the Paleolithic, specifically the Middle or Upper Paleolithic, humans began to produce works of art such as cave paintings, rock art and jewellery and began to engage in religious behavior such as burial and ritual.
At the beginning of the Paleolithic, hominins were found primarily in eastern Africa, east of the Great Rift Valley. Most known hominin fossils dating earlier than one million years before present are found in this area, particularly in Kenya, Tanzania, and Ethiopia.
Early in the Paleolithic, groups of hominins began leaving Africa and settling southern Europe and Asia. The Southern Caucasus and northern China were occupied during this expansion. By the end of the Lower Paleolithic, members of the hominin family were living in what is now China, western Indonesia, and, in Europe, around the Mediterranean and as far north as England, France, southern Germany, and Bulgaria. Their further northward expansion may have been limited by the lack of control of fire: studies of cave settlements in Europe indicate no regular use of fire until relatively late in the Lower Paleolithic.
East Asian fossils from this period are typically placed in the genus "Homo erectus". Very little fossil evidence is available at known Lower Paleolithic sites in Europe, but it is believed that hominins who inhabited these sites were likewise "Homo erectus". There is no evidence of hominins in America, Australia, or almost anywhere in Oceania during this time period.
The fates of these early colonists, and their relationships to modern humans, are still subject to debate. According to current archaeological and genetic models, there were at least two notable expansion events subsequent to the initial peopling of Eurasia. Around 500,000 BP a group of early humans, frequently called "Homo heidelbergensis", came to Europe from Africa and eventually evolved into "Homo neanderthalensis" (Neanderthals). In the Middle Paleolithic, Neanderthals were present in the region now occupied by Poland.
Both "Homo erectus" and "Homo neanderthalensis" became extinct by the end of the Paleolithic. Descended from "Homo sapiens", the anatomically modern "Homo sapiens sapiens" emerged in eastern Africa, left Africa around 50,000 BP, and expanded throughout the planet. Multiple hominin groups coexisted for some time in certain locations. "Homo neanderthalensis" were still found in parts of Eurasia until relatively late in the Paleolithic, and engaged in an unknown degree of interbreeding with "Homo sapiens sapiens". DNA studies also suggest an unknown degree of interbreeding between "Homo sapiens sapiens" and "Homo sapiens denisova".
Hominin fossils not belonging either to "Homo neanderthalensis" or to "Homo sapiens", found in the Altai Mountains and Indonesia, have been radiocarbon dated to the Late Pleistocene.
For the duration of the Paleolithic, human populations remained low, especially outside the equatorial region. The entire population of Europe between 16,000 and 11,000 BP likely averaged some 30,000 individuals, and between 40,000 and 16,000 BP, it was even lower at 4,000–6,000 individuals.
Paleolithic humans made tools of stone, bone, and wood. The early Paleolithic hominins, "Australopithecus", were the first users of stone tools. Excavations in Gona, Ethiopia have produced thousands of artifacts, and through radioisotopic dating and magnetostratigraphy, the sites can be firmly dated to 2.6 million years ago. Evidence shows these early hominins intentionally selected raw materials with good flaking qualities and chose appropriately sized stones for their needs to produce sharp-edged tools for cutting.
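The radioisotopic dating mentioned here rests on exponential decay: the ratio of accumulated daughter isotope to remaining parent isotope fixes the elapsed time. A minimal sketch with illustrative numbers (the ratio below is an assumption chosen for the example, not the actual Gona measurements; the half-life is the K-Ar value):

```python
import math

def radiometric_age(daughter_to_parent_ratio, half_life_years):
    """Age from an accumulated daughter/parent isotope ratio.

    Solves D/P = e^(lambda * t) - 1 for t, where
    lambda = ln(2) / half-life.
    """
    decay_const = math.log(2) / half_life_years
    return math.log(1 + daughter_to_parent_ratio) / decay_const

# Hypothetical K-Ar-style measurement (40K half-life ~1.248 Gyr):
# a daughter/parent ratio near 0.001445 corresponds to roughly 2.6 Ma,
# the age cited for the Gona artifacts.
age = radiometric_age(0.001445, 1.248e9)
print(round(age / 1e6, 2))  # ~2.6 (million years)
```

Magnetostratigraphy then cross-checks such ages against the known sequence of geomagnetic reversals.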
The earliest Paleolithic stone tool industry, the Oldowan, began around 2.6 million years ago. It contained tools such as choppers, burins, and stitching awls. It was completely replaced around 250,000 years ago by the more complex Acheulean industry, which was first conceived by "Homo ergaster" around 1.8–1.65 million years ago. The Acheulean implements completely vanish from the archaeological record around 100,000 years ago and were replaced by more complex Middle Paleolithic tool kits such as the Mousterian and the Aterian industries.
Lower Paleolithic humans used a variety of stone tools, including hand axes and choppers. Although they appear to have used hand axes often, there is disagreement about their use. Interpretations range from cutting and chopping tools to digging implements, flaking cores, use in traps, and purely ritual significance, perhaps in courting behavior. William H. Calvin has suggested that some hand axes could have served as "killer Frisbees" meant to be thrown at a herd of animals at a waterhole so as to stun one of them. There are no indications of hafting, and some artifacts are far too large for that. Thus, a thrown hand axe would not usually have penetrated deeply enough to cause very serious injuries. Nevertheless, it could have been an effective weapon for defense against predators. Choppers and scrapers were likely used for skinning and butchering scavenged animals, and sharp-ended sticks were often obtained for digging up edible roots. Presumably, early humans used wooden spears as early as 5 million years ago to hunt small animals, much as their relatives, chimpanzees, have been observed to do in Senegal, Africa. Lower Paleolithic humans constructed shelters, such as the possible wood hut at Terra Amata.
Fire was used by the Lower Paleolithic hominins "Homo erectus" and "Homo ergaster" as early as 300,000 to 1.5 million years ago and possibly even earlier by the early Lower Paleolithic (Oldowan) hominin "Homo habilis" or by robust australopithecines such as "Paranthropus". However, the use of fire only became common in the societies of the following Middle Stone Age and Middle Paleolithic. Use of fire reduced mortality rates and provided protection against predators. Early hominins may have begun to cook their food as early as the Lower Paleolithic or, at the latest, in the early Middle Paleolithic. Some scientists have hypothesized that hominins began cooking food to defrost frozen meat, which would help ensure their survival in cold regions.
The Lower Paleolithic "Homo erectus" possibly invented rafts to travel over large bodies of water, which may have allowed a group of "Homo erectus" to reach the island of Flores and evolve into the small hominin "Homo floresiensis". However, this hypothesis is disputed within the anthropological community. The possible use of rafts during the Lower Paleolithic may indicate that Lower Paleolithic hominins such as "Homo erectus" were more advanced than previously believed, and may have even spoken an early form of modern language. Supplementary evidence from Neanderthal and modern human sites located around the Mediterranean Sea, such as Coa de sa Multa, has also indicated that both Middle and Upper Paleolithic humans used rafts to travel over large bodies of water (i.e. the Mediterranean Sea) for the purpose of colonizing other bodies of land.
By around 200,000 BP, Middle Paleolithic stone tool manufacturing spawned a tool-making technique known as the prepared-core technique, which was more elaborate than previous Acheulean techniques. This technique increased efficiency by allowing the creation of more controlled and consistent flakes. It allowed Middle Paleolithic humans to create stone-tipped spears, which were the earliest composite tools, by hafting sharp, pointy stone flakes onto wooden shafts. In addition to improving tool making methods, the Middle Paleolithic also saw an improvement of the tools themselves that allowed access to a wider variety and amount of food sources. For example, microliths or small stone tools or points were invented around 70,000–65,000 BP and were essential to the invention of bows and spear throwers in the following Upper Paleolithic.
Harpoons were invented and used for the first time during the late Middle Paleolithic; the invention of these devices brought fish into human diets, which provided a hedge against starvation and a more abundant food supply. Thanks to their technology and their advanced social structures, Paleolithic groups such as the Neanderthals—who had a Middle Paleolithic level of technology—appear to have hunted large game just as well as Upper Paleolithic modern humans, and the Neanderthals in particular may have likewise hunted with projectile weapons. Nonetheless, Neanderthal use of projectile weapons in hunting occurred very rarely (or perhaps never), and the Neanderthals hunted large game animals mostly by ambushing them and attacking them with mêlée weapons such as thrusting spears rather than attacking them from a distance with projectile weapons.
During the Upper Paleolithic, further inventions were made, such as the net, the bolas, the spear thrower, the bow and arrow, and the oldest example of ceramic art, the Venus of Dolní Věstonice. Early dogs were domesticated sometime between 30,000 and 14,000 BP, presumably to aid in hunting. However, the earliest instances of successful domestication of dogs may be much more ancient than this. Evidence from canine DNA collected by Robert K. Wayne suggests that dogs may have been first domesticated in the late Middle Paleolithic around 100,000 BP or perhaps even earlier.
Archaeological evidence from the Dordogne region of France demonstrates that members of the European early Upper Paleolithic culture known as the Aurignacian used calendars. These were lunar calendars, used to document the phases of the moon. Genuine solar calendars did not appear until the Neolithic. Upper Paleolithic cultures were probably able to time the migration of game animals such as wild horses and deer. This ability allowed humans to become efficient hunters and to exploit a wide variety of game animals. Recent research indicates that the Neanderthals timed their hunts and the migrations of game animals long before the beginning of the Upper Paleolithic.
The social organization of the earliest Paleolithic (Lower Paleolithic) societies remains largely unknown to scientists, though Lower Paleolithic hominins such as "Homo habilis" and "Homo erectus" are likely to have had more complex social structures than chimpanzee societies. Late Oldowan/Early Acheulean humans such as "Homo ergaster"/"Homo erectus" may have been the first people to invent central campsites or home bases and incorporate them into their foraging and hunting strategies like contemporary hunter-gatherers, possibly as early as 1.7 million years ago; however, the earliest solid evidence for the existence of home bases or central campsites (hearths and shelters) among humans only dates back to 500,000 years ago.
Similarly, scientists disagree whether Lower Paleolithic humans were largely monogamous or polygynous. In particular, the provisioning model suggests that bipedalism arose in pre-Paleolithic australopithecine societies as an adaptation to monogamous lifestyles; however, other researchers note that sexual dimorphism is more pronounced in Lower Paleolithic humans such as "Homo erectus" than in modern humans, who are less polygynous than other primates, which suggests that Lower Paleolithic humans had a largely polygynous lifestyle, because species with the most pronounced sexual dimorphism are the most likely to be polygynous.
Human societies from the Paleolithic to the early Neolithic farming tribes lived without states and organized governments. For most of the Lower Paleolithic, human societies were possibly more hierarchical than their Middle and Upper Paleolithic descendants, and probably were not grouped into bands, though during the end of the Lower Paleolithic, the latest populations of the hominin "Homo erectus" may have begun living in small-scale (possibly egalitarian) bands similar to both Middle and Upper Paleolithic societies and modern hunter-gatherers.
Middle Paleolithic societies, unlike Lower Paleolithic and early Neolithic ones, consisted of bands that ranged from 20–30 or 25–100 members and were usually nomadic. These bands were formed by several families. Bands sometimes joined together into larger "macrobands" for activities such as acquiring mates and celebrations, or where resources were abundant. By the end of the Paleolithic era, people began to settle down into permanent locations and to rely on agriculture for sustenance in many locations. Much evidence exists that humans took part in long-distance trade between bands for rare commodities (such as ochre, which was often used for religious purposes such as ritual) and raw materials, as early as 120,000 years ago in the Middle Paleolithic. Inter-band trade may have appeared during the Middle Paleolithic because trade between bands would have helped ensure their survival by allowing them to exchange resources and commodities such as raw materials during times of relative scarcity (i.e. famine, drought). As in modern hunter-gatherer societies, individuals in Paleolithic societies may have been subordinate to the band as a whole. Both Neanderthals and modern humans took care of the elderly members of their societies during the Middle and Upper Paleolithic.
Some sources claim that most Middle and Upper Paleolithic societies were possibly fundamentally egalitarian and may have rarely or never engaged in organized violence between groups (i.e. war).
Some Upper Paleolithic societies in resource-rich environments (such as societies in Sungir, in what is now Russia) may have had more complex and hierarchical organization (such as tribes with a pronounced hierarchy and a somewhat formal division of labor) and may have engaged in endemic warfare. Some argue that there was no formal leadership during the Middle and Upper Paleolithic. Like contemporary egalitarian hunter-gatherers such as the Mbuti pygmies, societies may have made decisions by communal consensus decision making rather than by appointing permanent rulers such as chiefs and monarchs. Nor was there a formal division of labor during the Paleolithic. Each member of the group was skilled at all tasks essential to survival, regardless of individual abilities. Theories to explain the apparent egalitarianism have arisen, notably the Marxist concept of primitive communism. Christopher Boehm (1999) has hypothesized that egalitarianism may have evolved in Paleolithic societies because of a need to distribute resources such as food and meat equally to avoid famine and ensure a stable food supply. Raymond C. Kelly speculates that the relative peacefulness of Middle and Upper Paleolithic societies resulted from a low population density, cooperative relationships between groups such as reciprocal exchange of commodities and collaboration on hunting expeditions, and because the invention of projectile weapons such as throwing spears provided less incentive for war, because they increased the damage done to the attacker and decreased the relative amount of territory attackers could gain. However, other sources claim that most Paleolithic groups may have been larger, more complex, sedentary and warlike than most contemporary hunter-gatherer societies, due to occupying more resource-abundant areas than most modern hunter-gatherers who have been pushed into more marginal habitats by agricultural societies.
Anthropologists have typically assumed that in Paleolithic societies, women were responsible for gathering wild plants and firewood, and men were responsible for hunting and scavenging dead animals. However, analogies to existent hunter-gatherer societies such as the Hadza people and the Aboriginal Australians suggest that the sexual division of labor in the Paleolithic was relatively flexible. Men may have participated in gathering plants, firewood and insects, and women may have procured small game animals for consumption and assisted men in driving herds of large game animals (such as woolly mammoths and deer) off cliffs. Additionally, recent research by anthropologist and archaeologist Steven Kuhn from the University of Arizona has been argued to support the idea that this division of labor did not exist prior to the Upper Paleolithic and was invented relatively recently in human pre-history. Sexual division of labor may have developed to allow humans to acquire food and other resources more efficiently. Possibly there was approximate parity between men and women during the Middle and Upper Paleolithic, and that period may have been the most gender-equal time in human history. Archaeological evidence from art and funerary rituals indicates that a number of individual women enjoyed seemingly high status in their communities, and it is likely that both sexes participated in decision making. The earliest known Paleolithic shaman was female. Jared Diamond suggests that the status of women declined with the adoption of agriculture because women in farming societies typically have more pregnancies and are expected to do more demanding work than women in hunter-gatherer societies. Like most contemporary hunter-gatherer societies, Paleolithic and Mesolithic groups probably followed mostly matrilineal and ambilineal descent patterns; patrilineal descent patterns were probably rarer than in the Neolithic.
Early examples of artistic expression, such as the Venus of Tan-Tan and the patterns found on elephant bones from Bilzingsleben in Thuringia, may have been produced by Acheulean tool users such as "Homo erectus" prior to the start of the Middle Paleolithic period. However, the earliest undisputed evidence of art during the Paleolithic comes from Middle Paleolithic/Middle Stone Age sites such as Blombos Cave in South Africa, in the form of bracelets, beads, rock art, and ochre used as body paint and perhaps in ritual. Undisputed evidence of art only becomes common in the Upper Paleolithic.
Lower Paleolithic Acheulean tool users, according to Robert G. Bednarik, began to engage in symbolic behavior such as art around 850,000 BP. They decorated themselves with beads and collected exotic stones for aesthetic rather than utilitarian qualities. According to him, traces of the pigment ochre from late Lower Paleolithic Acheulean archaeological sites suggest that Acheulean societies, like later Upper Paleolithic societies, collected and used ochre to create rock art. Nevertheless, it is also possible that the ochre traces found at Lower Paleolithic sites are naturally occurring.
Upper Paleolithic humans produced works of art such as cave paintings, Venus figurines, animal carvings, and rock paintings. Upper Paleolithic art can be divided into two broad categories: figurative art such as cave paintings that clearly depicts animals (or more rarely humans); and nonfigurative, which consists of shapes and symbols. Cave paintings have been interpreted in a number of ways by modern archaeologists. The earliest explanation, by the prehistorian Abbe Breuil, interpreted the paintings as a form of magic designed to ensure a successful hunt. However, this hypothesis fails to explain the existence of animals such as saber-toothed cats and lions, which were not hunted for food, and the existence of half-human, half-animal beings in cave paintings. The anthropologist David Lewis-Williams has suggested that Paleolithic cave paintings were indications of shamanistic practices, because the paintings of half-human, half-animal beings and the remoteness of the caves are reminiscent of modern hunter-gatherer shamanistic practices. Symbol-like images are more common in Paleolithic cave paintings than are depictions of animals or humans, and unique symbolic patterns might have been trademarks that represent different Upper Paleolithic ethnic groups. Venus figurines have evoked similar controversy. Archaeologists and anthropologists have described the figurines as representations of goddesses, pornographic imagery, apotropaic amulets used for sympathetic magic, and even as self-portraits of women themselves.
R. Dale Guthrie has studied not only the most artistic and publicized paintings, but also a variety of lower-quality art and figurines, and he identifies a wide range of skill and ages among the artists. He also points out that the main themes in the paintings and other artifacts (powerful beasts, risky hunting scenes and the over-sexual representation of women) are to be expected in the fantasies of adolescent males during the Upper Paleolithic.
The "Venus" figurines have been theorized, though not universally, to represent a mother goddess; the abundance of such female imagery has inspired the theory that religion and society in Paleolithic (and later Neolithic) cultures were primarily interested in, and may have been directed by, women. Adherents of the theory include archaeologist Marija Gimbutas and feminist scholar Merlin Stone, the author of the 1976 book "When God Was a Woman". Other explanations for the purpose of the figurines have been proposed, such as Catherine McCoid and LeRoy McDermott's hypothesis that they were self-portraits of woman artists and R. Dale Guthrie's hypothesis that they served as "stone age pornography".
The origins of music during the Paleolithic are unknown. The earliest forms of music probably did not use musical instruments other than the human voice or natural objects such as rocks. This early music would not have left an archaeological footprint. Music may have developed from rhythmic sounds produced by daily chores, for example, cracking open nuts with stones. Maintaining a rhythm while working may have helped people to become more efficient at daily activities. An alternative theory originally proposed by Charles Darwin explains that music may have begun as a hominin mating strategy. Bird and other animal species produce music such as calls to attract mates. This hypothesis is generally less accepted than the previous hypothesis, but nonetheless provides a possible alternative.
Upper Paleolithic (and possibly Middle Paleolithic) humans used flute-like bone pipes as musical instruments, and music may have played a large role in the religious lives of Upper Paleolithic hunter-gatherers. As with modern hunter-gatherer societies, music may have been used in ritual or to help induce trances. In particular, it appears that animal skin drums may have been used in religious events by Upper Paleolithic shamans, as shown by the remains of drum-like instruments from some Upper Paleolithic graves of shamans and the ethnographic record of contemporary hunter-gatherer shamanic and ritual practices.
According to James B. Harrod, humankind first developed religious and spiritual beliefs during the Middle Paleolithic or Upper Paleolithic. Controversial scholars of prehistoric religion and anthropology, James Harrod and Vincent W. Fallio, have recently proposed that religion and spirituality (and art) may have first arisen in pre-Paleolithic chimpanzees or Early Lower Paleolithic (Oldowan) societies. According to Fallio, the common ancestor of chimpanzees and humans experienced altered states of consciousness and partook in ritual, and ritual was used in their societies to strengthen social bonding and group cohesion.
Middle Paleolithic humans' use of burials at sites such as Krapina, Croatia, and Qafzeh, Israel, has led some anthropologists and archaeologists, such as Philip Lieberman, to believe that Middle Paleolithic humans may have possessed a belief in an afterlife and a "concern for the dead that transcends daily life". Cut marks on Neanderthal bones from various sites, such as Combe-Grenal and Abri Moula in France, suggest that the Neanderthals—like some contemporary human cultures—may have practiced ritual defleshing for (presumably) religious reasons. According to recent archaeological findings from "Homo heidelbergensis" sites in Atapuerca, humans may have begun burying their dead much earlier, during the late Lower Paleolithic; but this theory is widely questioned in the scientific community.
Likewise, some scientists have proposed that Middle Paleolithic societies such as Neanderthal societies may also have practiced the earliest form of totemism or animal worship, in addition to their (presumably religious) burial of the dead. In particular, Emil Bächler suggested (based on archaeological evidence from Middle Paleolithic caves) that a bear cult was widespread among Middle Paleolithic Neanderthals. A claim that evidence of Middle Paleolithic animal worship was found in the Tsodilo Hills in the African Kalahari desert has been denied by the original investigators of the site. Animal cults in the Upper Paleolithic, such as the bear cult, may have had their origins in these hypothetical Middle Paleolithic animal cults. Animal worship during the Upper Paleolithic was intertwined with hunting rites. For instance, archaeological evidence from art and bear remains reveals that the bear cult apparently involved a type of sacrificial bear ceremonialism, in which a bear was shot with arrows, finished off by a shot or thrust in the lungs, and ritually worshipped near a clay bear statue covered by a bear fur, with the skull and the body of the bear buried separately. Barbara Ehrenreich controversially theorizes that the sacrificial hunting rites of the Upper Paleolithic (and by extension Paleolithic cooperative big-game hunting) gave rise to war or warlike raiding during the following Epipaleolithic and Mesolithic or late Upper Paleolithic.
The existence of anthropomorphic images and half-human, half-animal images in the Upper Paleolithic may further indicate that Upper Paleolithic humans were the first people to believe in a pantheon of gods or supernatural beings, though such images may instead indicate shamanistic practices similar to those of contemporary tribal societies. The earliest known undisputed burial of a shaman (and by extension the earliest undisputed evidence of shamans and shamanic practices) dates back to the early Upper Paleolithic era in what is now the Czech Republic. However, during the early Upper Paleolithic it was probably more common for all members of the band to participate equally and fully in religious ceremonies, in contrast to the religious traditions of later periods when religious authorities and part-time ritual specialists such as shamans, priests and medicine men were relatively common and integral to religious life. Additionally, it is also possible that Upper Paleolithic religions, like contemporary and historical animistic and polytheistic religions, believed in the existence of a single creator deity in addition to other supernatural beings such as animistic spirits.
Religion was possibly apotropaic; specifically, it may have involved sympathetic magic. The Venus figurines, which are abundant in the Upper Paleolithic archaeological record, provide an example of possible Paleolithic sympathetic magic, as they may have been used for ensuring success in hunting and to bring about fertility of the land and women. The Upper Paleolithic Venus figurines have sometimes been explained as depictions of an earth goddess similar to Gaia, or as representations of a goddess who is the ruler or mother of the animals. James Harrod has described them as representative of female (and male) shamanistic spiritual transformation processes.
Paleolithic hunting and gathering people ate varying proportions of vegetables (including tubers and roots), fruit, seeds (including nuts and wild grass seeds) and insects, meat, fish, and shellfish. However, there is little direct evidence of the relative proportions of plant and animal foods. Although the term "paleolithic diet", without references to a specific timeframe or locale, is sometimes used with an implication that most humans shared a certain diet during the entire era, that is not entirely accurate. The Paleolithic was an extended period of time, during which multiple technological advances were made, many of which had impact on human dietary structure. For example, humans probably did not possess the control of fire until the Middle Paleolithic, or tools necessary to engage in extensive fishing. On the other hand, both these technologies are generally agreed to have been widely available to humans by the end of the Paleolithic (consequently, allowing humans in some regions of the planet to rely heavily on fishing and hunting). In addition, the Paleolithic involved a substantial geographical expansion of human populations. During the Lower Paleolithic, ancestors of modern humans are thought to have been constrained to Africa east of the Great Rift Valley. During the Middle and Upper Paleolithic, humans greatly expanded their area of settlement, reaching ecosystems as diverse as New Guinea and Alaska, and adapting their diets to whatever local resources were available.
Another view is that until the Upper Paleolithic, humans were frugivores (fruit eaters) who supplemented their meals with carrion, eggs, and small prey such as baby birds and mussels, and only on rare occasions managed to kill and consume big game such as antelopes. This view is supported by studies of higher apes, particularly chimpanzees. Chimpanzees are the closest to humans genetically, sharing more than 96% of their DNA code with humans, and their digestive tract is functionally very similar to that of humans. Chimpanzees are primarily frugivores, but they could and would consume and digest animal flesh, given the opportunity. In general, their actual diet in the wild is about 95% plant-based, with the remaining 5% filled with insects, eggs, and baby animals. In some ecosystems, however, chimpanzees are predatory, forming parties to hunt monkeys. Some comparative studies of human and higher primate digestive tracts do suggest that humans have evolved to obtain greater amounts of calories from sources such as animal foods, allowing them to shrink the size of the gastrointestinal tract relative to body mass and to increase the brain mass instead.
Anthropologists have diverse opinions about the proportions of plant and animal foods consumed. Just as with still-existing hunter-gatherers, diets varied considerably among different groups, and also over this vast span of time. Some Paleolithic hunter-gatherers consumed a significant amount of meat and possibly obtained most of their food from hunting, while others appear to have had a primarily plant-based diet. Most, if not all, are believed to have been opportunistic omnivores. One hypothesis is that carbohydrate tubers (plant underground storage organs) may have been eaten in high amounts by pre-agricultural humans. It is thought that the Paleolithic diet included as much as per day of fruit and vegetables. The relative proportions of plant and animal foods in the diets of Paleolithic people often varied between regions, with more meat being necessary in colder regions (which were not populated by anatomically modern humans until BP). It is generally agreed that many modern hunting and fishing tools, such as fish hooks, nets, bows, and poisons, were not introduced until the Upper Paleolithic and possibly even the Neolithic. The only hunting tools widely available to humans during any significant part of the Paleolithic were hand-held spears and harpoons. There is evidence of Paleolithic people killing and eating seals and elands as far back as BP. On the other hand, buffalo bones found in African caves from the same period are typically of very young or very old individuals, and there is no evidence that pigs, elephants, or rhinos were hunted by humans at the time.
Paleolithic peoples suffered less famine and malnutrition than the Neolithic farming tribes that followed them. This was partly because Paleolithic hunter-gatherers accessed a wider variety of natural foods, which allowed them a more nutritious diet and a decreased risk of famine. Many of the famines experienced by Neolithic (and some modern) farmers were caused or amplified by their dependence on a small number of crops. It is thought that wild foods can have a significantly different nutritional profile than cultivated foods. The greater amount of meat obtained by hunting big game animals in Paleolithic diets than in Neolithic diets may have also allowed Paleolithic hunter-gatherers to enjoy a more nutritious diet than Neolithic agriculturalists. It has been argued that the shift from hunting and gathering to agriculture resulted in an increasing focus on a limited variety of foods, with meat likely taking a back seat to plants. It is also unlikely that Paleolithic hunter-gatherers were affected by modern diseases of affluence such as type 2 diabetes, coronary heart disease, and cerebrovascular disease, because they ate mostly lean meats and plants and frequently engaged in intense physical activity, and because the average lifespan was shorter than the age of common onset of these conditions.
Large-seeded legumes were part of the human diet long before the Neolithic Revolution, as evident from archaeobotanical finds from the Mousterian layers of Kebara Cave, in Israel. There is evidence suggesting that Paleolithic societies were gathering wild cereals for food use at least as early as 30,000 years ago. However, seeds—such as grains and beans—were rarely eaten and never in large quantities on a daily basis. Recent archaeological evidence also indicates that winemaking may have originated in the Paleolithic, when early humans drank the juice of naturally fermented wild grapes from animal-skin pouches. Paleolithic humans consumed animal organ meats, including the livers, kidneys, and brains. Upper Paleolithic cultures appear to have had significant knowledge about plants and herbs and may have, albeit very rarely, practiced rudimentary forms of horticulture. In particular, bananas and tubers may have been cultivated as early as 25,000 BP in southeast Asia. Late Upper Paleolithic societies also appear to have occasionally practiced pastoralism and animal husbandry, presumably for dietary reasons. For instance, some European late Upper Paleolithic cultures domesticated and raised reindeer, presumably for their meat or milk, as early as 14,000 BP. Humans also probably consumed hallucinogenic plants during the Paleolithic. The Aboriginal Australians have been consuming a variety of native animal and plant foods, called bushfood, for an estimated 60,000 years, since the Middle Paleolithic.
In February 2019, scientists reported evidence, based on isotope studies, that at least some Neanderthals may have eaten meat. People during the Middle Paleolithic, such as the Neanderthals and Middle Paleolithic Homo sapiens in Africa, began to catch shellfish for food as revealed by shellfish cooking in Neanderthal sites in Italy about 110,000 years ago and in Middle Paleolithic "Homo sapiens" sites at Pinnacle Point, Africa around 164,000 BP. Although fishing only became common during the Upper Paleolithic, fish have been part of human diets long before the dawn of the Upper Paleolithic and have certainly been consumed by humans since at least the Middle Paleolithic. For example, the Middle Paleolithic "Homo sapiens" in the region now occupied by the Democratic Republic of the Congo hunted large -long catfish with specialized barbed fishing points as early as 90,000 years ago. The invention of fishing allowed some Upper Paleolithic and later hunter-gatherer societies to become sedentary or semi-nomadic, which altered their social structures. Example societies are the Lepenski Vir as well as some contemporary hunter-gatherers, such as the Tlingit. In some instances (at least the Tlingit), they developed social stratification, slavery, and complex social structures such as chiefdoms.
Anthropologists such as Tim White suggest that cannibalism was common in human societies prior to the beginning of the Upper Paleolithic, based on the large amount of "butchered human" bones found in Neanderthal and other Lower/Middle Paleolithic sites. Cannibalism in the Lower and Middle Paleolithic may have occurred because of food shortages. Alternatively, it may have occurred for religious reasons, which would coincide with the development of religious practices thought to have occurred during the Upper Paleolithic. Nonetheless, it remains possible that Paleolithic societies never practiced cannibalism, and that the damage to recovered human bones was either the result of excarnation or predation by carnivores such as saber-toothed cats, lions, and hyenas.
A modern-day diet known as the Paleolithic diet exists, based on restricting consumption to the foods presumed to be available to anatomically modern humans prior to the advent of settled agriculture.
Presidential Medal of Freedom
The Presidential Medal of Freedom is an award bestowed by the president of the United States to recognize people who have made "an especially meritorious contribution to the security or national interests of the United States, world peace, cultural or other significant public or private endeavors". The Presidential Medal of Freedom and the Congressional Gold Medal are the highest civilian awards of the United States. The award is not limited to U.S. citizens and, while it is a civilian award, it can also be awarded to military personnel and worn on the uniform.
It was established in 1963 by President John F. Kennedy, superseding the Medal of Freedom that was established by President Harry S. Truman in 1945 to honor civilian service during World War II.
Although similar in name to the Medal of Freedom, the Presidential Medal of Freedom is much closer in meaning and precedence to the Medal for Merit, and it is currently the supreme civilian decoration in precedence in the United States. The earlier Medal of Freedom, by contrast, was inferior in precedence to the Medal for Merit and could be awarded by any of three Cabinet secretaries, whereas the Medal for Merit was awarded by the president, as is the Presidential Medal of Freedom.
President John F. Kennedy established the current decoration in 1963 through , with unique and distinctive insignia, vastly expanded purpose, and far higher prestige. It was the first U.S. civilian neck decoration and, in the grade of Awarded With Distinction, is the only U.S. sash and star decoration (the Chief Commander degree of the Legion of Merit—which may only be awarded to foreign heads of state—is a star decoration but without a sash). The executive order calls for the medal to be awarded annually on or around July 4, and at other convenient times as chosen by the president, but it has not been awarded every year (e.g., 2001, 2010). Recipients are selected by the president, either on the president's own initiative or based on recommendations. The order establishing the medal also expanded the size and the responsibilities of the Distinguished Civilian Service Awards Board so it could serve as a major source of such recommendations.
The medal may be awarded to an individual more than once; Colin Powell received two awards, his second being With Distinction; Ellsworth Bunker received both of his awards With Distinction. It may also be awarded posthumously (after the death of the recipient); examples (in chronological order) include John Wayne, John F. Kennedy, Pope John XXIII, Lyndon Johnson, Paul "Bear" Bryant, Thurgood Marshall, Cesar Chavez, Walter Reuther, Roberto Clemente, Jack Kemp, Harvey Milk, James Chaney, Andrew Goodman, Michael Schwerner, Elouise Cobell, Grace Hopper, Antonin Scalia, Elvis Presley and Babe Ruth. (Chaney, Goodman and Schwerner, civil rights workers murdered in 1964, were awarded their medals in 2014, 50 years later.)
In 2015, in response to questions about the medal awarded to Bill Cosby in 2002, President Barack Obama stated that there was no precedent to revoke Presidential Medals of Freedom.
The badge of the Presidential Medal of Freedom is in the form of a golden star with white enamel, with a red enamel pentagon behind it; the central disc bears thirteen gold stars on a blue enamel background (taken from the Great Seal of the United States) within a golden ring. Golden North American bald eagles with spread wings stand between the points of the star. It is worn around the neck on a blue ribbon with white edge stripes.
A special rarely given grade of the medal, known as the Presidential Medal of Freedom with Distinction, has a larger execution of the same medal design worn as a star on the left chest along with a sash over the right shoulder (similar to how the insignia of a Grand Cross might be worn), with its rosette (blue with white edge, bearing the central disc of the medal at its center) resting on the left hip. When the medal With Distinction is awarded, the star is presented descending from a neck ribbon and can be identified by its larger size than the standard medal (compare the size of medals in pictures below).
Both medals may also be worn in miniature form on a ribbon on the left chest, with a silver North American bald eagle with spread wings on the ribbon, or a golden North American bald eagle for a medal awarded With Distinction. In addition, the medal can be accompanied by a service ribbon for wear on military service uniform, a miniature medal pendant for wear on mess dress or civilian formal wear, and a lapel badge for wear on civilian clothes (all shown in the accompanying photograph of the full presentation set).
Planet
A planet is an astronomical body orbiting a star or stellar remnant that is massive enough to be rounded by its own gravity, is not massive enough to cause thermonuclear fusion, and has cleared its neighbouring region of planetesimals.
The term "planet" is ancient, with ties to history, astrology, science, mythology, and religion. Five planets in the Solar System are visible to the naked eye. These were regarded by many early cultures as divine, or as emissaries of deities. As scientific knowledge advanced, human perception of the planets changed, incorporating a number of disparate objects. In 2006, the International Astronomical Union (IAU) officially adopted a resolution defining planets within the Solar System. This definition is controversial because it excludes many objects of planetary mass based on where or what they orbit. Although eight of the planetary bodies discovered before 1950 remain "planets" under the current definition, some celestial bodies, such as Ceres, Pallas, Juno and Vesta (each an object in the solar asteroid belt), and Pluto (the first trans-Neptunian object discovered), that were once considered "planets" by the scientific community, are no longer viewed as planets under the current definition of "planet".
The planets were thought by Ptolemy to orbit Earth in deferent and epicycle motions. Although the idea that the planets orbited the Sun had been suggested many times, it was not until the 17th century that this view was supported by evidence from the first telescopic astronomical observations, performed by Galileo Galilei. About the same time, by careful analysis of pre-telescopic observational data collected by Tycho Brahe, Johannes Kepler found the planets' orbits were elliptical rather than circular. As observational tools improved, astronomers saw that, like Earth, each of the planets rotated around an axis tilted with respect to its orbital pole, and some shared such features as ice caps and seasons. Since the dawn of the Space Age, close observation by space probes has found that Earth and the other planets share characteristics such as volcanism, hurricanes, tectonics, and even hydrology.
Planets in the Solar System are divided into two main types: large low-density giant planets, and smaller rocky terrestrials. There are eight planets in the Solar System. In order of increasing distance from the Sun, they are the four terrestrials, Mercury, Venus, Earth, and Mars, then the four giant planets, Jupiter, Saturn, Uranus, and Neptune. Six of the planets are orbited by one or more natural satellites.
Several thousands of planets around other stars ("extrasolar planets" or "exoplanets") have been discovered in the Milky Way. As of , known extrasolar planets in planetary systems (including multiple planetary systems), ranging in size from just above the size of the Moon to gas giants about twice as large as Jupiter have been discovered, out of which more than 100 planets are the same size as Earth, nine of which are at the same relative distance from their star as Earth from the Sun, i.e. in the circumstellar habitable zone. On December 20, 2011, the Kepler Space Telescope team reported the discovery of the first Earth-sized extrasolar planets, Kepler-20e and Kepler-20f, orbiting a Sun-like star, Kepler-20. A 2012 study, analyzing gravitational microlensing data, estimates an average of at least 1.6 bound planets for every star in the Milky Way.
Around one in five Sun-like stars is thought to have an Earth-sized planet in its habitable zone.
The idea of planets has evolved over its history, from the divine lights of antiquity to the earthly objects of the scientific age. The concept has expanded to include worlds not only in the Solar System, but in hundreds of other extrasolar systems. The ambiguities inherent in defining planets have led to much scientific controversy.
The five classical planets of the Solar System, being visible to the naked eye, have been known since ancient times and have had a significant impact on mythology, religious cosmology, and ancient astronomy. In ancient times, astronomers noted how certain lights moved across the sky, as opposed to the "fixed stars", which maintained a constant relative position in the sky. Ancient Greeks called these lights (, "wandering stars") or simply (, "wanderers"), from which today's word "planet" was derived. In ancient Greece, China, Babylon, and indeed all pre-modern civilizations, it was almost universally believed that Earth was the center of the Universe and that all the "planets" circled Earth. The reasons for this perception were that stars and planets appeared to revolve around Earth each day and the apparently common-sense perceptions that Earth was solid and stable and that it was not moving but at rest.
The first civilization known to have a functional theory of the planets was that of the Babylonians, who lived in Mesopotamia in the first and second millennia BC. The oldest surviving planetary astronomical text is the Babylonian Venus tablet of Ammisaduqa, a 7th-century BC copy of a list of observations of the motions of the planet Venus, that probably dates as early as the second millennium BC. The MUL.APIN is a pair of cuneiform tablets dating from the 7th century BC that lays out the motions of the Sun, Moon, and planets over the course of the year. The Babylonian astrologers also laid the foundations of what would eventually become Western astrology. The "Enuma anu enlil", written during the Neo-Assyrian period in the 7th century BC, comprises a list of omens and their relationships with various celestial phenomena including the motions of the planets. Venus, Mercury, and the outer planets Mars, Jupiter, and Saturn were all identified by Babylonian astronomers. These would remain the only known planets until the invention of the telescope in early modern times.
The ancient Greeks initially did not attach as much significance to the planets as the Babylonians. The Pythagoreans, in the 6th and 5th centuries BC appear to have developed their own independent planetary theory, which consisted of the Earth, Sun, Moon, and planets revolving around a "Central Fire" at the center of the Universe. Pythagoras or Parmenides is said to have been the first to identify the evening star (Hesperos) and morning star (Phosphoros) as one and the same (Aphrodite, Greek corresponding to Latin Venus), though this had long been known by the Babylonians. In the 3rd century BC, Aristarchus of Samos proposed a heliocentric system, according to which Earth and the planets revolved around the Sun. The geocentric system remained dominant until the Scientific Revolution.
By the 1st century BC, during the Hellenistic period, the Greeks had begun to develop their own mathematical schemes for predicting the positions of the planets. These schemes, which were based on geometry rather than the arithmetic of the Babylonians, would eventually eclipse the Babylonians' theories in complexity and comprehensiveness, and account for most of the astronomical movements observed from Earth with the naked eye. These theories would reach their fullest expression in the "Almagest" written by Ptolemy in the 2nd century CE. So complete was the domination of Ptolemy's model that it superseded all previous works on astronomy and remained the definitive astronomical text in the Western world for 13 centuries. To the Greeks and Romans there were seven known planets, each presumed to be circling Earth according to the complex laws laid out by Ptolemy. They were, in increasing order from Earth (in Ptolemy's order and using modern names): the Moon, Mercury, Venus, the Sun, Mars, Jupiter, and Saturn.
Cicero, in his "De Natura Deorum", enumerated the planets known during the 1st century BCE using the names for them in use at the time:
In 499 CE, the Indian astronomer Aryabhata propounded a planetary model that explicitly incorporated Earth's rotation about its axis, which he explains as the cause of what appears to be an apparent westward motion of the stars. He also believed that the orbits of planets are elliptical.
Aryabhata's followers were particularly strong in South India, where his principles of the diurnal rotation of Earth, among others, were followed and a number of secondary works were based on them.
In 1500, Nilakantha Somayaji of the Kerala school of astronomy and mathematics, in his "Tantrasangraha", revised Aryabhata's model. In his "Aryabhatiyabhasya", a commentary on Aryabhata's "Aryabhatiya", he developed a planetary model where Mercury, Venus, Mars, Jupiter and Saturn orbit the Sun, which in turn orbits Earth, similar to the Tychonic system later proposed by Tycho Brahe in the late 16th century. Most astronomers of the Kerala school who followed him accepted his planetary model.
In the 11th century, the transit of Venus was observed by Avicenna, who established that Venus was, at least sometimes, below the Sun. In the 12th century, Ibn Bajjah observed "two planets as black spots on the face of the Sun", which was later identified as a transit of Mercury and Venus by the Maragha astronomer Qotb al-Din Shirazi in the 13th century. Ibn Bajjah could not have observed a transit of Venus, because none occurred in his lifetime.
With the advent of the Scientific Revolution, use of the term "planet" changed from something that moved across the sky (in relation to the star field); to a body that orbited Earth (or that was believed to do so at the time); and by the 18th century to something that directly orbited the Sun when the heliocentric model of Copernicus, Galileo and Kepler gained sway.
Thus, Earth became included in the list of planets, whereas the Sun and Moon were excluded. At first, when the first satellites of Jupiter and Saturn were discovered in the 17th century, the terms "planet" and "satellite" were used interchangeably – although the latter would gradually become more prevalent in the following century. Until the mid-19th century, the number of "planets" rose rapidly because any newly discovered object directly orbiting the Sun was listed as a planet by the scientific community.
In the 19th century astronomers began to realize that recently discovered bodies that had been classified as planets for almost half a century (such as Ceres, Pallas, Juno, and Vesta) were very different from the traditional ones. These bodies shared the same region of space between Mars and Jupiter (the asteroid belt), and had a much smaller mass; as a result they were reclassified as "asteroids". In the absence of any formal definition, a "planet" came to be understood as any "large" body that orbited the Sun. Because there was a dramatic size gap between the asteroids and the planets, and the spate of new discoveries seemed to have ended after the discovery of Neptune in 1846, there was no apparent need to have a formal definition.
In the 20th century, Pluto was discovered. After initial observations led to the belief that it was larger than Earth, the object was immediately accepted as the ninth planet. Further monitoring found the body was actually much smaller: in 1936, Ray Lyttleton suggested that Pluto may be an escaped satellite of Neptune, and Fred Whipple suggested in 1964 that Pluto may be a comet. As it was still larger than all known asteroids, and the population of dwarf planets and other trans-Neptunian objects was not well observed, it kept its status until 2006.
In 1992, astronomers Aleksander Wolszczan and Dale Frail announced the discovery of planets around a pulsar, PSR B1257+12. This discovery is generally considered to be the first definitive detection of a planetary system around another star. Then, on October 6, 1995, Michel Mayor and Didier Queloz of the Geneva Observatory announced the first definitive detection of an exoplanet orbiting an ordinary main-sequence star (51 Pegasi).
The discovery of extrasolar planets led to another ambiguity in defining a planet: the point at which a planet becomes a star. Many known extrasolar planets are many times the mass of Jupiter, approaching that of stellar objects known as brown dwarfs. Brown dwarfs are generally considered stars due to their ability to fuse deuterium, a heavier isotope of hydrogen. Although objects more massive than 75 times that of Jupiter fuse hydrogen, objects of only 13 Jupiter masses can fuse deuterium. Deuterium is quite rare, and most brown dwarfs would have ceased fusing deuterium long before their discovery, making them effectively indistinguishable from supermassive planets.
With the discovery during the latter half of the 20th century of more objects within the Solar System and large objects around other stars, disputes arose over what should constitute a planet. There were particular disagreements over whether an object should be considered a planet if it was part of a distinct population such as a belt, or if it was large enough to generate energy by the thermonuclear fusion of deuterium.
A growing number of astronomers argued for Pluto to be declassified as a planet, because many similar objects approaching its size had been found in the same region of the Solar System (the Kuiper belt) during the 1990s and early 2000s. Pluto was found to be just one small body in a population of thousands.
Some of them, such as Quaoar, Sedna, and Eris, were heralded in the popular press as the tenth planet, but failed to receive widespread scientific recognition. The announcement of Eris in 2005, an object then thought of as 27% more massive than Pluto, created the necessity and public desire for an official definition of a planet.
Acknowledging the problem, the IAU set about creating the definition of planet, and produced one in August 2006. The number of planets dropped to the eight significantly larger bodies that had cleared their orbit (Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune), and a new class of dwarf planets was created, initially containing three objects (Ceres, Pluto and Eris).
There is no official definition of extrasolar planets. In 2003, the International Astronomical Union (IAU) Working Group on Extrasolar Planets issued a position statement, but this position statement was never proposed as an official IAU resolution and was never voted on by IAU members. The position statement incorporates the following guidelines, mostly focused upon the boundary between planets and brown dwarfs:
This working definition has since been widely used by astronomers when publishing discoveries of exoplanets in academic journals. Although temporary, it remains an effective working definition until a more permanent one is formally adopted. It does not address the dispute over the lower mass limit, and so it steered clear of the controversy regarding objects within the Solar System. This definition also makes no comment on the planetary status of objects orbiting brown dwarfs, such as 2M1207b.
One definition of a sub-brown dwarf is a planet-mass object that formed through cloud collapse rather than accretion. This formation distinction between a sub-brown dwarf and a planet is not universally agreed upon; astronomers are divided into two camps as to whether to consider the formation process of a planet as part of its division in classification. One reason for the dissent is that often it may not be possible to determine the formation process. For example, a planet formed by accretion around a star may get ejected from the system to become free-floating, and likewise a sub-brown dwarf that formed on its own in a star cluster through cloud collapse may get captured into orbit around a star.
One study suggests that objects above formed through gravitational instability and should not be thought of as planets.
The 13 Jupiter-mass cutoff represents an average mass rather than a precise threshold value. Large objects will fuse most of their deuterium and smaller ones will fuse only a little, and the 13 value is somewhere in between. In fact, calculations show that an object fuses 50% of its initial deuterium content when the total mass ranges between 12 and 14 . The amount of deuterium fused depends not only on mass but also on the composition of the object, on the amount of helium and deuterium present. As of 2011 the Extrasolar Planets Encyclopaedia included objects up to 25 Jupiter masses, saying, "The fact that there is no special feature around in the observed mass spectrum reinforces the choice to forget this mass limit".
As of 2016 this limit was increased to 60 Jupiter masses based on a study of mass–density relationships. The Exoplanet Data Explorer includes objects up to 24 Jupiter masses with the advisory: "The 13 Jupiter-mass distinction by the IAU Working Group is physically unmotivated for planets with rocky cores, and observationally problematic due to the sin i ambiguity."
The NASA Exoplanet Archive includes objects with a mass (or minimum mass) equal to or less than 30 Jupiter masses.
Another criterion for separating planets and brown dwarfs, rather than deuterium fusion, formation process or location, is whether the core pressure is dominated by coulomb pressure or electron degeneracy pressure.
The matter of the lower limit was addressed during the 2006 meeting of the IAU's General Assembly. After much debate and one failed proposal, a large majority of those remaining at the meeting voted to pass a resolution. The 2006 resolution defines planets within the Solar System as follows:
Under this definition, the Solar System is considered to have eight planets. Bodies that fulfill the first two conditions but not the third (such as Ceres, Pluto, and Eris) are classified as dwarf planets, provided they are not also natural satellites of other planets. Originally an IAU committee had proposed a definition that would have included a much larger number of planets as it did not include (c) as a criterion. After much discussion, it was decided via a vote that those bodies should instead be classified as dwarf planets.
This definition is based on theories of planetary formation, in which planetary embryos initially clear their orbital neighborhood of other smaller objects. As described by astronomer Steven Soter:
The 2006 IAU definition presents some challenges for exoplanets because the language is specific to the Solar System and because the criteria of roundness and orbital zone clearance are not presently observable. Astronomer Jean-Luc Margot proposed a mathematical criterion that determines whether an object can clear its orbit during the lifetime of its host star, based on the mass of the planet, its semimajor axis, and the mass of its host star. This formula produces a value π that is greater than 1 for planets. The eight known planets and all known exoplanets have π values above 100, while Ceres, Pluto, and Eris have π values of 0.1 or less. Objects with π values of 1 or more are also expected to be approximately spherical, so that objects that fulfill the orbital zone clearance requirement automatically fulfill the roundness requirement.
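Margot's criterion can be illustrated with a short numerical sketch. The functional form and constant used below (π ≈ k·m·M⁻⁵ᐟ²·a⁻⁹ᐟ⁸, with the planet mass m in Earth masses, the stellar mass M in solar masses, the semimajor axis a in astronomical units, and k ≈ 807) follow Margot's 2015 proposal, but the exact values here should be treated as assumptions rather than definitive:

```python
# Hedged sketch of Margot's orbit-clearing discriminant.
# Assumed form: pi = k * m * M**-2.5 * a**-1.125, with m in Earth
# masses, M in solar masses, a in AU, and k ~ 807 (approximate).
def planet_discriminant(m_earth, a_au, m_star_sun=1.0, k=807.0):
    return k * m_earth * m_star_sun**-2.5 * a_au**-1.125

earth = planet_discriminant(1.0, 1.0)      # far above 1: clears its orbit
pluto = planet_discriminant(0.0022, 39.5)  # far below 1: does not
print(earth, pluto)
```

Under these assumed values, Earth's π comes out in the hundreds while Pluto's is well below 0.1, consistent with the separation of two or more orders of magnitude described above.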
The table below lists Solar System bodies once considered to be planets.
Beyond the scientific community, Pluto still holds cultural significance for many in the general public due to its historical classification as a planet from 1930 to 2006.
The names for the planets in the Western world are derived from the naming practices of the Romans, which ultimately derive from those of the Greeks and the Babylonians. In ancient Greece, the two great luminaries the Sun and the Moon were called "Helios" and "Selene"; the farthest planet (Saturn) was called "Phainon", the shiner; followed by "Phaethon" (Jupiter), "bright"; the red planet (Mars) was known as "Pyroeis", the "fiery"; the brightest (Venus) was known as "Phosphoros", the light bringer; and the fleeting final planet (Mercury) was called "Stilbon", the gleamer. The Greeks also made each planet sacred to one among their pantheon of gods, the Olympians: Helios and Selene were the names of both planets and gods; Phainon was sacred to Cronus, the Titan who fathered the Olympians; Phaethon was sacred to Zeus, Cronus's son who deposed him as king; Pyroeis was given to Ares, son of Zeus and god of war; Phosphoros was ruled by Aphrodite, the goddess of love; and Hermes, messenger of the gods and god of learning and wit, ruled over Stilbon.
The Greek practice of grafting their gods' names onto the planets was almost certainly borrowed from the Babylonians. The Babylonians named Phosphoros after their goddess of love, "Ishtar"; Pyroeis after their god of war, "Nergal"; Stilbon after their god of wisdom, "Nabu"; and Phaethon after their chief god, "Marduk". There are too many concordances between Greek and Babylonian naming conventions for them to have arisen separately. The translation was not perfect: for instance, the Babylonian Nergal was a god of war, and thus the Greeks identified him with Ares, but unlike Ares, Nergal was also god of pestilence and the underworld.
Today, most people in the western world know the planets by names derived from the Olympian pantheon of gods. Although modern Greeks still use their ancient names for the planets, other European languages, because of the influence of the Roman Empire and, later, the Catholic Church, use the Roman (Latin) names rather than the Greek ones. The Romans, who, like the Greeks, were Indo-Europeans, shared with them a common pantheon under different names but lacked the rich narrative traditions that Greek poetic culture had given their gods. During the later period of the Roman Republic, Roman writers borrowed much of the Greek narratives and applied them to their own pantheon, to the point where they became virtually indistinguishable. When the Romans studied Greek astronomy, they gave the planets their own gods' names: "Mercurius" (for Hermes), "Venus" (Aphrodite), "Mars" (Ares), "Iuppiter" (Zeus) and "Saturnus" (Cronus). When subsequent planets were discovered in the 18th and 19th centuries, the naming practice was retained with "Neptūnus" (Poseidon). Uranus is unique in that it is named for a Greek deity rather than his Roman counterpart.
Some Romans, following a belief possibly originating in Mesopotamia but developed in Hellenistic Egypt, believed that the seven gods after whom the planets were named took hourly shifts in looking after affairs on Earth. The order of shifts went Saturn, Jupiter, Mars, Sun, Venus, Mercury, Moon (from the farthest to the closest planet). Therefore, the first day was started by Saturn (1st hour), the second day by the Sun (25th hour), followed by the Moon (49th hour), Mars, Mercury, Jupiter and Venus. Because each day was named for the god that started it, this is also the order of the days of the week in the Roman calendar after the Nundinal cycle was rejected – and it is still preserved in many modern languages. In English, "Saturday", "Sunday", and "Monday" are straightforward translations of these Roman names. The other days were renamed after "Tiw" (Tuesday), "Wóden" (Wednesday), "Thunor" (Thursday), and "Fríge" (Friday), the Anglo-Saxon gods considered similar or equivalent to Mars, Mercury, Jupiter, and Venus, respectively.
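The hour-counting scheme above reduces to simple modular arithmetic: day d begins at hour 24d, and 24 mod 7 = 3, so each day's starter advances three places through the distance-ordered list. A short illustrative sketch reproduces the weekday order:

```python
# Chaldean order, farthest to closest: each god rules one hour in turn.
chaldean = ["Saturn", "Jupiter", "Mars", "Sun", "Venus", "Mercury", "Moon"]

# The god ruling hour 24*d (the first hour of day d, counting from 0)
# gives that day its name.
week = [chaldean[(24 * d) % 7] for d in range(7)]
print(week)
# ['Saturn', 'Sun', 'Moon', 'Mars', 'Mercury', 'Jupiter', 'Venus']
```

The resulting sequence is exactly Saturday through Friday, with the last four days carrying the Anglo-Saxon equivalents of Mars, Mercury, Jupiter, and Venus.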
Earth is the only planet whose name in English is not derived from Greco-Roman mythology. Because it was only generally accepted as a planet in the 17th century, there is no tradition of naming it after a god. (The same is true, in English at least, of the Sun and the Moon, though they are no longer generally considered planets.) The name originates from the 8th century Anglo-Saxon word "erda", which means ground or soil and was first used in writing as the name of the sphere of Earth perhaps around 1300. As with its equivalents in the other Germanic languages, it derives ultimately from the Proto-Germanic word "ertho", "ground", as can be seen in the English "earth", the German "Erde", the Dutch "aarde", and the Scandinavian "jord". Many of the Romance languages retain the old Roman word "terra" (or some variation of it) that was used with the meaning of "dry land" as opposed to "sea". The non-Romance languages use their own native words. The Greeks retain their original name, "Γή" ("Ge").
Non-European cultures use other planetary-naming systems. India uses a system based on the Navagraha, which incorporates the seven traditional planets ("Surya" for the Sun, "Chandra" for the Moon, "Budha" for Mercury, "Shukra" for Venus, "Mangala" for Mars, "Bṛhaspati" for Jupiter, and "Shani" for Saturn) and the ascending and descending lunar nodes "Rahu" and "Ketu".
China and the countries of eastern Asia historically subject to Chinese cultural influence (such as Japan, Korea and Vietnam) use a naming system based on the five Chinese elements: water (Mercury), metal (Venus), fire (Mars), wood (Jupiter) and earth (Saturn).
In traditional Hebrew astronomy, the seven traditional planets have (for the most part) descriptive names – the Sun is חמה "Ḥammah" or "the hot one," the Moon is לבנה "Levanah" or "the white one," Venus is כוכב נוגה "Kokhav Nogah" or "the bright planet," Mercury is כוכב "Kokhav" or "the planet" (given its lack of distinguishing features), Mars is מאדים "Ma'adim" or "the red one," and Saturn is שבתאי "Shabbatai" or "the resting one" (in reference to its slow movement compared to the other visible planets). The odd one out is Jupiter, called צדק "Tzedeq" or "justice". Steiglitz suggests that this may be a euphemism for the original name of כוכב בעל "Kokhav Ba'al" or "Baal's planet", seen as idolatrous and euphemized in a similar manner to Ishbosheth from II Samuel.
In Arabic, Mercury is عُطَارِد ("ʿUṭārid", cognate with Ishtar / Astarte), Venus is الزهرة ("az-Zuhara", "the bright one", an epithet of the goddess Al-'Uzzá), Earth is الأرض ("al-ʾArḍ", from the same root as eretz), Mars is اَلْمِرِّيخ ("al-Mirrīkh", meaning "featherless arrow" due to its retrograde motion), Jupiter is المشتري ("al-Muštarī", "the reliable one", from Akkadian) and Saturn is زُحَل ("Zuḥal", "withdrawer").
It is not known with certainty how planets are formed. The prevailing theory is that they are formed during the collapse of a nebula into a thin disk of gas and dust. A protostar forms at the core, surrounded by a rotating protoplanetary disk. Through accretion (a process of sticky collision) dust particles in the disk steadily accumulate mass to form ever-larger bodies. Local concentrations of mass known as planetesimals form, and these accelerate the accretion process by drawing in additional material by their gravitational attraction. These concentrations become ever denser until they collapse inward under gravity to form protoplanets. After a planet reaches a mass somewhat larger than Mars' mass, it begins to accumulate an extended atmosphere, greatly increasing the capture rate of the planetesimals by means of atmospheric drag. Depending on the accretion history of solids and gas, a giant planet, an ice giant, or a terrestrial planet may result.
When the protostar has grown such that it ignites to form a star, the surviving disk is removed from the inside outward by photoevaporation, the solar wind, Poynting–Robertson drag and other effects. Thereafter there still may be many protoplanets orbiting the star or each other, but over time many will collide, either to form a single larger planet or release material for other larger protoplanets or planets to absorb. Those objects that have become massive enough will capture most matter in their orbital neighbourhoods to become planets. Protoplanets that have avoided collisions may become natural satellites of planets through a process of gravitational capture, or remain in belts of other objects to become either dwarf planets or small bodies.
The energetic impacts of the smaller planetesimals (as well as radioactive decay) will heat up the growing planet, causing it to at least partially melt. The interior of the planet begins to differentiate by mass, developing a denser core. Smaller terrestrial planets lose most of their atmospheres because of this accretion, but the lost gases can be replaced by outgassing from the mantle and from the subsequent impact of comets. (Smaller planets will lose any atmosphere they gain through various escape mechanisms.)
With the discovery and observation of planetary systems around stars other than the Sun, it is becoming possible to elaborate, revise or even replace this account. The level of metallicity—an astronomical term describing the abundance of chemical elements with an atomic number greater than 2 (i.e. heavier than helium)—is now thought to determine the likelihood that a star will have planets. Hence, it is thought that a metal-rich population I star will likely have a more substantial planetary system than a metal-poor, population II star.
There are eight planets in the Solar System, listed in order of increasing distance from the Sun: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune.
Jupiter is the largest, at 318 Earth masses, whereas Mercury is the smallest, at 0.055 Earth masses.
The planets of the Solar System can be divided into categories based on their composition: the terrestrial planets (Mercury, Venus, Earth, and Mars) and the giant planets, the latter comprising the gas giants (Jupiter and Saturn) and the ice giants (Uranus and Neptune).
An exoplanet (extrasolar planet) is a planet outside the Solar System.
In early 1992, radio astronomers Aleksander Wolszczan and Dale Frail announced the discovery of two planets orbiting the pulsar PSR 1257+12. This discovery was confirmed, and is generally considered to be the first definitive detection of exoplanets. These pulsar planets are believed to have formed from the unusual remnants of the supernova that produced the pulsar, in a second round of planet formation, or else to be the remaining rocky cores of giant planets that survived the supernova and then decayed into their current orbits.
The first confirmed discovery of an extrasolar planet orbiting an ordinary main-sequence star occurred on 6 October 1995, when Michel Mayor and Didier Queloz of the University of Geneva announced the detection of an exoplanet around 51 Pegasi. From then until the Kepler mission most known extrasolar planets were gas giants comparable in mass to Jupiter or larger as they were more easily detected. The catalog of Kepler candidate planets consists mostly of planets the size of Neptune and smaller, down to smaller than Mercury.
There are types of planets that do not exist in the Solar System: super-Earths and mini-Neptunes, which could be rocky like Earth or a mixture of volatiles and gas like Neptune—a radius of 1.75 times that of Earth is a possible dividing line between the two types of planet. There are hot Jupiters that orbit very close to their star and may evaporate to become chthonian planets, which are the leftover cores. Another possible type of planet is carbon planets, which form in systems with a higher proportion of carbon than in the Solar System.
A 2012 study, analyzing gravitational microlensing data, estimates an average of at least 1.6 bound planets for every star in the Milky Way.
On December 20, 2011, the Kepler Space Telescope team reported the discovery of the first Earth-size exoplanets, Kepler-20e and Kepler-20f, orbiting a Sun-like star, Kepler-20.
Around 1 in 5 Sun-like stars have an "Earth-sized" planet in the habitable zone, so the nearest would be expected to be within 12 light-years of Earth.
The frequency of occurrence of such terrestrial planets is one of the variables in the Drake equation, which estimates the number of intelligent, communicating civilizations that exist in the Milky Way.
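The Drake equation itself is simply a product of factors, N = R* · fp · ne · fl · fi · fc · L. The following is a minimal sketch; every input value below is an illustrative assumption rather than a measured quantity, since most of these factors remain highly uncertain:

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
def drake(r_star: float, f_p: float, n_e: float, f_l: float,
          f_i: float, f_c: float, lifetime: float) -> float:
    """Expected number of communicating civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Purely illustrative inputs (star-formation rate per year, fractions,
# and a civilization lifetime in years):
n = drake(r_star=1.0, f_p=0.2, n_e=1.0, f_l=0.5, f_i=0.1, f_c=0.1,
          lifetime=1000)
print(f"{n:.2f}")  # about 1 civilization with these assumed values
```

The point of the sketch is structural: because the factors multiply, the estimate is dominated by whichever term is least constrained, which is why published values of N span many orders of magnitude.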
There are exoplanets that are much closer to their parent star than any planet in the Solar System is to the Sun, and there are also exoplanets that are much farther from their star. Mercury, the closest planet to the Sun at 0.4 AU, takes 88 days for an orbit, but the shortest known orbits for exoplanets take only a few hours (see ultra-short period planet). The Kepler-11 system has five of its planets in shorter orbits than Mercury's, all of them much more massive than Mercury. Neptune is 30 AU from the Sun and takes 165 years to orbit, but there are exoplanets that are hundreds of AU from their star and take more than a thousand years to orbit, e.g. 1RXS1609 b.
A planetary-mass object (PMO), planemo, or planetary body is a celestial object with a mass that falls within the range of the definition of a planet: massive enough to achieve hydrostatic equilibrium (to be rounded under its own gravity), but not enough to sustain core fusion like a star. By definition, all planets are "planetary-mass objects", but the purpose of this term is to refer to objects that do not conform to typical expectations for a planet. These include dwarf planets, which are rounded by their own gravity but not massive enough to clear their own orbit, the larger moons, and free-floating planemos, which may have been ejected from a system (rogue planets) or formed through cloud-collapse rather than accretion (sometimes called sub-brown dwarfs).
A dwarf planet is a planetary-mass object that is neither a true planet nor a natural satellite; it is in direct orbit of a star, and is massive enough for its gravity to compress it into a shape in hydrostatic equilibrium (usually a spheroid), but has not cleared the neighborhood of other material around its orbit. Alan Stern, who proposed the term 'dwarf planet', has argued that location should not matter and that only geophysical attributes should be taken into account (geophysical planet definition), and that dwarf planets are thus a subtype of planet. However, the IAU classifies dwarf planets as a separate category. The number of dwarf planets in the Solar System is unknown. The IAU has recognized three (Ceres, Pluto and Eris) and assigned the naming of two additional candidates, Haumea and Makemake, to the IAU dwarf-planet naming committee.
Several computer simulations of stellar and planetary system formation have suggested that some objects of planetary mass would be ejected into interstellar space. Some scientists have argued that such objects found roaming in deep space should be classed as "planets", although others have suggested that they should be called low-mass brown dwarfs.
Stars form via the gravitational collapse of gas clouds, but smaller objects can also form via cloud-collapse. Planetary-mass objects formed this way are sometimes called sub-brown dwarfs. Sub-brown dwarfs may be free-floating such as Cha 110913-773444 and OTS 44, or orbiting a larger object such as 2MASS J04414489+2301513.
Binary systems of sub-brown dwarfs are theoretically possible; Oph 162225-240515 was initially thought to be a binary system of a brown dwarf of 14 Jupiter masses and a sub-brown dwarf of 7 Jupiter masses, but further observations revised the estimated masses upwards to greater than 13 Jupiter masses, making them brown dwarfs according to the IAU working definitions.
In close binary star systems one of the stars can lose mass to a heavier companion. Accretion-powered pulsars may drive mass loss. The shrinking star can then become a planetary-mass object. An example is a Jupiter-mass object orbiting the pulsar PSR J1719-1438. Such shrunken white dwarfs may become helium planets or carbon planets.
Some large satellites (moons) are of similar size or larger than the planet Mercury, e.g. Jupiter's Galilean moons and Titan. Alan Stern has argued that location should not matter and that only geophysical attributes should be taken into account in the definition of a planet, and proposes the term "satellite planet" for a planet-sized satellite.
Rogue planets in stellar clusters have similar velocities to the stars and so can be recaptured. They are typically captured into wide orbits between 100 and 10^5 AU. The capture efficiency decreases with increasing cluster volume, and for a given cluster size it increases with the host/primary mass. It is almost independent of the planetary mass. Single and multiple planets could be captured into arbitrary unaligned orbits, non-coplanar with each other, with the stellar host spin, or with a pre-existing planetary system.
Although each planet has unique physical characteristics, a number of broad commonalities do exist among them. Some of these characteristics, such as rings or natural satellites, have as yet been observed only in planets in the Solar System, whereas others are also commonly observed in extrasolar planets.
According to current definitions, all planets must revolve around stars; thus, any potential "rogue planets" are excluded. In the Solar System, all the planets orbit the Sun in the same direction as the Sun rotates (counter-clockwise as seen from above the Sun's north pole). At least one extrasolar planet, WASP-17b, has been found to orbit in the opposite direction to its star's rotation. The period of one revolution of a planet's orbit is known as its sidereal period or "year". A planet's year depends on its distance from its star; the farther a planet is from its star, the longer the distance it must travel and the slower its speed, because it is less affected by its star's gravity. No planet's orbit is perfectly circular, and hence the distance of each varies over the course of its year. The closest approach to its star is called its periastron (perihelion in the Solar System), whereas its farthest separation from the star is called its apastron (aphelion). As a planet approaches periastron, its speed increases as it trades gravitational potential energy for kinetic energy, just as a falling object on Earth accelerates as it falls; as the planet reaches apastron, its speed decreases, just as an object thrown upwards on Earth slows down as it reaches the apex of its trajectory.
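The periastron/apastron speed trade-off is captured quantitatively by the vis-viva equation, v = sqrt(GM(2/r − 1/a)), where r is the current distance from the star and a the semi-major axis. A minimal sketch using Earth's orbit (constants and orbital parameters approximate):

```python
import math

GM_SUN = 1.32712440018e20  # gravitational parameter of the Sun, m^3/s^2
AU = 1.495978707e11        # astronomical unit, m

def orbital_speed(r_m: float, a_m: float, gm: float = GM_SUN) -> float:
    """Vis-viva equation: v = sqrt(GM * (2/r - 1/a))."""
    return math.sqrt(gm * (2.0 / r_m - 1.0 / a_m))

a, e = 1.0 * AU, 0.0167              # Earth's semi-major axis, eccentricity
v_peri = orbital_speed(a * (1 - e), a)   # speed at perihelion
v_apo = orbital_speed(a * (1 + e), a)    # speed at aphelion
print(f"{v_peri / 1000:.1f} km/s at perihelion, "
      f"{v_apo / 1000:.1f} km/s at aphelion")
```

With these inputs Earth moves roughly 1 km/s faster at perihelion than at aphelion; for more eccentric orbits the difference grows sharply, since v scales with 1/sqrt(r) at the extremes.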
Each planet's orbit is delineated by a set of orbital elements, chiefly its eccentricity, semi-major axis, inclination, and longitude of the ascending node.
Planets also have varying degrees of axial tilt; they lie at an angle to the plane of their stars' equators. This causes the amount of light received by each hemisphere to vary over the course of its year; when the northern hemisphere points away from its star, the southern hemisphere points towards it, and vice versa. Each planet therefore has seasons, changes to the climate over the course of its year. The time at which each hemisphere points farthest from or nearest to its star is known as its solstice. Each planet has two in the course of its orbit; when one hemisphere has its summer solstice, when its day is longest, the other has its winter solstice, when its day is shortest. The varying amount of light and heat received by each hemisphere creates annual changes in weather patterns for each half of the planet. Jupiter's axial tilt is very small, so its seasonal variation is minimal; Uranus, on the other hand, has an axial tilt so extreme it is virtually on its side, which means that its hemispheres are either perpetually in sunlight or perpetually in darkness around the time of its solstices. Among extrasolar planets, axial tilts are not known for certain, though most hot Jupiters are believed to have negligible to no axial tilt as a result of their proximity to their stars.
The planets rotate around invisible axes through their centres. A planet's rotation period is known as a stellar day. Most of the planets in the Solar System rotate in the same direction as they orbit the Sun, which is counter-clockwise as seen from above the Sun's north pole, the exceptions being Venus and Uranus, which rotate clockwise, though Uranus's extreme axial tilt means there are differing conventions on which of its poles is "north", and therefore whether it is rotating clockwise or anti-clockwise. Regardless of which convention is used, Uranus has a retrograde rotation relative to its orbit.
The rotation of a planet can be induced by several factors during formation. A net angular momentum can be induced by the individual angular momentum contributions of accreted objects. The accretion of gas by the giant planets can also contribute to the angular momentum. Finally, during the last stages of planet building, a stochastic process of protoplanetary accretion can randomly alter the spin axis of the planet. There is great variation in the length of day between the planets, with Venus taking 243 days to rotate, and the giant planets only a few hours. The rotational periods of extrasolar planets are not known. However, for "hot" Jupiters, their proximity to their stars means that they are tidally locked (i.e., their orbits are in sync with their rotations). This means they always show one face to their stars, with one side in perpetual day, the other in perpetual night.
The defining dynamic characteristic of a planet is that it has "cleared its neighborhood". A planet that has cleared its neighborhood has accumulated enough mass to gather up or sweep away all the planetesimals in its orbit. In effect, it orbits its star in isolation, as opposed to sharing its orbit with a multitude of similar-sized objects. This characteristic was mandated as part of the IAU's official definition of a planet in August 2006. This criterion excludes such planetary bodies as Pluto, Eris and Ceres from full-fledged planethood, making them instead dwarf planets. Although to date this criterion only applies to the Solar System, a number of young extrasolar systems have been found in which evidence suggests orbital clearing is taking place within their circumstellar discs.
A planet's defining physical characteristic is that it is massive enough for the force of its own gravity to dominate over the electromagnetic forces binding its physical structure, leading to a state of hydrostatic equilibrium. This effectively means that all planets are spherical or spheroidal. Up to a certain mass, an object can be irregular in shape, but beyond that point, which varies depending on the chemical makeup of the object, gravity begins to pull an object towards its own centre of mass until the object collapses into a sphere.
Mass is also the prime attribute by which planets are distinguished from stars. The upper mass limit for planethood is roughly 13 times Jupiter's mass for objects with solar-type isotopic abundance, beyond which the object achieves conditions suitable for nuclear fusion. Other than the Sun, no objects of such mass exist in the Solar System; but there are exoplanets of this size. The 13-Jupiter-mass limit is not universally agreed upon and the Extrasolar Planets Encyclopaedia includes objects up to 60 Jupiter masses, and the Exoplanet Data Explorer up to 24 Jupiter masses.
The smallest known planet is PSR B1257+12A, one of the first extrasolar planets discovered, which was found in 1992 in orbit around a pulsar. Its mass is roughly half that of the planet Mercury. The smallest known planet orbiting a main-sequence star other than the Sun is Kepler-37b, with a mass (and radius) slightly higher than that of the Moon.
Every planet began its existence in an entirely fluid state; in early formation, the denser, heavier materials sank to the centre, leaving the lighter materials near the surface. Each therefore has a differentiated interior consisting of a dense planetary core surrounded by a mantle that either is or was a fluid. The terrestrial planets are sealed within hard crusts, but in the giant planets the mantle simply blends into the upper cloud layers. The terrestrial planets have cores of elements such as iron and nickel, and mantles of silicates. Jupiter and Saturn are believed to have cores of rock and metal surrounded by mantles of metallic hydrogen. Uranus and Neptune, which are smaller, have rocky cores surrounded by mantles of water, ammonia, methane and other ices. The fluid action within these planets' cores creates a geodynamo that generates a magnetic field.
All of the Solar System planets except Mercury have substantial atmospheres because their gravity is strong enough to keep gases close to the surface. The larger giant planets are massive enough to keep large amounts of the light gases hydrogen and helium, whereas the smaller planets lose these gases into space. The composition of Earth's atmosphere is different from the other planets because the various life processes that have transpired on the planet have introduced free molecular oxygen.
Planetary atmospheres are affected by the varying insolation or internal energy, leading to the formation of dynamic weather systems such as hurricanes (on Earth), planet-wide dust storms (on Mars), a greater-than-Earth-sized anticyclone on Jupiter (called the Great Red Spot), and holes in the atmosphere (on Neptune). At least one extrasolar planet, HD 189733 b, has been claimed to have such a weather system, similar to the Great Red Spot but twice as large.
Hot Jupiters, due to their extreme proximities to their host stars, have been shown to be losing their atmospheres into space due to stellar radiation, much like the tails of comets. These planets may have vast differences in temperature between their day and night sides that produce supersonic winds, although the day and night sides of HD 189733 b appear to have very similar temperatures, indicating that that planet's atmosphere effectively redistributes the star's energy around the planet.
One important characteristic of the planets is their intrinsic magnetic moments, which in turn give rise to magnetospheres. The presence of a magnetic field indicates that the planet is still geologically alive. In other words, magnetized planets have flows of electrically conducting material in their interiors, which generate their magnetic fields. These fields significantly change the interaction of the planet and solar wind. A magnetized planet creates a cavity in the solar wind around itself called the magnetosphere, which the wind cannot penetrate. The magnetosphere can be much larger than the planet itself. In contrast, non-magnetized planets have only small magnetospheres induced by interaction of the ionosphere with the solar wind, which cannot effectively protect the planet.
Of the eight planets in the Solar System, only Venus and Mars lack an intrinsic magnetic field. In addition, Jupiter's moon Ganymede also has one. Of the magnetized planets, the magnetic field of Mercury is the weakest, and is barely able to deflect the solar wind. Ganymede's magnetic field is several times larger, and Jupiter's is the strongest in the Solar System (so strong in fact that it poses a serious health risk to future manned missions to its moons). The magnetic fields of the other giant planets are roughly similar in strength to that of Earth, but their magnetic moments are significantly larger. The magnetic fields of Uranus and Neptune are strongly tilted relative to the rotational axis and displaced from the centre of the planet.
In 2004, a team of astronomers in Hawaii observed an extrasolar planet around the star HD 179949, which appeared to be creating a sunspot on the surface of its parent star. The team hypothesized that the planet's magnetosphere was transferring energy onto the star's surface, increasing its already high 7,760 °C temperature by an additional 400 °C.
Several planets or dwarf planets in the Solar System (such as Neptune and Pluto) have orbital periods that are in resonance with each other or with smaller bodies (this is also common in satellite systems). All except Mercury and Venus have natural satellites, often called "moons". Earth has one, Mars has two, and the giant planets have numerous moons in complex planetary-type systems. Many moons of the giant planets have features similar to those on the terrestrial planets and dwarf planets, and some have been studied as possible abodes of life (especially Europa).
The four giant planets are also orbited by planetary rings of varying size and complexity. The rings are composed primarily of dust or particulate matter, but can host tiny 'moonlets' whose gravity shapes and maintains their structure. Although the origins of planetary rings are not precisely known, they are believed to be the result of natural satellites that fell below their parent planet's Roche limit and were torn apart by tidal forces.
No secondary characteristics have been observed around extrasolar planets. The sub-brown dwarf Cha 110913-773444, which has been described as a rogue planet, is believed to be orbited by a tiny protoplanetary disc and the sub-brown dwarf OTS 44 was shown to be surrounded by a substantial protoplanetary disk of at least 10 Earth masses.
Paramount Pictures
Paramount Pictures Corporation (also known simply as Paramount) is an American film studio and subsidiary of ViacomCBS. It is the fifth oldest film studio in the world, the second oldest film studio in the United States, and the sole member of the "Big Five" film studios still located in the city limits of Los Angeles.
In 1916, film producer Adolph Zukor put 22 actors and actresses under contract and honored each with a star on the logo. In 2014, Paramount Pictures became the first major Hollywood studio to distribute all of its films in digital form only. The company's headquarters and studios are located at 5555 Melrose Avenue, Hollywood, California, United States.
Paramount Pictures is a member of the Motion Picture Association (MPA).
Paramount is the fifth oldest surviving film studio in the world after the French studios Gaumont Film Company (1895) and Pathé (1896), followed by the Nordisk Film company (1906), and Universal Studios (1912). It is the last major film studio still headquartered in the Hollywood district of Los Angeles.
Paramount Pictures dates its existence from the 1912 founding date of the Famous Players Film Company. Hungarian-born founder Adolph Zukor, who had been an early investor in nickelodeons, saw that movies appealed mainly to working-class immigrants. With partners Daniel Frohman and Charles Frohman he planned to offer feature-length films that would appeal to the middle class by featuring the leading theatrical players of the time (leading to the slogan "Famous Players in Famous Plays"). By mid-1913, Famous Players had completed five films, and Zukor was on his way to success. Its first film was "Les Amours de la reine Élisabeth", which starred Sarah Bernhardt.
That same year, another aspiring producer, Jesse L. Lasky, opened his Lasky Feature Play Company with money borrowed from his brother-in-law, Samuel Goldfish, later known as Samuel Goldwyn. The Lasky company hired as their first employee a stage director with virtually no film experience, Cecil B. DeMille, who would find a suitable site in Hollywood, near Los Angeles, for his first feature film, "The Squaw Man".
Starting in 1914, both Lasky and Famous Players released their films through a start-up company, Paramount Pictures Corporation, organized early that year by a Utah theatre owner, W. W. Hodkinson, who had bought and merged several smaller firms. Hodkinson and actor, director, producer Hobart Bosworth had started production of a series of Jack London movies. Paramount was the first successful nationwide distributor; until this time, films were sold on a statewide or regional basis which had proved costly to film producers. Also, Famous Players and Lasky were privately owned while Paramount was a corporation.
In 1916, Zukor maneuvered a three-way merger of his Famous Players, the Lasky Company, and Paramount. Zukor and Lasky bought Hodkinson out of Paramount, and merged the three companies into one. The new company Lasky and Zukor founded, Famous Players-Lasky Corporation, grew quickly, with Lasky and his partners Goldwyn and DeMille running the production side, Hiram Abrams in charge of distribution, and Zukor making great plans. With only the exhibitor-owned First National as a rival, Famous Players-Lasky and its "Paramount Pictures" soon dominated the business.
Because Zukor believed in stars, he signed and developed many of the leading early stars, including Mary Pickford, Marguerite Clark, Pauline Frederick, Douglas Fairbanks, Gloria Swanson, Rudolph Valentino, and Wallace Reid. With so many important players, Paramount was able to introduce "block booking", which meant that an exhibitor who wanted a particular star's films had to buy a year's worth of other Paramount productions. It was this system that gave Paramount a leading position in the 1920s and 1930s, but which led the government to pursue it on antitrust grounds for more than twenty years.
The driving force behind Paramount's rise was Zukor. Through the teens and twenties, he built the Publix Theatres Corporation, a chain of nearly 2,000 screens, ran two production studios (in Astoria, New York, now the Kaufman Astoria Studios, and Hollywood, California), and became an early investor in radio, taking a 50% interest in the new Columbia Broadcasting System in 1928 (selling it within a few years; this would not be the last time Paramount and CBS crossed paths).
In 1926, Zukor hired independent producer B. P. Schulberg, who had an unerring eye for new talent, to run the new West Coast operations. They purchased the Robert Brunton Studios, a 26-acre facility at 5451 Marathon Street, for US$1 million. In 1927, Famous Players-Lasky took the name Paramount Famous Lasky Corporation. Three years later, because of the importance of the Publix Theatres, it became Paramount Publix Corporation.
In 1928, Paramount began releasing "Inkwell Imps," animated cartoons produced by Max and Dave Fleischer's Fleischer Studios in New York City. The Fleischers, veterans in the animation industry, were among the few animation producers capable of challenging the prominence of Walt Disney. The Paramount newsreel series Paramount News ran from 1927 to 1957. Paramount was also one of the first Hollywood studios to release what were known at that time as "talkies", and in 1929 released its first musical, "Innocents of Paris". Richard A. Whiting and Leo Robin composed the score for the film; Maurice Chevalier starred and sang the most famous song from the film, "Louise".
By acquiring the successful Balaban & Katz chain in 1926, Zukor gained the services of Barney Balaban (who would eventually become Paramount's president in 1936), his brother A. J. Balaban (who would eventually supervise all stage production nationwide and produce talkie shorts), and their partner Sam Katz (who would run the Paramount-Publix theatre chain in New York City from the thirty-five-story Paramount Theatre Building on Times Square).
Balaban and Katz had developed the Wonder Theater concept, first publicized around 1918 in Chicago. The Chicago Theater was created as a very ornate theater and advertised as a "wonder theater." When Publix acquired Balaban & Katz, they embarked on a project to expand the wonder theaters, and started building in New York in 1927. While Balaban and Katz were dominant in Chicago, Loew's was the big player in New York, and did not want the Publix theaters to overshadow theirs. The two companies brokered a non-competition deal for New York and Chicago, and Loew's took over the New York area projects, developing five wonder theaters. Publix continued Balaban & Katz's wonder theater development in its home area.
Eventually, Zukor shed most of his early partners; the Frohman brothers, Hodkinson and Goldwyn were out by 1917, while Lasky hung on until 1932, when, blamed for the near-collapse of Paramount in the Depression years, he too was tossed out. Zukor's over-expansion and use of overvalued Paramount stock for purchases led the company into receivership in 1933. A bank-mandated reorganization team, led by John Hertz and Otto Kahn, kept the company intact, and, miraculously, Zukor was kept on. In 1935, Paramount-Publix went bankrupt. John E. Otterson became president in June 1935, followed by Barney Balaban in 1936, and Zukor was bumped up to chairman of the board. In this role, Zukor reorganized the company as Paramount Pictures, Inc. and was able to successfully bring the studio out of bankruptcy.
As always, Paramount films continued to emphasize stars; in the 1920s there were Gloria Swanson, Wallace Reid, Rudolph Valentino, Florence Vidor, Thomas Meighan, Pola Negri, Bebe Daniels, Antonio Moreno, Richard Dix, Esther Ralston, Emil Jannings, George Bancroft, Betty Compson, Clara Bow, Adolphe Menjou, and Charles Buddy Rogers. By the late 1920s and the early 1930s, talkies brought in a range of powerful draws: Richard Arlen, Nancy Carroll, Maurice Chevalier, Gary Cooper, Marlene Dietrich, Charles Ruggles, Ruth Chatterton, William Powell, Mae West, Sylvia Sidney, Bing Crosby, Claudette Colbert, the Marx Brothers, W.C. Fields, Fredric March, Jack Oakie, Jeanette MacDonald (whose first two films were shot at Paramount's Astoria, New York, studio), Carole Lombard, George Raft, Miriam Hopkins, Cary Grant and Stuart Erwin, among them. In this period Paramount can truly be described as a movie factory, turning out sixty to seventy pictures a year. Such were the benefits of having a huge theater chain to fill, and of block booking to persuade other chains to go along. In 1933, Mae West would also add greatly to Paramount's success with her suggestive movies "She Done Him Wrong" and "I'm No Angel". However, the sex appeal West gave in these movies would also lead to the enforcement of the Production Code, as the newly formed organization the Catholic Legion of Decency threatened a boycott if it was not enforced.
Paramount cartoons produced by Fleischer Studios continued to be successful, with characters such as Betty Boop and Popeye the Sailor becoming widely popular. One Fleischer series, "Screen Songs", featured live-action music stars under contract to Paramount hosting sing-alongs of popular songs. The animation studio would rebound with Popeye, and in 1935, polls showed that Popeye was even more popular than Mickey Mouse. After an unsuccessful expansion into feature films, as well as the fact that Max and Dave Fleischer were no longer speaking to one another, Fleischer Studios was acquired by Paramount, which renamed the operation Famous Studios. That incarnation of the animation studio continued cartoon production until 1967, but has been historically dismissed as having largely failed to maintain the artistic acclaim the Fleischer brothers achieved under their management.
In 1940, Paramount agreed to a government-instituted consent decree: block booking and "pre-selling" (the practice of collecting up-front money for films not yet in production) would end. Immediately, Paramount cut back on production, from 71 films to a more modest 19 annually in the war years. Still, with more new stars like Bob Hope, Alan Ladd, Veronica Lake, Paulette Goddard, and Betty Hutton, and with war-time attendance at astronomical numbers, Paramount and the other integrated studio-theatre combines made more money than ever. At this, the Federal Trade Commission and the Justice Department decided to reopen their case against the five integrated studios. Paramount also had a monopoly over Detroit movie theaters through subsidiary company United Detroit Theaters. This led to the Supreme Court decision "United States v. Paramount Pictures, Inc." (1948) holding that movie studios could not also own movie theater chains. This decision broke up Adolph Zukor's creation, with the theater chain being split into a new company, United Paramount Theaters, and effectively brought an end to the classic Hollywood studio system.
With the separation of production and exhibition forced by the U.S. Supreme Court, Paramount Pictures Inc. was split in two. Paramount Pictures Corporation was formed to be the production-distribution company, with the 1,500-screen theater chain handed to the new United Paramount Theaters on December 31, 1949. Leonard Goldenson, who had headed the chain since 1938, remained as the new company's president. The Balaban and Katz theatre division was spun off with UPT; its trademark eventually became the property of the Balaban and Katz Historical Foundation, which later also acquired ownership of the Famous Players trademark. Cash-rich and controlling prime downtown real estate, Goldenson began looking for investments. Barred from film-making by prior antitrust rulings, he acquired the struggling ABC television network in February 1953, leading it first to financial health, and eventually, in the mid-1970s, to first place in the national Nielsen ratings, before selling out to Capital Cities in 1985 (Capital Cities would eventually sell out, in turn, to The Walt Disney Company in 1996). United Paramount Theaters was renamed ABC Theaters in 1965 and was sold to businessman Henry Plitt in 1977. The movie theater chain was renamed Plitt Theaters. In 1985, Cineplex Odeon Corporation merged with Plitt. In later years, Paramount's TV division would develop a strong relationship with ABC, providing many hit series to the network.
Paramount Pictures had been an early backer of television, launching experimental stations in 1939 in Los Angeles and Chicago. The Los Angeles station eventually became KTLA, the first commercial station on the West Coast. The Chicago station got a commercial license as WBKB in 1943, but was sold to UPT along with Balaban & Katz in 1948 and was eventually resold to CBS as WBBM-TV.
In 1938, Paramount bought a stake in television manufacturer DuMont Laboratories. Through this stake, it became a minority owner of the DuMont Television Network. Paramount also launched its own network, the Paramount Television Network, in 1948 through its television unit, Television Productions, Inc.
Paramount management planned to acquire additional owned-and-operated stations ("O&Os"); the company applied to the FCC for additional stations in San Francisco, Detroit, and Boston. The FCC, however, denied Paramount's applications. A few years earlier, the federal regulator had placed a five-station cap on all television networks: no network was allowed to own more than five VHF television stations. Paramount was hampered by its minority stake in the DuMont Television Network. Although both DuMont and Paramount executives stated that the companies were separate, the FCC ruled that Paramount's partial ownership of DuMont meant that DuMont and Paramount were in theory branches of the same company. Since DuMont owned three television stations and Paramount owned two, the federal agency ruled neither network could acquire additional television stations. The FCC requested that Paramount relinquish its stake in DuMont, but Paramount refused. According to television historian William Boddy, "Paramount's checkered antitrust history" helped convince the FCC that Paramount controlled DuMont. Both DuMont and Paramount Television Network suffered as a result, with neither company able to acquire five O&Os. Meanwhile, CBS, ABC, and NBC had each acquired the maximum of five stations by the mid-1950s.
When ABC accepted a merger offer from UPT in 1953, DuMont quickly realized that ABC now had more resources than it could possibly hope to match. It quickly reached an agreement in principle to merge with ABC. However, Paramount vetoed the offer due to antitrust concerns. For all intents and purposes, this was the end of DuMont, though it lingered on until 1956.
In 1951, Paramount bought a stake in International Telemeter, an experimental pay TV service which operated with a coin inserted into a box. The service began operating in Palm Springs, California on November 27, 1953, but due to pressure from the FCC, the service ended on May 15, 1954.
With the loss of the theater chain, Paramount Pictures went into a decline, cutting studio-backed production, releasing its contract players, and making production deals with independents. By the mid-1950s, all the great names were gone; only Cecil B. DeMille, associated with Paramount since 1913, kept making pictures in the grand old style. Despite Paramount's losses, DeMille would, however, give the studio some relief and create his most successful film at Paramount, a 1956 remake of his 1923 film "The Ten Commandments". DeMille died in 1959. Like some other studios, Paramount saw little value in its film library, and sold 764 of its pre-1948 films to MCA Inc./EMKA, Ltd. (known today as Universal Television) in February 1958.
By the early 1960s, Paramount's future was doubtful. The high-risk movie business was wobbly; the theater chain was long gone; investments in DuMont and in early pay-television came to nothing; and the Golden Age of Hollywood had just ended. Even the flagship Paramount building in Times Square was sold to raise cash, as was KTLA (sold to Gene Autry in 1964 for a then-phenomenal $12.5 million). Its only remaining successful property at that point was Dot Records, which Paramount had acquired in 1957, and even its profits started declining by the middle of the 1960s. Founding father Adolph Zukor (born in 1873) was still chairman emeritus; he referred to chairman Barney Balaban (born 1888) as "the boy." Such aged leadership was incapable of keeping up with the changing times, and in 1966, a sinking Paramount was sold to Charles Bluhdorn's industrial conglomerate, Gulf + Western Industries Corporation. Bluhdorn immediately put his stamp on the studio, installing a virtually unknown producer named Robert Evans as head of production. Despite some rough times, Evans held the job for eight years, restoring Paramount's reputation for commercial success with "The Odd Couple", "Rosemary's Baby", "Love Story", "The Godfather", "Chinatown", and "Three Days of the Condor".
Gulf + Western Industries also bought the neighboring Desilu television studio (once the lot of RKO Pictures) from Lucille Ball in 1967. Using some of Desilu's established shows such as "Star Trek", "Mission: Impossible", and "Mannix" as a foot in the door at the networks, the newly reincorporated Paramount Television eventually became known as a specialist in half-hour situation comedies.
In 1968, Paramount formed Films Distributing Corp to distribute sensitive film product, including "Sin With a Stranger", which was one of the first films to receive an X rating in the United States when the MPAA introduced their new rating system.
In 1970, Paramount teamed with Universal Studios to form Cinema International Corporation, a new company that would distribute films by the two studios outside the United States. Metro-Goldwyn-Mayer would become a partner in the mid-1970s. Both Paramount and CIC entered the video market with Paramount Home Video (now Paramount Home Entertainment) and CIC Video, respectively.
Robert Evans abandoned his position as head of production in 1974; his successor, Richard Sylbert, proved to be too literary and too tasteful for Gulf + Western's Bluhdorn. By 1976, a new, television-trained team was in place headed by Barry Diller and his "Killer-Dillers", as they were called by admirers or "Dillettes" as they were called by detractors. These associates, made up of Michael Eisner, Jeffrey Katzenberg, Dawn Steel and Don Simpson would each go on and head up major movie studios of their own later in their careers.
The Paramount specialty was now simpler. "High concept" pictures such as "Saturday Night Fever" and "Grease" hit big, hit hard and hit fast all over the world, and Diller's television background led him to propose one of his longest-standing ideas to the board: Paramount Television Service, a fourth commercial network. Paramount Pictures purchased the Hughes Television Network (HTN) including its satellite time in planning for PTVS in 1976. Paramount sold HTN to Madison Square Garden in 1979. But Diller believed strongly in the concept, and so took his fourth-network idea with him when he moved to 20th Century Fox in 1984, where Fox's then freshly installed proprietor, Rupert Murdoch was a more interested listener.
However, the television division would be playing catch-up for over a decade after Diller's departure in 1984 before launching its own television network – UPN – in 1995. Lasting eleven years before being merged with The WB network to become The CW in 2006, UPN would feature many of the shows it originally produced for other networks, and would take numerous gambles on series such as "" and "" that would have otherwise either gone direct-to-cable or become first-run syndication to independent stations across the country (as "" and "" were).
Paramount Pictures was not connected to either Paramount Records (1910s-1935) or ABC-Paramount Records (1955–66) until it purchased the rights to use the name (but not the latter's catalog) in the late 1960s. The Paramount name was used for soundtrack albums and some pop re-issues from the Dot Records catalog which Paramount had acquired in 1957. By 1970, Dot had become an all-country label and in 1974, Paramount sold all of its record holdings to ABC Records, which in turn was sold to MCA (now Universal Music Group) in 1979.
Paramount's successful run of pictures extended into the 1980s and 1990s, generating hits like "Airplane!", "American Gigolo", "Ordinary People", "An Officer and a Gentleman", "Flashdance", "Terms of Endearment", "Footloose", "Pretty in Pink", "Top Gun", "Crocodile Dundee", "Fatal Attraction", "Ghost", the "Friday the 13th" slasher series, as well as teaming up with Lucasfilm to create the "Indiana Jones" franchise. Other examples are the "Star Trek" film series and a string of films starring comedian Eddie Murphy like "Trading Places", "Coming to America" and "Beverly Hills Cop" and its sequels. While the emphasis was decidedly on the commercial, there were occasional less commercial but more artistic and intellectual efforts like "I'm Dancing as Fast as I Can", "Atlantic City", "Reds", "Witness", "Children of a Lesser God" and "The Accused". During this period, responsibility for running the studio passed from Eisner and Katzenberg to Frank Mancuso, Sr. (1984) and Ned Tanen (1984) to Stanley R. Jaffe (1991) and Sherry Lansing (1992). More so than most, Paramount's slate of films included many remakes and television spin-offs; while sometimes commercially successful, there have been few compelling films of the kind that once made Paramount the industry leader.
On August 25, 1983, Paramount Studios caught fire. Two or three sound stages and four outdoor sets were destroyed.
When Charles Bluhdorn died unexpectedly, his successor Martin Davis dumped all of G+W's industrial, mining, and sugar-growing subsidiaries and refocused the company, renaming it Paramount Communications in 1989. With the influx of cash from the sale of G+W's industrial properties in the mid-1980s, Paramount bought a string of television stations and KECO Entertainment's theme park operations, renaming them Paramount Parks. These parks included Paramount's Great America, Paramount Canada's Wonderland, Paramount's Carowinds, Paramount's Kings Dominion, and Paramount's Kings Island.
In 1993, Sumner Redstone's entertainment conglomerate Viacom made a bid for a merger with Paramount Communications; this quickly escalated into a bidding war with Barry Diller's QVC. But Viacom prevailed, ultimately paying $10 billion for the Paramount holdings. Viacom and Paramount had planned to merge as early as 1989.
Paramount is the last major film studio located in Hollywood proper. When Paramount moved to its present home in 1927, it was in the heart of the film community. Since then, former next-door neighbor RKO closed up shop in 1957 (Paramount ultimately absorbed their former lot); Warner Bros. (whose old Sunset Boulevard studio was sold to Paramount in 1949 as a home for KTLA) moved to Burbank in 1930; Columbia joined Warners in Burbank in 1973 then moved again to Culver City in 1989; and the Pickford-Fairbanks-Goldwyn-United Artists lot, after a lively history, has been turned into a post-production and music-scoring facility for Warners, known simply as "The Lot". For a time the semi-industrial neighborhood around Paramount was in decline, but has now come back. The recently refurbished studio has come to symbolize Hollywood for many visitors, and its studio tour is a popular attraction.
In 1983, Gulf and Western began a restructuring process that would transform the corporation from a bloated conglomerate consisting of subsidiaries from unrelated industries to a more focused entertainment and publishing company. The idea was to aid financial markets in measuring the company's success, which, in turn, would help place better value on its shares. Though its Paramount division had done very well in those years, Gulf and Western's success as a whole was translating poorly with investors. This process eventually led Davis to divest many of the company's subsidiaries. Its sugar plantations in Florida and the Dominican Republic were sold in 1985; the consumer and industrial products branch was sold off that same year. In 1989, Davis renamed the company Paramount Communications Incorporated after its primary asset, Paramount Pictures. In addition to the Paramount film, television, home video, and music publishing divisions, the company continued to own the Madison Square Garden properties (which also included MSG Network), a 50% stake in USA Networks (the other 50% was owned by MCA/Universal Studios), Simon & Schuster, Prentice Hall, Pocket Books, Allyn & Bacon, Cineamerica (a joint venture with Warner Communications), and Canadian cinema chain Famous Players Theatres.
That same year, the company launched a $12.2 billion hostile bid to acquire Time Inc. in an attempt to end a stock-swap merger deal between Time and Warner Communications, which also renamed itself after a film studio it owned upon selling off its non-entertainment assets. (The original name of Warner Communications was Kinney National Company.) This caused Time to raise its bid for Warner to $14.9 billion in cash and stock. Gulf and Western responded by filing a lawsuit in a Delaware court to block the Time-Warner merger. The court ruled twice in favor of Time, forcing Gulf and Western to drop both the Time acquisition and the lawsuit, and allowing the formation of Time Warner.
Paramount used cash acquired from the sale of Gulf and Western's non-entertainment properties to take over the TVX Broadcast Group chain of television stations (which at that point consisted mainly of large-market stations which TVX had bought from Taft Broadcasting, plus two mid-market stations which TVX owned prior to the Taft purchase), and the KECO Entertainment chain of theme parks from Taft successor Great American Broadcasting. Both of these companies had their names changed to reflect new ownership: TVX became known as the Paramount Stations Group, while KECO was renamed to Paramount Parks.
Paramount Television launched Wilshire Court Productions in conjunction with USA Networks, before the latter was renamed NBCUniversal Cable, in 1989. Wilshire Court Productions (named for a side street in Los Angeles) produced television films that aired on the USA Networks, and later for other networks. USA Networks launched a second channel, the Sci-Fi Channel (now known as Syfy), in 1992. As its name implied, it focused on films and television series within the science fiction genre. Much of the initial programming was owned either by Paramount or Universal. Paramount bought one more television station in 1993: Cox Enterprises' WKBD-TV in Detroit, Michigan, at the time an affiliate of the Fox Broadcasting Company.
On July 7, 1994, Paramount Communications Inc. was sold to Viacom following the purchase of 50.1% of Paramount's shares for $9.75 billion. At the time, Paramount's holdings included Paramount Pictures, Madison Square Garden, the New York Rangers, the New York Knicks, and the Simon & Schuster publishing house. The deal had been planned as early as 1989, when the company was still known as Gulf and Western. Though Davis was named a member of the board of National Amusements, which controlled Viacom, he ceased to manage the company.
Under Viacom, the Paramount Stations Group continued to build with more station acquisitions, eventually leading to Viacom's acquisition of its former parent, the CBS network, in 1999. Around the same time, Viacom bought out Spelling Entertainment, incorporating its library into that of Paramount itself.
Viacom split into two companies in 2006: one retained the Viacom name and continues to own Paramount Pictures, while the other was named CBS Corporation. CBS Corporation took control of the Paramount Television Group, which was renamed CBS Paramount Television in 2006 (now known as CBS Television Studios, with its worldwide distribution units now CBS Television Distribution and CBS Studios International); Simon & Schuster (except for Prentice Hall and other educational units, which Viacom had sold to Pearson PLC in 1998); and what was left of the original Paramount Stations Group, now known as CBS Television Stations. National Amusements retains majority control of the two.
Together, these two companies own many of the former media assets of Gulf and Western and its Paramount successor today. Meanwhile, the Madison Square Garden properties (including the Knicks and Rangers) were sold to Cablevision not long after the Viacom takeover. Cablevision owned the MSG properties until 2010, when they were spun off as their own company. CBS retained ownership of the Paramount Parks chain for a few months after becoming part of the new CBS Corporation, but sold the parks to Cedar Fair in the summer of 2006, and thus National Amusements got out of the theme park ownership business entirely. Over the next few years, Cedar Fair purged references to Viacom-owned properties from the former Paramount Parks, a task completed in 2010. Viacom also sold its stake in the USA Networks to Universal in 1997, and the channels came under the ownership of Universal's successor, NBCUniversal, which still retained those holdings as of late July 2013.
During this time period, Paramount Pictures came under the guidance of chairman Jonathan Dolgen and president Sherry Lansing. Under their administration, the studio had an extremely successful period, with two of Paramount's ten highest-grossing films being produced during this era. The most successful of these films, "Titanic", a joint production with 20th Century Fox and Lightstorm Entertainment, became the highest-grossing film up to that time, grossing over $1.8 billion worldwide. Also during this time, three Paramount Pictures films won the Academy Award for Best Picture: "Titanic", "Braveheart", and "Forrest Gump".
Paramount's most important property, however, was "Star Trek". Studio executives had begun to call it "the franchise" in the 1980s due to its reliable revenue, and other studios envied its "untouchable and unduplicatable" success. By 1998 "Star Trek" TV shows, movies, books, videotapes, and licensing provided so much of the studio's profit that "it is not possible to spend any reasonable amount of time at Paramount and not be aware of [its] presence"; filming for "Star Trek: Voyager" and "Star Trek: Deep Space Nine" required up to nine of the largest of the studio's 36 sound stages.
In 1995, Viacom and Chris-Craft Industries' United Television launched the United Paramount Network (UPN) with "Star Trek: Voyager" as its flagship series, fulfilling Barry Diller's plan for a Paramount network from nearly two decades earlier. In 1999, Viacom bought out United Television's interests and handed responsibility for the start-up network to the newly acquired CBS unit – an ironic confluence of events, as Paramount had once invested in CBS, and Viacom had once been the syndication arm of CBS. During this period the studio acquired some 30 TV stations to support the UPN network, as well as acquiring and merging in the assets of Republic Pictures, Spelling Television and Viacom Television, almost doubling the size of the studio's TV library. The TV division produced the dominant prime time show of the decade in "Frasier", as well as such long-running hits as "NCIS" and "Becker", and the dominant prime time magazine show "Entertainment Tonight". Paramount also gained the ownership rights to the Rysher library, after Viacom acquired the rights from Cox Enterprises.
During this period, Paramount and its related subsidiaries and affiliates, operating under the name "Viacom Entertainment Group", also included the fourth largest group of theme parks in the United States and Canada, which in addition to traditional rides and attractions launched numerous successful location-based entertainment units, including a long-running "Star Trek" attraction at the Las Vegas Hilton. Famous Music, the company's celebrated music publishing arm, almost doubled in size and developed artists including Pink, Bush, and Green Day, as well as catalog favorites including Duke Ellington and Henry Mancini. The Paramount/Viacom licensing group, under the leadership of Tom McGrath, created the "Cheers" franchise bars and restaurants and a chain of restaurants borrowing from the studio's Academy Award-winning film "Forrest Gump" – the Bubba Gump Shrimp Company. Through the combined efforts of Famous Music and the studio, over ten Broadway musicals were created, including Irving Berlin's "White Christmas", "Footloose", "Saturday Night Fever", and Andrew Lloyd Webber's "Sunset Boulevard", among others. The company's international arm, United International Pictures (UIP), was the dominant distributor internationally for ten straight years, representing Paramount, Universal and MGM. Simon & Schuster became part of the Viacom Entertainment Group, emerging as the dominant U.S. trade book publisher.
In 2002, Paramount, along with Buena Vista Distribution, 20th Century Fox, Columbia TriStar Pictures Entertainment, MGM/UA Entertainment, Universal Studios, DreamWorks Pictures, Artisan Entertainment, Lions Gate Entertainment, and Warner Bros., formed the Digital Cinema Initiatives. Operating under a waiver from the antitrust law, the studios combined under the leadership of Paramount Chief Operating Officer Tom McGrath to develop technical standards for the eventual introduction of digital film projection – replacing the century-old film technology. DCI was created "to establish and document voluntary specifications for an open architecture for digital cinema that ensures a uniform and high level of technical performance, reliability and quality control." McGrath also headed up Paramount's initiative for the creation and launch of the Blu-ray Disc.
Reflecting in part the troubles of the broadcasting business, Viacom wrote off over $18 billion from its radio acquisitions and, in early 2005, announced that it would split itself in two. The split was completed in January 2006.
With the announcement of the split of Viacom, Dolgen and Lansing were replaced by former television executives Brad Grey and Gail Berman. The Viacom Inc. board split the company into CBS Corporation and a separate company under the Viacom name. The board scheduled the division for the first quarter of 2006. Under the plan, CBS Corp. would comprise the CBS and UPN networks, Viacom Television Stations Group, Infinity Broadcasting, Viacom Outdoor, Paramount Television, King World, Showtime, Simon & Schuster, Paramount Parks, and CBS News. The revamped Viacom would include "MTV, VH1, Nickelodeon, BET and several other cable networks as well as the Paramount movie studio". Paramount's home entertainment unit continues to distribute the Paramount TV library through CBS DVD, as both Viacom and CBS Corporation are controlled by Sumner Redstone's National Amusements.
In 2009, CBS stopped using the Paramount name in its series and changed the name of its production arm to CBS Television Studios, eliminating the Paramount name from television to distance itself from Viacom.
On December 11, 2005, the Paramount Motion Pictures Group announced that it had purchased DreamWorks SKG (which was co-founded by former Paramount executive Jeffrey Katzenberg) in a deal worth $1.6 billion. The announcement was made by Brad Grey, chairman and CEO of Paramount Pictures who noted that enhancing Paramount's pipeline of pictures is a "key strategic objective in restoring Paramount's stature as a leader in filmed entertainment." The agreement does not include DreamWorks Animation SKG Inc., the most profitable part of the company that went public the previous year.
Grey also broke up the famous United International Pictures (UIP) international distribution company, with 15 countries being taken over by Paramount or Universal by December 31, 2006, and the joint venture continuing in 20 markets. In Australia, Brazil, France, Ireland, Mexico, New Zealand and the U.K., Paramount took over UIP, while in Austria, Belgium, Germany, Italy, the Netherlands, Russia, Spain and Switzerland, Universal took over and Paramount would build its own distribution operations there. In 2007 and 2008, Paramount could sub-distribute films through Universal's countries and vice versa. Paramount's international distribution unit would be headquartered in Los Angeles and have a European hub. In Italy, Paramount distributed through Universal. When Universal indicated that it was pulling out of UIP Korea to start its own operation there in November 2016, Paramount agreed to have CJ Entertainment distribute there. UIP president and chief operating officer Andrew Cripps was hired as head of Paramount Pictures International. Paramount Pictures International passed the $1 billion mark in July 2007, becoming the fifth studio to do so that year – and in its first year of operation.
On October 6, 2008, DreamWorks executives announced that they were leaving Paramount and relaunching an independent DreamWorks. The DreamWorks trademarks remained with DreamWorks Animation when that company was spun off before the Paramount purchase, and DreamWorks Animation transferred the license to the name to the new company.
DreamWorks films, acquired by Paramount but still distributed internationally by Universal, are included in Paramount's market share. Grey also launched a Digital Entertainment division to take advantage of emerging digital distribution technologies. This led to Paramount becoming the second movie studio to sign a deal with Apple Inc. to sell its films through the iTunes Store.
Also in 2007, Paramount sold another of its "heritage" units, Famous Music, to Sony/ATV Music Publishing (best known for publishing many songs by The Beatles, and for being co-owned by Michael Jackson). The sale ended a nearly eight-decade run in which Famous Music served as the studio's music publishing arm, dating to the period when the entire company went by the name "Famous Players."
In early 2008, Paramount partnered with Los Angeles-based developer FanRocket to make short scenes taken from its film library available to users on Facebook. The application, called VooZoo, allows users to send movie clips to other Facebook users and to post clips on their profile pages. Paramount engineered a similar deal with Makena Technologies to allow users of vMTV and There.com to view and send movie clips.
In March 2010, Paramount founded Insurge Pictures, an independent distributor of "micro budget" films. The distributor planned ten movies with budgets of $100,000 each. The first release was "The Devil Inside", a movie with a budget of about US$1 million. In March 2015, following waning box office returns, Paramount shuttered Insurge Pictures and moved its operations to the main studio.
In July 2011, in the wake of the critical and box office success of the animated feature "Rango", and the departure of DreamWorks Animation upon completion of its distribution contract in 2012, Paramount announced the formation of a new division devoted to the creation of animated productions. It marked Paramount's return to having its own animation division for the first time since 1967, when Paramount Cartoon Studios (formerly Famous Studios until 1956) shut down.
In December 2013, Walt Disney Studios (via its parent company's purchase of Lucasfilm a year earlier) gained Paramount's remaining distribution and marketing rights to future "Indiana Jones" films. Paramount will permanently retain the distribution rights to the first four films, and will receive "financial participation" from any additional films.
In February 2016, Viacom CEO and newly appointed chairman Philippe Dauman announced that the conglomerate was in talks to find an investor to purchase a minority stake in Paramount. Sumner Redstone and his daughter Shari were reportedly opposed to the deal. On July 13, 2016, Wanda Group was in talks to acquire a 49% stake in Paramount; the talks were later dropped. On January 19, 2017, Shanghai Film Group Corp. and Huahua Media said they would finance at least 25% of all Paramount Pictures movies over a three-year period. Under the deal, Shanghai Film Group and Huahua Media would help distribute and market Paramount's features in China. At the time, the "Wall Street Journal" wrote that "nearly every major Hollywood studio has a co-financing deal with a Chinese company."
On March 27, 2017, Jim Gianopulos was named chairman and CEO of Paramount Pictures, replacing Brad Grey.
In July 2017, Paramount Players was formed by the studio with the hiring of Brian Robbins, founder of AwesomenessTV, Tollin/Robbins Productions and Varsity Pictures, as the division's president. The division was expected to produce films based on Viacom Media Networks properties including MTV, Nickelodeon, BET and Comedy Central. In June 2017, Paramount Pictures signed a deal with 20th Century Fox for distribution of its films in Italy, which took effect in September. Prior to the deal, Paramount's films in Italy were distributed by Universal Pictures.
On December 7, 2017, it was reported that Paramount sold the international distribution rights of "Annihilation" to Netflix. Netflix subsequently bought the worldwide rights to "The Cloverfield Paradox" for $50 million. On November 16, 2018, Paramount signed a multi-picture film deal with Netflix as part of Viacom's growth strategy, making Paramount the first major film studio to do so. A sequel to Awesomeness Films' "To All the Boys I've Loved Before" is currently in development at the studio for Netflix.
In April 2018, Paramount posted its first quarterly profit since 2015. Bob Bakish, CEO of parent Viacom, said in a statement that turnaround efforts "have firmly taken hold as the studio improved margins and returned to profitability. This month's outstanding box-office performance of "A Quiet Place", the first film produced and released under the new team at Paramount, is a clear sign of our progress."
On September 29, 2016, National Amusements sent a letter to both CBS Corporation and Viacom, encouraging the two companies to merge back into one company. On December 12, the deal was called off. On May 30, 2019, CNBC reported that CBS and Viacom would explore merger discussions in mid-June 2019. CBS and Viacom reportedly set August 8 as an informal deadline for reaching an agreement to recombine the two media companies, with CBS announcing that it would acquire Viacom as part of the re-merger for up to $15.4 billion. On August 2, 2019, the two companies agreed to merge back into one entity, named ViacomCBS; the deal closed on December 4, 2019.
In December 2019, ViacomCBS agreed to purchase a 49% stake in Miramax from beIN Media Group, with Paramount gaining distribution of the studio's 700-film library as well as its future releases. Paramount would also produce television series based on Miramax's IPs. The deal officially closed on April 3, 2020.
In 2006, Paramount became the parent of DreamWorks Pictures. Soros Strategic Partners and Dune Entertainment II soon afterwards acquired a controlling interest in the live-action films released through DreamWorks, ending with the release of "Just Like Heaven" on September 16, 2005. The remaining live-action films released until March 2006 stayed under direct Paramount control. However, Paramount still owns distribution and other ancillary rights to the Soros and Dune films.
On February 8, 2010, Viacom repurchased Soros' controlling stake in DreamWorks' library of films released before 2005 for around $400 million. Even as DreamWorks switched distribution of live-action films not part of existing franchises to Walt Disney Studios Motion Pictures and later Universal Pictures, Paramount continues to own the films released before the merger, along with the films that Paramount itself distributed, including sequel rights such as those to "Little Fockers" (2010), a sequel to two existing DreamWorks films, "Meet the Parents" (2000) and "Meet the Fockers" (2004). Paramount only held the international distribution rights to "Little Fockers", while Universal Pictures handled domestic distribution.
Paramount also owned distribution rights to the DreamWorks Animation library of films made before 2013; its distribution deal for new DWA titles expired at the end of 2012 with "Rise of the Guardians". 20th Century Fox took over distribution for post-2012 titles, beginning with "The Croods" (2013) and ending with "" (2017). Universal Pictures subsequently took over distribution of DreamWorks Animation's films beginning with "" (2019), following NBCUniversal's acquisition of the company in 2016. Paramount's rights to the 2006–2012 DWA library would have expired 16 years after each film's initial theatrical release; however, in July 2014, DreamWorks Animation purchased those distribution rights back from Paramount, with 20th Century Fox distributing the library until January 2018, after which Universal assumed the distribution rights.
Another asset of the former DreamWorks owned by Paramount is the pre-2008 DreamWorks Television library, which is currently distributed by Paramount's sister company CBS Television Distribution; it includes "Spin City", "High Incident", "Freaks and Geeks", "Undeclared" and "On the Lot".
As a result of the Viacom/CBS merger, the independent company Hollywood Classics handles, on Paramount's behalf, the theatrical distribution of all the films produced over the years by the various motion picture divisions of CBS.
Paramount holds outright video distribution rights to the aforementioned CBS library, with some exceptions; less-demanded content is usually released manufactured-on-demand by CBS itself or licensed to Visual Entertainment Inc. Until 2009, the video rights to "My Fair Lady" were with original theatrical distributor Warner Bros., under license from CBS; the video license to that film has since reverted to Paramount.
In March 2012, Paramount licensed its name and logo to a luxury hotel investment group, which subsequently named the company Paramount Hotels and Resorts. The investors planned to build 50 hotels throughout the world based on the themes of Hollywood and the California lifestyle, with features including private screening rooms and the Paramount library available in the hotel rooms. In April 2013, Paramount Hotels and Dubai-based DAMAC Properties announced the building of the first resort: "DAMAC Towers by Paramount."
The distinctively pyramidal Paramount mountain has been the company's logo since its inception and is the oldest surviving Hollywood film logo. In the sound era, the logo was accompanied by a fanfare called "Paramount on Parade" after the film of the same name, released in 1930. The words to the fanfare, originally sung in the 1930 film, were "Proud of the crowd that will never be loud, it's Paramount on Parade."
Legend has it that the mountain is based on a doodle made by W. W. Hodkinson during a meeting with Adolph Zukor. It is said to be based on the memories of his childhood in Utah. Some claim that Utah's Ben Lomond is the mountain Hodkinson doodled, and that Peru's Artesonraju is the mountain in the live-action logo, while others claim that the Italian side of Monviso inspired the logo. Some editions of the logo bear a striking resemblance to the Pfeifferhorn, another Wasatch Range peak, and to the Matterhorn on the border between Switzerland and Italy. Mount Huntington in Alaska also bears a striking resemblance.
The motion picture logo has gone through many changes over the years:
Paramount Studios offers tours of its lot, including a Studio Tour, a VIP Tour and an After Dark Tour. The 2-hour Studio Tour offers a behind-the-scenes look at the current operations of the studio. Most of the buildings on the tour are named for historical Paramount executives or the artists who worked at Paramount over the years. Many of the stars' dressing rooms have been converted into working offices. The stages where "Samson and Delilah", "Sunset Blvd.", "White Christmas", "Rear Window", "Sabrina", "Breakfast at Tiffany's", and many other classic films were shot are still in use today. The studio's backlot set, "New York Street", features numerous blocks of façades that depict a number of New York locales: "Washington Square" (where some scenes in "The Heiress", starring Olivia de Havilland, were shot), "Brooklyn," "Financial District," and others. The 4.5-hour VIP tour takes visitors to additional areas not covered by the standard tour, and includes meetings with archivists and tradesmen. The After Dark Tour involves a tour of the Hollywood Forever Cemetery.
A few years after the ruling in the "United States v. Paramount Pictures, Inc." case in 1948, Music Corporation of America (MCA) approached Paramount offering $50 million for 750 sound feature films released prior to December 1, 1949, with payment to be spread over several years. Paramount saw this as a bargain, since the studio saw very little value in its library of old films at the time. To address any antitrust concerns, MCA set up EMKA, Ltd. as a dummy corporation to sell these films to television. EMKA's/Universal Television's library includes the five Paramount Marx Brothers films, most of the Bob Hope–Bing Crosby "Road to..." pictures, and other classics such as "Trouble in Paradise", "Shanghai Express", "She Done Him Wrong", "Sullivan's Travels", "The Palm Beach Story", "For Whom the Bell Tolls", "Double Indemnity", "The Lost Weekend", and "The Heiress".
The studio has produced many critically acclaimed films such as "Titanic", "Footloose", "Breakfast at Tiffany's", "Braveheart", "Ghost", "The Truman Show", "Mean Girls", "Psycho", "Rocketman", "Ferris Bueller's Day Off", "The Curious Case of Benjamin Button", "Days of Thunder", "Rosemary's Baby", "Nebraska", "Sunset Boulevard", "Forrest Gump", "Super 8", "Coming to America", "World War Z", "Babel", "The Conversation", "The Fighter", "Interstellar", "", "Terms of Endearment", "The Wolf of Wall Street" and "A Quiet Place"; as well as commercially successful franchises and/or properties such as: the "Godfather" films, "Star Trek", "", "SpongeBob SquarePants", the "Grease" films, "Sonic the Hedgehog", the "Top Gun" films, "The Italian Job", the "Transformers" films, the "Teenage Mutant Ninja Turtles" films, the "Tomb Raider" films, the "Friday the 13th" films, the "Cloverfield" films, the "G.I. Joe" films, the "Beverly Hills Cop" films, the "Terminator" films, the "Pet Sematary" films, the "Without a Paddle" films, "Jackass", the "Odd Couple" films, "South Park", the "Crocodile Dundee" films, the "Charlotte's Web" films, the "Wayne's World" films, "Beavis & Butthead", "Jimmy Neutron", the "War of the Worlds" films, the "Naked Gun" films, the "Anchorman" films, "Dora the Explorer", the "Addams Family" films, "Rugrats", the "Zoolander" films, "Æon Flux", the "Ring" films, the "Bad News Bears" films, "The Wild Thornberrys", and the "Paranormal Activity" films; as well as the first four films of the Marvel Cinematic Universe, the "Indiana Jones" films, and various DreamWorks Animation properties (such as "Shrek", the "Madagascar" sequels, the first two "Kung Fu Panda" films, and the first "How to Train Your Dragon") before both studios were respectively acquired by Disney (via Marvel Studios and Lucasfilm) and Universal Studios.
‡—Includes theatrical reissue(s).
On July 31, 2018, Paramount was targeted by the National Hispanic Media Coalition and the National Latino Media Council, which have both claimed that the studio has the worst track record of hiring Latino and Hispanic talent both in front of and behind the camera (the last Paramount film directed by a Spanish director was "Rings" in 2017). In response to the controversy, Paramount released the statement: "We recently met with NHMC in a good faith effort to see how we could partner as we further drive Paramount's culture of diversity, inclusion, and belonging. Under our new leadership team, we continue to make progress — including ensuring representation in front of and behind the camera in upcoming films such as "Dora the Explorer", "Instant Family," "Bumblebee," and "Limited Partners" – and welcome the opportunity to build and strengthen relationships with the Latino creative community further."
The NHMC protested at the Paramount Pictures lot on August 25. More than 60 protesters attended, while chanting "Latinos excluded, time to be included!". NHMC president and CEO Alex Nogales vowed to continue the boycott until the studio signed a memorandum of understanding.
On October 17, the NHMC protested at the Paramount film lot for the second time in two months, with 75 protesters attending. The leaders delivered a petition signed by 12,307 people and addressed it to Jim Gianopulos.
Psychology
Psychology is the science of behavior and mind. Psychology includes the study of conscious and unconscious phenomena, as well as feeling and thought. It is an academic discipline of immense scope. Psychologists seek an understanding of the emergent properties of brains, and all the variety of phenomena linked to those emergent properties, thereby joining the broader group of neuroscience researchers. As a social science it aims to understand individuals and groups by establishing general principles and researching specific cases.
In this field, a professional practitioner or researcher is called a psychologist and can be classified as a social, behavioral, or cognitive scientist. Psychologists attempt to understand the role of mental functions in individual and social behavior, while also exploring the physiological and biological processes that underlie cognitive functions and behaviors.
Psychologists explore behavior and mental processes, including perception, cognition, attention, emotion, intelligence, subjective experiences, motivation, brain functioning, and personality. This extends to interaction between people, such as interpersonal relationships, including psychological resilience, family resilience, and other areas. Psychologists of diverse orientations also consider the unconscious mind. Psychologists employ empirical methods to infer causal and correlational relationships between psychosocial variables. In addition, or in opposition, to employing empirical and deductive methods, some—especially clinical and counseling psychologists—at times rely upon symbolic interpretation and other inductive techniques. Psychology has been described as a "hub science" in that medicine tends to draw psychological research via neurology and psychiatry, whereas the social sciences most commonly draw directly from sub-disciplines within psychology.
While psychological knowledge is often applied to the assessment and treatment of mental health problems, it is also directed towards understanding and solving problems in several spheres of human activity. By many accounts psychology ultimately aims to benefit society. The majority of psychologists are involved in some kind of therapeutic role, practicing in clinical, counseling, or school settings. Many do scientific research on a wide range of topics related to mental processes and behavior, and typically work in university psychology departments or teach in other academic settings (e.g., medical schools, hospitals). Some are employed in industrial and organizational settings, or in other areas such as human development and aging, sports, health, and the media, as well as in forensic investigation and other aspects of law.
The word "psychology" derives from Greek roots meaning study of the psyche, or soul (ψυχή "psychē", "breath, spirit, soul" and -λογία "-logia", "study of" or "research"). The Latin word "psychologia" was first used by the Croatian humanist and Latinist Marko Marulić in his book, "Psichiologia de ratione animae humanae" in the late 15th century or early 16th century. The earliest known reference to the word "psychology" in English was by Steven Blankaart in 1694 in "The Physical Dictionary" which refers to "Anatomy, which treats the Body, and Psychology, which treats of the Soul."
In 1890, William James defined "psychology" as "the science of mental life, both of its phenomena and their conditions". This definition enjoyed widespread currency for decades. However, this meaning was contested, notably by radical behaviorists such as John B. Watson, who in his 1913 manifesto defined the discipline of psychology as the acquisition of information useful to the control of behavior. Moreover, since James defined it, the term has more strongly connoted techniques of scientific experimentation. Folk psychology refers to the understanding of ordinary people, as contrasted with that of psychology professionals.
The ancient civilizations of Egypt, Greece, China, India, and Persia all engaged in the philosophical study of psychology. In Ancient Egypt the Ebers Papyrus mentioned depression and thought disorders. Historians note that Greek philosophers, including Thales, Plato, and Aristotle (especially in his "De Anima" treatise), addressed the workings of the mind. As early as the 4th century BC, Greek physician Hippocrates theorized that mental disorders had physical rather than supernatural causes.
In China, psychological understanding grew from the philosophical works of Laozi and Confucius, and later from the doctrines of Buddhism. This body of knowledge involves insights drawn from introspection and observation, as well as techniques for focused thinking and acting. It frames the universe as a division of, and interaction between, physical reality and mental reality, with an emphasis on purifying the mind in order to increase virtue and power. An ancient text known as "The Yellow Emperor's Classic of Internal Medicine" identifies the brain as the nexus of wisdom and sensation, includes theories of personality based on yin–yang balance, and analyzes mental disorder in terms of physiological and social disequilibria. Chinese scholarship focused on the brain advanced in the Qing Dynasty with the work of Western-educated Fang Yizhi (1611–1671), Liu Zhi (1660–1730), and Wang Qingren (1768–1831). Wang Qingren emphasized the importance of the brain as the center of the nervous system, linked mental disorder with brain diseases, investigated the causes of dreams and insomnia, and advanced a theory of hemispheric lateralization in brain function.
Distinctions in types of awareness appear in the ancient thought of India, influenced by Hinduism. A central idea of the Upanishads is the distinction between a person's transient mundane self and their eternal unchanging soul. Divergent Hindu doctrines, and Buddhism, have challenged this hierarchy of selves, but have all emphasized the importance of reaching higher awareness. Yoga is a range of techniques used in pursuit of this goal. Much of the Sanskrit corpus was suppressed under the British East India Company followed by the British Raj in the 1800s. However, Indian doctrines influenced Western thinking via the Theosophical Society, a New Age group which became popular among Euro-American intellectuals.
Psychology was a popular topic in Enlightenment Europe. In Germany, Gottfried Wilhelm Leibniz (1646–1716) applied his principles of calculus to the mind, arguing that mental activity took place on an indivisible continuum—most notably, that among an infinity of human perceptions and desires, the difference between conscious and unconscious awareness is only a matter of degree. Christian Wolff identified psychology as its own science, writing "Psychologia empirica" in 1732 and "Psychologia rationalis" in 1734. This notion advanced further under Immanuel Kant, who established the idea of anthropology, with psychology as an important subdivision. However, Kant explicitly and notoriously rejected the idea of experimental psychology, writing that "the empirical doctrine of the soul can also never approach chemistry even as a systematic art of analysis or experimental doctrine, for in it the manifold of inner observation can be separated only by mere division in thought, and cannot then be held separate and recombined at will (but still less does another thinking subject suffer himself to be experimented upon to suit our purpose), and even observation by itself already changes and displaces the state of the observed object." In 1783, Ferdinand Ueberwasser (1752-1812) designated himself "Professor of Empirical Psychology and Logic" and gave lectures on scientific psychology, though these developments were soon overshadowed by the Napoleonic Wars, after which the Old University of Münster was discontinued by Prussian authorities. Having consulted philosophers Hegel and Herbart, however, in 1825 the Prussian state established psychology as a mandatory discipline in its rapidly expanding and highly influential educational system. However, this discipline did not yet embrace experimentation. In England, early psychology involved phrenology and the response to social problems including alcoholism, violence, and the country's well-populated mental asylums.
Gustav Fechner began conducting psychophysics research in Leipzig in the 1830s, articulating the principle (Weber–Fechner law) that human perception of a stimulus varies logarithmically according to its intensity. Fechner's 1860 "Elements of Psychophysics" challenged Kant's stricture against quantitative study of the mind. In Heidelberg, Hermann von Helmholtz conducted parallel research on sensory perception, and trained physiologist Wilhelm Wundt. Wundt, in turn, came to Leipzig University, establishing the psychological laboratory which brought experimental psychology to the world. Wundt focused on breaking down mental processes into the most basic components, motivated in part by an analogy to recent advances in chemistry, and its successful investigation of the elements and structure of material. Paul Flechsig and Emil Kraepelin soon created another influential psychology laboratory at Leipzig, this one focused more on experimental psychiatry.
Psychologists in Germany, Denmark, Austria, England, and the United States soon followed Wundt in setting up laboratories. G. Stanley Hall, who studied with Wundt, formed a psychology lab at Johns Hopkins University in Maryland, which became internationally influential. Hall, in turn, trained Yujiro Motora, who brought experimental psychology, emphasizing psychophysics, to the Imperial University of Tokyo. Wundt's assistant, Hugo Münsterberg, taught psychology at Harvard to students such as Narendra Nath Sen Gupta—who, in 1905, founded a psychology department and laboratory at the University of Calcutta. Wundt students Walter Dill Scott, Lightner Witmer, and James McKeen Cattell worked on developing tests for mental ability. Cattell, who also studied with eugenicist Francis Galton, went on to found the Psychological Corporation. Witmer focused on mental testing of children; Scott, on selection of employees.
Another student of Wundt, Edward Titchener, created the psychology program at Cornell University and advanced a doctrine of "structuralist" psychology. Structuralism sought to analyze and classify different aspects of the mind, primarily through the method of introspection. William James, John Dewey and Harvey Carr advanced a more expansive doctrine called functionalism, attuned more to human–environment actions. In 1890, James wrote an influential book, "The Principles of Psychology", which expanded on the realm of structuralism, memorably described the human "stream of consciousness", and interested many American students in the emerging discipline. Dewey integrated psychology with social issues, most notably by promoting the cause of progressive education to assimilate immigrants and inculcate moral values in children.
A different strain of experimentalism, with more connection to physiology, emerged in South America, under the leadership of Horacio G. Piñero at the University of Buenos Aires. Russia, too, placed greater emphasis on the biological basis for psychology, beginning with Ivan Sechenov's 1873 essay, "Who Is to Develop Psychology and How?" Sechenov advanced the idea of brain reflexes and aggressively promoted a deterministic viewpoint on human behavior.
Wolfgang Köhler, Max Wertheimer and Kurt Koffka co-founded the school of Gestalt psychology (not to be confused with the Gestalt therapy of Fritz Perls). This approach is based upon the idea that individuals experience things as unified wholes. Rather than breaking down thoughts and behavior into smaller elements, as in structuralism, the Gestaltists maintained that the whole of experience is important, and differs from the sum of its parts. Other 19th-century contributors to the field include the German psychologist Hermann Ebbinghaus, a pioneer in the experimental study of memory, who developed quantitative models of learning and forgetting at the University of Berlin, and the Russian-Soviet physiologist Ivan Pavlov, who discovered in dogs a learning process that was later termed "classical conditioning" and applied to human beings.
One of the earliest psychology societies was "La Société de Psychologie Physiologique" in France, which lasted 1885–1893. The first meeting of the International Congress of Psychology sponsored by the International Union of Psychological Science took place in Paris, in August 1889, amidst the World's Fair celebrating the centennial of the French Revolution. William James was one of three Americans among the four hundred attendees. The American Psychological Association (APA) was founded soon after, in 1892. The International Congress continued to be held, at different locations in Europe, with wider international participation. The Sixth Congress, Geneva 1909, included presentations in Russian, Chinese, and Japanese, as well as Esperanto. After a hiatus for World War I, the Seventh Congress met in Oxford, with substantially greater participation from the war-victorious Anglo-Americans. In 1929, the Congress took place at Yale University in New Haven, Connecticut, attended by hundreds of members of the APA. Tokyo Imperial University led the way in bringing new psychology to the East, and from Japan these ideas diffused into China.
American psychology gained status during World War I, during which a standing committee headed by Robert Yerkes administered mental tests ("Army Alpha" and "Army Beta") to almost 1.8 million soldiers. Subsequent funding for behavioral research came in large part from the Rockefeller family, via the Social Science Research Council. Rockefeller charities funded the National Committee on Mental Hygiene, which promoted the concept of mental illness and lobbied for psychological supervision of child development. Through the Bureau of Social Hygiene and later funding of Alfred Kinsey, Rockefeller foundations established sex research as a viable discipline in the U.S. Under the influence of the Carnegie-funded Eugenics Record Office, the Draper-funded Pioneer Fund, and other institutions, the eugenics movement also had a significant impact on American psychology; in the 1910s and 1920s, eugenics became a standard topic in psychology classes.
During World War II and the Cold War, the U.S. military and intelligence agencies established themselves as leading funders of psychology—through the armed forces and in the new Office of Strategic Services intelligence agency. University of Michigan psychologist Dorwin Cartwright reported that university researchers began large-scale propaganda research in 1939–1941, and "the last few months of the war saw a social psychologist become chiefly responsible for determining the week-by-week-propaganda policy for the United States Government." Cartwright also wrote that psychologists had significant roles in managing the domestic economy. The Army rolled out its new General Classification Test and engaged in massive studies of troop morale. In the 1950s, the Rockefeller Foundation and Ford Foundation collaborated with the Central Intelligence Agency (CIA) to fund research on psychological warfare. In 1965, public controversy called attention to the Army's Project Camelot—the "Manhattan Project" of social science—an effort which enlisted psychologists and anthropologists to analyze foreign countries for strategic purposes.
In Germany after World War I, psychology held institutional power through the military, and subsequently expanded along with the rest of the military under the Third Reich. Under the direction of Hermann Göring's cousin Matthias Göring, the Berlin Psychoanalytic Institute was renamed the Göring Institute. Freudian psychoanalysts were expelled and persecuted under the anti-Jewish policies of the Nazi Party, and all psychologists had to distance themselves from Freud and Adler. The Göring Institute was well-financed throughout the war with a mandate to create a "New German Psychotherapy". This psychotherapy aimed to align suitable Germans with the overall goals of the Reich; as described by one physician: "Despite the importance of analysis, spiritual guidance and the active cooperation of the patient represent the best way to overcome individual mental problems and to subordinate them to the requirements of the "Volk" and the "Gemeinschaft"." Psychologists were to provide "Seelenführung", leadership of the mind, to integrate people into the new vision of a German community. Harald Schultz-Hencke melded psychology with the Nazi theory of biology and racial origins, criticizing psychoanalysis as a study of the weak and deformed. Johannes Heinrich Schultz, a German psychologist recognized for developing the technique of autogenic training, prominently advocated sterilization and euthanasia of men considered genetically undesirable, and devised techniques for facilitating this process. After the war, some new institutions were created and some psychologists were discredited due to Nazi affiliation. Alexander Mitscherlich founded a prominent applied psychoanalysis journal called "Psyche" and with funding from the Rockefeller Foundation established the first clinical psychosomatic medicine division at Heidelberg University. In 1970, psychology was integrated into the required studies of medical students.
After the Russian Revolution, psychology was heavily promoted by the Bolsheviks as a way to engineer the "New Man" of socialism. Thus, university psychology departments trained large numbers of students, for whom positions were made available at schools, workplaces, cultural institutions, and in the military. A special focus was pedology, the study of child development, regarding which Lev Vygotsky became a prominent writer. The Bolsheviks also promoted free love and embraced the doctrine of psychoanalysis as an antidote to sexual repression. Although pedology and intelligence testing fell out of favor in 1936, psychology maintained its privileged position as an instrument of the Soviet state. Stalinist purges took a heavy toll and instilled a climate of fear in the profession, as elsewhere in Soviet society. Following World War II, Jewish psychologists past and present (including Lev Vygotsky, A.R. Luria, and Aron Zalkind) were denounced; Ivan Pavlov (posthumously) and Stalin himself were aggrandized as heroes of Soviet psychology. Soviet academia liberalized rapidly during the Khrushchev Thaw, and cybernetics, linguistics, genetics, and other topics became acceptable again. There emerged a new field called "engineering psychology" which studied mental aspects of complex jobs (such as pilot and cosmonaut). Interdisciplinary studies became popular and scholars such as Georgy Shchedrovitsky developed systems theory approaches to human behavior.
Twentieth-century Chinese psychology was originally modeled on U.S. psychology, with translations from American authors like William James, the establishment of university psychology departments and journals, and the establishment of groups including the Chinese Association of Psychological Testing (1930) and the Chinese Psychological Society (1937). Chinese psychologists were encouraged to focus on education and language learning, with the aspiration that education would enable modernization and nationalization. John Dewey, who lectured to Chinese audiences in 1918–1920, had a significant influence on this doctrine. Chancellor T'sai Yuan-p'ei introduced him at Peking University as a greater thinker than Confucius. Kuo Zing-yang, who received a PhD at the University of California, Berkeley, became President of Zhejiang University and popularized behaviorism. After the Chinese Communist Party gained control of the country, the Stalinist Soviet Union became the leading influence, with Marxism–Leninism the leading social doctrine and Pavlovian conditioning the approved concept of behavior change. Chinese psychologists elaborated on Lenin's model of a "reflective" consciousness, envisioning an "active consciousness" able to transcend material conditions through hard work and ideological struggle. They developed a concept of "recognition" which referred to the interface between individual perceptions and the socially accepted worldview (failure to correspond with party doctrine was "incorrect recognition"). Psychology education was centralized under the Chinese Academy of Sciences, supervised by the State Council. In 1951, the Academy created a Psychology Research Office, which in 1956 became the Institute of Psychology. Most leading psychologists were educated in the United States, and the first concern of the Academy was re-education of these psychologists in the Soviet doctrines.
Child psychology and pedagogy for nationally cohesive education remained a central goal of the discipline.
In 1920, Édouard Claparède and Pierre Bovet created a new applied psychology organization called the International Congress of Psychotechnics Applied to Vocational Guidance, later called the International Congress of Psychotechnics and then the International Association of Applied Psychology. The IAAP is considered the oldest international psychology association. Today, at least 65 international groups deal with specialized aspects of psychology. In response to male predominance in the field, female psychologists in the U.S. formed the National Council of Women Psychologists in 1941. This organization became the International Council of Women Psychologists after World War II, and the International Council of Psychologists in 1959. Several associations including the Association of Black Psychologists and the Asian American Psychological Association have arisen to promote non-European racial groups in the profession.
The world federation of national psychological societies is the International Union of Psychological Science (IUPsyS), founded in 1951 under the auspices of UNESCO, the United Nations cultural and scientific authority. Psychology departments have since proliferated around the world, based primarily on the Euro-American model. Since 1966, the Union has published the "International Journal of Psychology". IAAP and IUPsyS agreed in 1976 each to hold a congress every four years, on a staggered basis.
The International Union recognizes 66 national psychology associations and at least 15 others exist. The American Psychological Association is the oldest and largest. Its membership has increased from 5,000 in 1945 to 100,000 in the present day. The APA includes 54 divisions, which since 1960 have steadily proliferated to include more specialties. Some of these divisions, such as the Society for the Psychological Study of Social Issues and the American Psychology–Law Society, began as autonomous groups.
The Interamerican Society of Psychology, founded in 1951, aspires to promote psychology and coordinate psychologists across the Western Hemisphere. It holds the Interamerican Congress of Psychology and had 1,000 members in the year 2000. The European Federation of Professional Psychology Associations, founded in 1981, represents 30 national associations with a total of 100,000 individual members. At least 30 other international groups organize psychologists in different regions.
In some places, governments legally regulate who can provide psychological services or represent themselves as a "psychologist". The APA defines a psychologist as someone with a doctoral degree in psychology.
Early practitioners of experimental psychology distinguished themselves from parapsychology, which in the late nineteenth century enjoyed great popularity (including the interest of scholars such as William James), and indeed constituted the bulk of what people called "psychology". Parapsychology, hypnotism, and psychism were major topics of the early International Congresses. But students of these fields were eventually ostracized, and more or less banished from the Congress in 1900–1905. Parapsychology persisted for a time at Imperial University, with publications such as "Clairvoyance and Thoughtography" by Tomokichi Fukurai, but here too it was mostly shunned by 1913.
As a discipline, psychology has long sought to fend off accusations that it is a "soft" science. Philosopher of science Thomas Kuhn's 1962 critique implied psychology overall was in a pre-paradigm state, lacking the agreement on overarching theory found in mature sciences such as chemistry and physics. Because some areas of psychology rely on research methods such as surveys and questionnaires, critics asserted that psychology is not an objective science. Skeptics have suggested that personality, thinking, and emotion cannot be directly measured and are often inferred from subjective self-reports, which may be problematic. Experimental psychologists have devised a variety of ways to indirectly measure these elusive phenomenological entities.
Divisions still exist within the field, with some psychologists more oriented towards the unique experiences of individual humans, which cannot be understood only as data points within a larger population. Critics inside and outside the field have argued that mainstream psychology has become increasingly dominated by a "cult of empiricism" which limits the scope of its study by using only methods derived from the physical sciences. Feminist critiques along these lines have argued that claims to scientific objectivity obscure the values and agenda of (historically mostly male) researchers. Jean Grimshaw, for example, argues that mainstream psychological research has advanced a patriarchal agenda through its efforts to control behavior.
Psychologists generally consider the organism the basis of the mind, and therefore a vitally related area of study. Psychiatrists and neuropsychologists work at the interface of mind and body.
Biological psychology, also known as physiological psychology or neuropsychology, is the study of the biological substrates of behavior and mental processes. Key research topics in this field include comparative psychology, which studies humans in relation to other animals, and perception, which involves the physical mechanics of sensation as well as neural and mental processing. For centuries, a leading question in biological psychology has been whether and how mental functions might be localized in the brain. From Phineas Gage to H.M. and Clive Wearing, individual people with mental issues traceable to physical damage have inspired new discoveries in this area. Modern neuropsychology could be said to originate in the 1870s, when, in France, Paul Broca traced production of speech to the left frontal gyrus, thereby also demonstrating hemispheric lateralization of brain function. Soon after, Carl Wernicke identified a related area necessary for the understanding of speech.
The contemporary field of behavioral neuroscience focuses on physical causes underpinning behavior. For example, physiological psychologists use animal models, typically rats, to study the neural, genetic, and cellular mechanisms that underlie specific behaviors such as learning and memory and fear responses. Cognitive neuroscientists investigate the neural correlates of psychological processes in humans using neural imaging tools, and neuropsychologists conduct psychological assessments to determine, for instance, specific aspects and extent of cognitive deficit caused by brain damage or disease. The biopsychosocial model is an integrated perspective toward understanding consciousness, behavior, and social interaction. It assumes that any given behavior or mental process affects and is affected by dynamically interrelated biological, psychological, and social factors.
Evolutionary psychology examines cognition and personality traits from an evolutionary perspective. This perspective suggests that psychological adaptations evolved to solve recurrent problems in human ancestral environments. Evolutionary psychology offers complementary explanations for the mostly proximate or developmental explanations developed by other areas of psychology: that is, it focuses mostly on ultimate or "why?" questions, rather than proximate or "how?" questions. "How?" questions are more directly tackled by behavioral genetics research, which aims to understand how genes and environment impact behavior.
The search for biological origins of psychological phenomena has long involved debates about the importance of race, and especially the relationship between race and intelligence. The idea of white supremacy and indeed the modern concept of race itself arose during the process of world conquest by Europeans. Carl von Linnaeus's four-fold classification of humans characterized Europeans as intelligent and severe, Americans as contented and free, Asians as ritualistic, and Africans as lazy and capricious. Race was also used to justify the construction of socially specific mental disorders such as "drapetomania" and "dysaesthesia aethiopica"—the behavior of uncooperative African slaves. After the creation of experimental psychology, "ethnical psychology" emerged as a subdiscipline, based on the assumption that studying primitive races would provide an important link between animal behavior and the psychology of more evolved humans.
Psychologists take human behavior as a main area of study. Much of the research in this area began with tests on mammals, based on the idea that humans exhibit similar fundamental tendencies. Behavioral research has consistently aspired to improve the effectiveness of techniques for behavior modification.
Early behavioral researchers studied stimulus–response pairings, now known as classical conditioning. They demonstrated that behaviors could be linked through repeated association with stimuli eliciting pain or pleasure. Ivan Pavlov—best known for inducing dogs to salivate in the presence of a stimulus previously linked with food—became a leading figure in the Soviet Union and inspired followers to use his methods on humans. In the United States, Edward Lee Thorndike initiated "connectionism" studies by trapping animals in "puzzle boxes" and rewarding them for escaping. Thorndike wrote in 1911: "There can be no moral warrant for studying man's nature unless the study will enable us to control his acts." From 1910 to 1913 the American Psychological Association went through a sea change of opinion, away from mentalism and towards "behavioralism", and in 1913 John B. Watson coined the term behaviorism for this school of thought. Watson's famous Little Albert experiment in 1920 demonstrated that repeated use of upsetting loud noises could instill phobias (aversions to other stimuli) in an infant human. Karl Lashley, a close collaborator with Watson, examined biological manifestations of learning in the brain.
Embraced and extended by Clark L. Hull, Edwin Guthrie, and others, behaviorism became a widely used research paradigm. A new method of "instrumental" or "operant" conditioning added the concepts of reinforcement and punishment to the model of behavior change. Radical behaviorists avoided discussing the inner workings of the mind, especially the unconscious mind, which they considered impossible to assess scientifically. Operant conditioning was first described by Miller and Konorski and popularized in the U.S. by B.F. Skinner, who emerged as a leading intellectual of the behaviorist movement.
Noam Chomsky delivered an influential critique of radical behaviorism on the grounds that it could not adequately explain the complex mental process of language acquisition. Martin Seligman and colleagues discovered that the conditioning of dogs led to outcomes ("learned helplessness") that opposed the predictions of behaviorism. Skinner's behaviorism did not die, perhaps in part because it generated successful practical applications. Edward C. Tolman advanced a hybrid "cognitive behavioral" model, most notably with his 1948 publication discussing the cognitive maps used by rats to guess at the location of food at the end of a modified maze.
The Association for Behavior Analysis International was founded in 1974 and by 2003 had members from 42 countries. The field has been especially influential in Latin America, where it has a regional organization known as ALAMOC: "La Asociación Latinoamericana de Análisis y Modificación del Comportamiento". Behaviorism also gained a strong foothold in Japan, where it gave rise to the Japanese Society of Animal Psychology (1933), the Japanese Association of Special Education (1963), the Japanese Society of Biofeedback Research (1973), the Japanese Association for Behavior Therapy (1976), the Japanese Association for Behavior Analysis (1979), and the Japanese Association for Behavioral Science Research (1994). Today the field of behaviorism is also commonly referred to as behavior modification or behavior analysis.
[Figure: two lists of color words (Green, Red, Blue, Purple, Blue, Purple; Blue, Purple, Red, Green, Purple, Green), the first printed in ink colors matching the words and the second in mismatched colors.]
The Stroop effect refers to the fact that naming the color of the first set of words is easier and quicker than the second.
Cognitive psychology studies cognition, the mental processes underlying mental activity. Perception, attention, reasoning, thinking, problem solving, memory, learning, language, and emotion are areas of research. Classical cognitive psychology is associated with a school of thought known as cognitivism, whose adherents argue for an information processing model of mental function, informed by functionalism and experimental psychology.
Starting in the 1950s, the experimental techniques developed by Wundt, James, Ebbinghaus, and others re-emerged as experimental psychology became increasingly cognitivist—concerned with information and its processing—and, eventually, constituted a part of the wider cognitive science. Some called this development the cognitive revolution because it rejected the anti-mentalist dogma of behaviorism as well as the strictures of psychoanalysis.
Social learning theorists, such as Albert Bandura, argued that the child's environment could make contributions of its own to the behaviors of an observant subject.
Technological advances also renewed interest in mental states and representations. English neuroscientist Charles Sherrington and Canadian psychologist Donald O. Hebb used experimental methods to link psychological phenomena with the structure and function of the brain. The rise of computer science, cybernetics and artificial intelligence suggested the value of comparatively studying information processing in humans and machines. Research in cognition had proven practical since World War II, when it aided in the understanding of weapons operation.
A popular and representative topic in this area is cognitive bias, or irrational thought. Psychologists (and economists) have classified and described a sizeable catalogue of biases which recur frequently in human thought. The availability heuristic, for example, is the tendency to overestimate the importance of something which happens to come readily to mind.
Elements of behaviorism and cognitive psychology were synthesized to form cognitive behavioral therapy, a form of psychotherapy modified from techniques developed by American psychologist Albert Ellis and American psychiatrist Aaron T. Beck.
On a broader level, cognitive science is an interdisciplinary enterprise involving cognitive psychologists, cognitive neuroscientists, researchers in artificial intelligence and human–computer interaction, computational neuroscientists, linguists, logicians, and social scientists. The discipline of cognitive science covers cognitive psychology as well as philosophy of mind, computer science, and neuroscience. Computer simulations are sometimes used to model phenomena of interest.
Social psychology is the study of how humans think about each other and how they relate to each other. Social psychologists study such topics as the influence of others on an individual's behavior (e.g. conformity, persuasion), and the formation of beliefs, attitudes, and stereotypes about other people. Social cognition fuses elements of social and cognitive psychology in order to understand how people process, remember, or distort social information. The study of group dynamics reveals information about the nature and potential optimization of leadership, communication, and other phenomena that emerge at the microsocial level. In recent years, many social psychologists have become increasingly interested in implicit measures, mediational models, and the interaction of both person and social variables in accounting for behavior. The study of human society is therefore a potentially valuable source of information about the causes of psychiatric disorder. Some sociological concepts applied to psychiatric disorders are the social role, sick role, social class, life event, culture, migration, social institution, and total institution.
Psychoanalysis comprises a method of investigating the mind and interpreting experience; a systematized set of theories about human behavior; and a form of psychotherapy to treat psychological or emotional distress, especially conflict originating in the unconscious mind. This school of thought originated in the 1890s with Austrian medical doctors including Josef Breuer, Alfred Adler, Otto Rank, and, most prominently, the neurologist Sigmund Freud. Freud's psychoanalytic theory was largely based on interpretive methods, introspection, and clinical observations. It became very well known, largely because it tackled subjects such as sexuality, repression, and the unconscious. These subjects were largely taboo at the time, and Freud provided a catalyst for their open discussion in polite society. Clinically, Freud helped to pioneer the method of free association and a therapeutic interest in dream interpretation.
Swiss psychiatrist Carl Jung, influenced by Freud, elaborated a theory of the collective unconscious—a primordial force present in all humans, featuring archetypes which exerted a profound influence on the mind. Jung's competing vision formed the basis for analytical psychology, which later led to the archetypal and process-oriented schools. Other well-known psychoanalytic scholars of the mid-20th century include Erik Erikson, Melanie Klein, D.W. Winnicott, Karen Horney, Erich Fromm, John Bowlby, and Sigmund Freud's daughter, Anna Freud. Throughout the 20th century, psychoanalysis evolved into diverse schools of thought which could be called Neo-Freudian. Among these schools are ego psychology, object relations, and interpersonal, Lacanian, and relational psychoanalysis.
Psychologists such as Hans Eysenck and philosophers including Karl Popper criticized psychoanalysis. Popper argued that psychoanalysis had been misrepresented as a scientific discipline, whereas Eysenck said that psychoanalytic tenets had been contradicted by experimental data. By the end of the 20th century, psychology departments in American universities mostly marginalized Freudian theory, dismissing it as a "desiccated and dead" historical artifact. However, researchers in the emerging field of neuro-psychoanalysis today defend some of Freud's ideas on scientific grounds, while scholars of the humanities maintain that Freud was not a "scientist at all, but ... an interpreter".
Humanistic psychology developed in the 1950s as a movement within academic psychology, in reaction to both behaviorism and psychoanalysis. The humanistic approach sought to glimpse the whole person, not just fragmented parts of the personality or isolated cognitions. Humanism focused on uniquely human issues, such as free will, personal growth, self-actualization, self-identity, death, aloneness, freedom, and meaning. It emphasized subjective meaning, rejection of determinism, and concern for positive growth rather than pathology. Some founders of the humanistic school of thought were American psychologists Abraham Maslow, who formulated a hierarchy of human needs, and Carl Rogers, who created and developed client-centered therapy. Later, positive psychology opened up humanistic themes to scientific modes of exploration.
The "American Association for Humanistic Psychology", formed in 1963, declared:
Humanistic psychology is primarily an orientation toward the whole of psychology rather than a distinct area or school. It stands for respect for the worth of persons, respect for differences of approach, open-mindedness as to acceptable methods, and interest in exploration of new aspects of human behavior. As a "third force" in contemporary psychology, it is concerned with topics having little place in existing theories and systems: e.g., love, creativity, self, growth, organism, basic need-gratification, self-actualization, higher values, being, becoming, spontaneity, play, humor, affection, naturalness, warmth, ego-transcendence, objectivity, autonomy, responsibility, meaning, fair-play, transcendental experience, peak experience, courage, and related concepts.
In the 1950s and 1960s, influenced by philosophers Søren Kierkegaard and Martin Heidegger, the psychoanalytically trained American psychologist Rollo May pioneered an existential branch of psychology, which included existential psychotherapy: a method based on the belief that inner conflict within a person is due to that individual's confrontation with the givens of existence. Swiss psychoanalyst Ludwig Binswanger and American psychologist George Kelly may also be said to belong to the existential school. Existential psychologists differed from more "humanistic" psychologists in their relatively neutral view of human nature and their relatively positive assessment of anxiety. Existential psychologists emphasized the humanistic themes of death, free will, and meaning, suggesting that meaning can be shaped by myths, or narrative patterns, and that it can be encouraged by an acceptance of the free will requisite to an authentic, albeit often anxious, regard for death and other future prospects.
Austrian existential psychiatrist and Holocaust survivor Viktor Frankl drew evidence of meaning's therapeutic power from reflections garnered from his own internment. He created a variation of existential psychotherapy called logotherapy, a type of existentialist analysis that focuses on a "will to meaning" (in one's life), as opposed to Adler's Nietzschean doctrine of "will to power" or Freud's "will to pleasure".
Personality psychology is concerned with enduring patterns of behavior, thought, and emotion—commonly referred to as personality—in individuals. Theories of personality vary across different psychological schools and orientations. They carry different assumptions about such issues as the role of the unconscious and the importance of childhood experience. According to Freud, personality is based on the dynamic interactions of the id, ego, and super-ego. Trait theorists, in contrast, attempt to develop a taxonomy of personality constructs by describing the personality sphere in terms of a discrete number of key traits, using the statistical data-reduction method of factor analysis. Although the number of proposed traits has varied widely, an early biologically-based model proposed by Hans Eysenck, the third most highly cited psychologist of the 20th century (after Freud and Piaget), suggested that at least three major trait constructs are necessary to describe human personality structure: extraversion–introversion, neuroticism–stability, and psychoticism–normality. Raymond Cattell, the seventh most highly cited psychologist of the 20th century (based on the scientific peer-reviewed journal literature), empirically derived a theory of 16 personality factors at the primary-factor level, and up to 8 broader second-stratum factors (at the Eysenckian level of analysis), rather than the "Big Five" dimensions. Dimensional models of personality are receiving increasing support, and a version of dimensional assessment has been included in the DSM-5. However, despite a plethora of research into the various versions of the "Big Five" personality dimensions, it appears necessary to move on from static conceptualizations of personality structure to a more dynamic orientation, acknowledging that personality constructs are subject to learning and change across the lifespan.
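The data reduction underlying trait theory can be illustrated with a brief sketch. The data below are simulated (six hypothetical questionnaire items generated from two latent traits; none of the numbers come from any real study), and the eigenvalue-greater-than-one rule shown is only one of several conventions for deciding how many factors to retain:

```python
import numpy as np
from numpy.linalg import eigh

# Simulated questionnaire: 200 respondents, 6 items.
# Items 0-2 load on one latent trait, items 3-5 on another
# (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
traits = rng.standard_normal((200, 2))
loadings = np.array([[1.0, 0.0], [0.9, 0.0], [0.8, 0.0],
                     [0.0, 1.0], [0.0, 0.9], [0.0, 0.8]])
items = traits @ loadings.T + 0.3 * rng.standard_normal((200, 6))

# Eigendecompose the item correlation matrix and retain the
# components with eigenvalues above 1 (the Kaiser criterion).
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = eigh(corr)
n_factors = int((eigvals > 1.0).sum())
print(n_factors)
```

Because the six items were generated from two underlying traits, the reduction recovers two dominant factors; with real questionnaire data, the retained factors are then rotated and interpreted as trait dimensions.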
An early example of personality assessment was the Woodworth Personal Data Sheet, constructed during World War I. The popular, although psychometrically inadequate, Myers–Briggs Type Indicator sought to assess individuals' "personality types" according to the personality theories of Carl Jung. Behaviorist resistance to introspection led to the development of the Strong Vocational Interest Blank and Minnesota Multiphasic Personality Inventory (MMPI), in an attempt to ask empirical questions that focused less on the psychodynamics of the respondent. However, the MMPI has been subjected to critical scrutiny, given that it adhered to archaic psychiatric nosology, and since it required individuals to provide subjective, introspective responses to the hundreds of items pertaining to psychopathology.
Study of the unconscious mind, a part of the psyche outside the awareness of the individual which nevertheless influences thoughts and behavior, was a hallmark of early psychology. In one of the first psychology experiments conducted in the United States, C.S. Peirce and Joseph Jastrow found in 1884 that subjects could choose the minutely heavier of two weights even if consciously uncertain of the difference. Freud popularized this concept, with terms like Freudian slip entering popular culture, to mean an uncensored intrusion of unconscious thought into one's speech and action. His 1901 text "The Psychopathology of Everyday Life" catalogues hundreds of everyday events which Freud explains in terms of unconscious influence. Pierre Janet advanced the idea of a subconscious mind, which could contain autonomous mental elements unavailable to the scrutiny of the subject.
Behaviorism notwithstanding, the unconscious mind has maintained its importance in psychology. Cognitive psychologists have used a "filter" model of attention, according to which much information processing takes place below the threshold of consciousness, and only certain processes, limited by nature and by simultaneous quantity, make their way through the filter. Copious research has shown that subconscious "priming" of certain ideas can covertly influence thoughts and behavior. A significant hurdle in this research is proving that a subject's conscious mind has not grasped a certain stimulus, due to the unreliability of self-reporting. For this reason, some psychologists prefer to distinguish between "implicit" and "explicit" memory. In another approach, one can also describe a subliminal stimulus as meeting an "objective" but not a "subjective" threshold.
The automaticity model, which became widespread following exposition by John Bargh and others in the 1980s, describes sophisticated processes for executing goals which can be selected and performed over an extended duration without conscious awareness. Some experimental data suggests that the brain begins to consider taking actions before the mind becomes aware of them. This influence of unconscious forces on people's choices naturally bears on philosophical questions about free will. John Bargh, Daniel Wegner, and Ellen Langer are some prominent contemporary psychologists who describe free will as an illusion.
Psychologists such as William James initially used the term "motivation" to refer to intention, in a sense similar to the concept of "will" in European philosophy. With the steady rise of Darwinian and Freudian thinking, instinct also came to be seen as a primary source of motivation. According to drive theory, the forces of instinct combine into a single source of energy which exerts a constant influence. Psychoanalysis, like biology, regarded these forces as physical demands made by the organism on the nervous system. However, they believed that these forces, especially the sexual instincts, could become entangled and transmuted within the psyche. Classical psychoanalysis conceives of a struggle between the pleasure principle and the reality principle, roughly corresponding to id and ego. Later, in "Beyond the Pleasure Principle", Freud introduced the concept of the "death drive", a compulsion towards aggression, destruction, and psychic repetition of traumatic events. Meanwhile, behaviorist researchers used simple dichotomous models (pleasure/pain, reward/punishment) and well-established principles such as the idea that a thirsty creature will take pleasure in drinking. Clark Hull formalized the latter idea with his drive reduction model.
Hunger, thirst, fear, sexual desire, and thermoregulation all seem to constitute fundamental motivations for animals. Humans also seem to exhibit a more complex set of motivations—though theoretically these could be explained as resulting from primordial instincts—including desires for belonging, self-image, self-consistency, truth, love, and control.
Motivation can be modulated or manipulated in many different ways. Researchers have found that eating, for example, depends not only on the organism's fundamental need for homeostasis—an important factor causing the experience of hunger—but also on circadian rhythms, food availability, food palatability, and cost. Abstract motivations are also malleable, as evidenced by such phenomena as "goal contagion": the adoption of goals, sometimes unconsciously, based on inferences about the goals of others. Vohs and Baumeister suggest that contrary to the need-desire-fulfilment cycle of animal instincts, human motivations sometimes obey a "getting begets wanting" rule: the more you get a reward such as self-esteem, love, drugs, or money, the more you want it. They suggest that this principle can even apply to food, drink, sex, and sleep.
Mainly focusing on the development of the human mind through the life span, developmental psychology seeks to understand how people come to perceive, understand, and act within the world and how these processes change as they age. This may focus on cognitive, affective, moral, social, or neural development. Researchers who study children use a number of unique research methods to make observations in natural settings or to engage them in experimental tasks. Such tasks often resemble specially designed games and activities that are both enjoyable for the child and scientifically useful, and researchers have even devised clever methods to study the mental processes of infants. In addition to studying children, developmental psychologists also study aging and processes throughout the life span, especially at other times of rapid change (such as adolescence and old age). Developmental psychologists draw on the full range of psychological theories to inform their research.
All researched psychological traits are influenced by both genes and environment, to varying degrees. These two sources of influence are often confounded in observational research of individuals or families. An example is the transmission of depression from a depressed mother to her offspring. Theory may hold that the offspring, by virtue of having a depressed mother in his or her (the offspring's) environment, is at risk for developing depression. However, risk for depression is also influenced to some extent by genes. The mother may carry genes that contribute to her depression and may also have passed those genes on to her offspring, thus increasing the offspring's risk for depression. Genes and environment in this simple transmission model are completely confounded. Experimental and quasi-experimental behavioral genetic research uses genetic methodologies to disentangle this confound and understand the nature and origins of individual differences in behavior. Traditionally this research has been conducted using twin studies and adoption studies, two designs where genetic and environmental influences can be partially un-confounded. More recently, the availability of microarray molecular genetic or genome sequencing technologies allows researchers to measure participant DNA variation directly, and test whether individual genetic variants within genes are associated with psychological traits and psychopathology through methods including genome-wide association studies. One goal of such research is similar to that in positional cloning and its success with Huntington's disease: once a causal gene is discovered, biological research can be conducted to understand how that gene influences the phenotype.
One major result of genetic association studies is the general finding that psychological traits and psychopathology, as well as complex medical diseases, are highly polygenic, where a large number (on the order of hundreds to thousands) of genetic variants, each of small effect, contribute to individual differences in the behavioral trait or propensity to the disorder. Active research continues to understand the genetic and environmental bases of behavior and their interaction.
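One way such polygenic influence is commonly summarized is a polygenic score: a weighted sum of an individual's allele counts across many variants, with each weight estimated from a genome-wide association study. The sketch below uses entirely hypothetical variants and effect sizes:

```python
def polygenic_score(allele_counts, gwas_weights):
    """Polygenic score: per-variant allele counts (0, 1, or 2 copies of the
    effect allele) weighted by effect sizes from a reference GWAS."""
    if len(allele_counts) != len(gwas_weights):
        raise ValueError("one weight per variant is required")
    return sum(a * w for a, w in zip(allele_counts, gwas_weights))

# Hypothetical individual genotyped at five trait-associated variants
counts = [2, 0, 1, 1, 2]                     # copies of the effect allele
weights = [0.03, -0.01, 0.02, 0.05, -0.02]   # hypothetical GWAS effect sizes
print(polygenic_score(counts, weights))
```

In real applications the sum runs over hundreds of thousands of variants, each contributing a very small amount, which is exactly the polygenic pattern described above.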
Psychology encompasses many subfields and includes different approaches to the study of mental processes and behavior:
Psychological testing has ancient origins, such as examinations for the Chinese civil service dating back to 2200 BC. Written exams began during the Han dynasty (202 BC – AD 200). By 1370, the Chinese system required a stratified series of tests, involving essay writing and knowledge of diverse topics. The system was ended in 1906. In Europe, mental assessment took a more physiological approach, with theories of physiognomy—judgment of character based on the face—described by Aristotle in 4th century BC Greece. Physiognomy remained current through the Enlightenment and was later supplemented by the doctrine of phrenology: a study of mind and intelligence based on simple assessment of neuroanatomy.
When experimental psychology came to Britain, Francis Galton was a leading practitioner, and, with his procedures for measuring reaction time and sensation, is considered an inventor of modern mental testing (also known as "psychometrics"). James McKeen Cattell, a student of Wundt and Galton, brought the concept to the United States, and in fact coined the term "mental test". In 1901, Cattell's student Clark Wissler published discouraging results, suggesting that mental testing of Columbia and Barnard students failed to predict their academic performance. In response to 1904 orders from the Minister of Public Instruction, French psychologists Alfred Binet and Théodore Simon elaborated a new test of intelligence in 1905–1911, using a range of questions diverse in their nature and difficulty. Binet and Simon introduced the concept of mental age and referred to the lowest scorers on their test as "idiots". Henry H. Goddard put the Binet-Simon scale to work and introduced classifications of mental level such as "imbecile" and "feebleminded". In 1916 (after Binet's death), Stanford professor Lewis M. Terman modified the Binet-Simon scale (renamed the Stanford–Binet scale) and introduced the intelligence quotient as a score report. From this test, Terman concluded that mental retardation "represents the level of intelligence which is very, very common among Spanish-Indians and Mexican families of the Southwest and also among negroes. Their dullness seems to be racial."
Following the Army Alpha and Army Beta tests for soldiers in World War I, mental testing became popular in the US, where it was soon applied to school children. The federally created National Intelligence Test was administered to 7 million children in the 1920s, and in 1926 the College Entrance Examination Board created the Scholastic Aptitude Test to standardize college admissions. The results of intelligence tests were used to argue for segregated schools and economic functions—i.e. the preferential training of Black Americans for manual labor. These practices were criticized by black intellectuals such as Horace Mann Bond and Allison Davis. Eugenicists used mental testing to justify and organize compulsory sterilization of individuals classified as mentally retarded. In the United States, tens of thousands of men and women were sterilized. Setting a precedent which has never been overturned, the U.S. Supreme Court affirmed the constitutionality of this practice in the 1927 case "Buck v. Bell".
Today mental testing is a routine phenomenon for people of all ages in Western societies. Modern testing aspires to criteria including standardization of procedure, consistency of results, output of an interpretable score, statistical norms describing population outcomes, and, ideally, effective prediction of behavior and life outcomes outside of testing situations.
The provision of psychological health services is generally called "clinical psychology" in the U.S. The definitions of this term are various and may include school psychology and counseling psychology. Practitioners typically include people who have graduated from doctoral programs in clinical psychology, but may also include others. In Canada, the above groups usually fall within the larger category of "professional psychology". In Canada and the US, practitioners get bachelor's degrees and doctorates, then spend one year in an internship and one year in postdoctoral education. In Mexico and most other Latin American and European countries, psychologists do not get bachelor's and doctorate degrees; instead, they take a three-year professional course following high school. Clinical psychology is at present the largest specialization within psychology. It includes the study and application of psychology for the purpose of understanding, preventing, and relieving psychologically based distress, dysfunction, or mental illness, and to promote subjective well-being and personal development. Central to its practice are psychological assessment and psychotherapy, although clinical psychologists may also engage in research, teaching, consultation, forensic testimony, and program development and administration.
Credit for the first psychology clinic in the United States typically goes to Lightner Witmer, who established his practice in Philadelphia in 1896. Another modern psychotherapist was Morton Prince. For the most part, in the first part of the twentieth century, most mental health care in the United States was performed by specialized medical doctors called psychiatrists. Psychology entered the field with its refinements of mental testing, which promised to improve diagnosis of mental problems. For their part, some psychiatrists became interested in using psychoanalysis and other forms of psychodynamic psychotherapy to understand and treat the mentally ill. In this type of treatment, a specially trained therapist develops a close relationship with the patient, who discusses wishes, dreams, social relationships, and other aspects of mental life. The therapist seeks to uncover repressed material and to understand why the patient creates defenses against certain thoughts and feelings. An important aspect of the therapeutic relationship is transference, in which deep unconscious feelings in a patient reorient themselves and become manifest in relation to the therapist.
Psychiatric psychotherapy blurred the distinction between psychiatry and psychology, and this trend continued with the rise of community mental health facilities and behavioral therapy, a thoroughly non-psychodynamic model which used behaviorist learning theory to change the actions of patients. A key aspect of behavior therapy is empirical evaluation of the treatment's effectiveness. In the 1970s, cognitive-behavior therapy arose, using similar methods and now including the cognitive constructs which had gained popularity in theoretical psychology. A key practice in behavioral and cognitive-behavioral therapy is exposing patients to things they fear, based on the premise that their responses (fear, panic, anxiety) can be deconditioned.
Mental health care today involves psychologists and social workers in increasing numbers. In 1977, National Institute of Mental Health director Bertram Brown described this shift as a source of "intense competition and role confusion". Graduate programs issuing doctorates in psychology (PhD or PsyD) emerged in the 1950s and underwent rapid increase through the 1980s. This degree is intended to train practitioners who might conduct scientific research.
Some clinical psychologists may focus on the clinical management of patients with brain injury—this area is known as clinical neuropsychology. In many countries, clinical psychology is a regulated mental health profession. The emerging field of "disaster psychology" (see crisis intervention) involves professionals who respond to large-scale traumatic events.
The work performed by clinical psychologists tends to be influenced by various therapeutic approaches, all of which involve a formal relationship between professional and client (usually an individual, couple, family, or small group). Typically, these approaches encourage new ways of thinking, feeling, or behaving. Four major theoretical perspectives are psychodynamic, cognitive behavioral, existential–humanistic, and systems or family therapy. There has been a growing movement to integrate the various therapeutic approaches, especially with an increased understanding of issues regarding culture, gender, spirituality, and sexual orientation. With the advent of more robust research findings regarding psychotherapy, there is evidence that most of the major therapies have equal effectiveness, with the key common element being a strong therapeutic alliance. Because of this, more training programs and psychologists are now adopting an eclectic therapeutic orientation.
Diagnosis in clinical psychology usually follows the "Diagnostic and Statistical Manual of Mental Disorders" (DSM), a handbook first published by the American Psychiatric Association in 1952. New editions over time have increased in size and focused more on medical language. The study of mental illnesses is called abnormal psychology.
Educational psychology is the study of how humans learn in educational settings, the effectiveness of educational interventions, the psychology of teaching, and the social psychology of schools as organizations. The work of child psychologists such as Lev Vygotsky, Jean Piaget, and Jerome Bruner has been influential in creating teaching methods and educational practices. Educational psychology is often included in teacher education programs in places such as North America, Australia, and New Zealand.
School psychology combines principles from educational psychology and clinical psychology to understand and treat students with learning disabilities; to foster the intellectual growth of gifted students; to facilitate prosocial behaviors in adolescents; and otherwise to promote safe, supportive, and effective learning environments. School psychologists are trained in educational and behavioral assessment, intervention, prevention, and consultation, and many have extensive training in research.
Industrialists soon brought the nascent field of psychology to bear on the study of scientific management techniques for improving workplace efficiency. This field was at first called "economic psychology" or "business psychology"; later, "industrial psychology", "employment psychology", or "psychotechnology". An important early study examined workers at Western Electric's Hawthorne plant in Cicero, Illinois from 1924 to 1932. With funding from the Laura Spelman Rockefeller Fund and guidance from Australian psychologist Elton Mayo, Western Electric experimented on thousands of factory workers to assess their responses to illumination, breaks, food, and wages. The researchers came to focus on workers' responses to observation itself, and the term Hawthorne effect is now used to describe the fact that people work harder when they believe they are being watched.
The name industrial and organizational psychology (I–O) arose in the 1960s and became enshrined as the Society for Industrial and Organizational Psychology, Division 14 of the American Psychological Association, in 1973. The goal is to optimize human potential in the workplace. Personnel psychology, a subfield of I–O psychology, applies the methods and principles of psychology in selecting and evaluating workers. I–O psychology's other subfield, organizational psychology, examines the effects of work environments and management styles on worker motivation, job satisfaction, and productivity. The majority of I–O psychologists work outside of academia, for private and public organizations and as consultants. A psychology consultant working in business today might expect to provide executives with information and ideas about their industry, their target markets, and the organization of their company.
One role for psychologists in the military is to evaluate and counsel soldiers and other personnel. In the U.S., this function began during World War I, when Robert Yerkes established the School of Military Psychology at Fort Oglethorpe in Georgia to provide psychological training for military staff. Today, U.S. Army psychology includes psychological screening, clinical psychotherapy, suicide prevention, and treatment for post-traumatic stress, as well as other aspects of health and workplace psychology such as smoking cessation.
Psychologists may also work on a diverse set of campaigns known broadly as psychological warfare. Psychological warfare chiefly involves the use of propaganda to influence enemy soldiers and civilians. In the case of so-called black propaganda the propaganda is designed to seem like it originates from a different source. The CIA's MKULTRA program involved more individualized efforts at mind control, involving techniques such as hypnosis, torture, and covert involuntary administration of LSD. The U.S. military used the name Psychological Operations (PSYOP) until 2010, when these were reclassified as Military Information Support Operations (MISO), part of Information Operations (IO). Psychologists are sometimes involved in assisting the interrogation and torture of suspects, though this has sometimes been denied by those involved and sometimes opposed by others.
Medical facilities increasingly employ psychologists to perform various roles. A prominent aspect of health psychology is the psychoeducation of patients: instructing them in how to follow a medical regimen. Health psychologists can also educate doctors and conduct research on patient compliance.
Psychologists in the field of public health use a wide variety of interventions to influence human behavior. These range from public relations campaigns and outreach to governmental laws and policies. Psychologists study the composite influence of all these different tools in an effort to influence whole populations of people.
Black American psychologists Kenneth and Mamie Clark studied the psychological impact of segregation and testified with their findings in the desegregation case "Brown v. Board of Education" (1954).
Positive psychology is the study of factors which contribute to human happiness and well-being, focusing more on people who are currently healthy. In 2010, "Clinical Psychological Review" published a special issue devoted to positive psychological interventions, such as gratitude journaling and the physical expression of gratitude. Positive psychological interventions have been limited in scope, but their effects are thought to be superior to those of placebos, especially with regard to helping people with body image problems.
Quantitative psychological research lends itself to the statistical testing of hypotheses. Although the field makes abundant use of randomized and controlled experiments in laboratory settings, such research can only assess a limited range of short-term phenomena. Thus, psychologists also rely on creative statistical methods to glean knowledge from clinical trials and population data. These include the Pearson product–moment correlation coefficient, the analysis of variance, multiple linear regression, logistic regression, structural equation modeling, and hierarchical linear modeling. The measurement and operationalization of important constructs are an essential part of these research designs.
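As a minimal illustration of the first technique in this list, the Pearson product–moment correlation coefficient can be computed directly from its definition; the data here are hypothetical:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical data: hours of study and test scores for five students
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 70, 72]
print(f"r = {pearson_r(hours, scores):.3f}")
```

Values of "r" near +1 or −1 indicate a strong linear relationship between the two measures; values near 0 indicate little or none.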
A true experiment with random allocation of subjects to conditions allows researchers to make strong inferences about causal relationships. In an experiment, the researcher alters parameters of influence, called independent variables, and measures resulting changes of interest, called dependent variables. Prototypical experimental research is conducted in a laboratory with a carefully controlled environment.
Repeated-measures experiments are those which take place through intervention on multiple occasions. In research on the effectiveness of psychotherapy, experimenters often compare a given treatment with placebo treatments, or compare different treatments against each other. Treatment type is the independent variable. The dependent variables are outcomes, ideally assessed in several ways by different professionals. Using a crossover design, researchers can further strengthen their results by administering both treatments to both groups of subjects, in different orders.
Quasi-experimental design refers especially to situations precluding random assignment to different conditions. Researchers can use common sense to consider how much the nonrandom assignment threatens the study's validity. For example, in research on the best way to affect reading achievement in the first three grades of school, school administrators may not permit educational psychologists to randomly assign children to phonics and whole language classrooms, in which case the psychologists must work with preexisting classroom assignments. Psychologists will compare the achievement of children attending phonics and whole language classes.
Experimental researchers typically use a statistical hypothesis testing model which involves making predictions before conducting the experiment, then assessing how well the data support the predictions. (These predictions may originate from a more abstract scientific hypothesis about how the phenomenon under study actually works.) Analysis of variance (ANOVA) statistical techniques are used to distinguish unique results of the experiment from the null hypothesis that variations result from random fluctuations in data. In psychology, the widely used convention ascribes statistical significance to results that would have less than a 5% probability of arising from random variation under the null hypothesis.
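To make the ANOVA logic concrete: the F statistic compares variation between group means to variation within groups, and a large F is improbable under the null hypothesis that all groups share one mean. A minimal sketch with made-up scores for three treatment groups (in practice the F value would be referred to an F distribution to obtain a p-value):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of samples."""
    all_values = [v for g in groups for v in g]
    n, k = len(all_values), len(groups)
    grand_mean = sum(all_values) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    # F is the ratio of the corresponding mean squares
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical outcome scores under three treatment conditions
groups = [[1, 2, 3], [2, 3, 4], [5, 6, 7]]
print(f"F = {one_way_anova_f(groups):.1f}")
```

Here the third group's mean sits well away from the other two relative to the scatter inside each group, so the F ratio is large.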
Statistical surveys are used in psychology for measuring attitudes and traits, monitoring changes in mood, checking the validity of experimental manipulations, and for other psychological topics. Most commonly, psychologists use paper-and-pencil surveys. However, surveys are also conducted over the phone or through e-mail. Web-based surveys are increasingly used to conveniently reach many subjects.
Neuropsychological tests, such as the Wechsler scales and Wisconsin Card Sorting Test, are mostly questionnaires or simple tasks used to assess a specific type of mental function in the respondent. These can be used in experiments, as in the case of lesion experiments evaluating the results of damage to a specific part of the brain.
Observational studies analyze uncontrolled data in search of correlations; multivariate statistics are typically used to interpret the more complex situation. Cross-sectional observational studies use data from a single point in time, whereas longitudinal studies are used to study trends across the life span. Longitudinal studies track the same people, and therefore detect more individual, rather than cultural, differences. However, they suffer from lack of controls and from confounding factors such as "selective attrition" (the bias introduced when a certain type of subject disproportionately leaves a study).
Exploratory data analysis refers to a variety of practices which researchers can use to visualize and analyze existing sets of data. In Peirce's three modes of inference, exploratory data analysis corresponds to abduction, or hypothesis formation. Meta-analysis is the technique of integrating the results from multiple studies and interpreting the statistical properties of the pooled dataset.
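A common quantitative core of meta-analysis is inverse-variance pooling: each study's effect size is weighted by the inverse of its variance, so that more precise studies count for more. A fixed-effect sketch (the study effects and variances below are hypothetical):

```python
def pooled_effect(effects, variances):
    """Fixed-effect meta-analysis: inverse-variance weighted mean effect size.
    Returns the pooled estimate and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# Three hypothetical studies of the same treatment effect
effects = [0.4, 0.2, 0.6]       # standardized effect sizes
variances = [0.04, 0.01, 0.09]  # squared standard errors
estimate, variance = pooled_effect(effects, variances)
print(f"pooled effect = {estimate:.3f} (SE = {variance ** 0.5:.3f})")
```

Note how the most precise study (variance 0.01) dominates the weighted average, pulling the pooled estimate toward its effect of 0.2; the pooled standard error is also smaller than any single study's.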
A classic and popular tool used to relate mental and neural activity is the electroencephalogram (EEG), a technique that uses electrodes on a person's scalp to measure amplified voltage changes in different parts of the brain. Hans Berger, the first researcher to use EEG on an unopened skull, quickly found that brains exhibit signature "brain waves": electric oscillations which correspond to different states of consciousness. Researchers subsequently refined statistical methods for synthesizing the electrode data, and identified unique brain wave patterns such as the delta wave observed during non-REM sleep.
Newer functional neuroimaging techniques include functional magnetic resonance imaging and positron emission tomography, both of which track the flow of blood through the brain. These technologies provide more localized information about activity in the brain and create representations of the brain with widespread appeal. They also provide insight which avoids the classic problems of subjective self-reporting. It remains challenging to draw hard conclusions about where in the brain specific thoughts originate—or even how usefully such localization corresponds with reality. However, neuroimaging has delivered unmistakable results showing the existence of correlations between mind and brain. Some of these draw on a systemic neural network model rather than a localized function model.
Psychiatric interventions such as transcranial magnetic stimulation and drugs also provide information about brain–mind interactions. Psychopharmacology is the study of drug-induced mental effects.
Computational modeling is a tool used in mathematical psychology and cognitive psychology to simulate behavior. This method has several advantages. Since modern computers process information quickly, simulations can be run in a short time, allowing for high statistical power. Modeling also allows psychologists to visualize hypotheses about the functional organization of mental events that cannot be directly observed in a human. Computational neuroscience uses mathematical models to simulate the brain. Another method is symbolic modeling, which represents many mental objects using variables and rules. Other types of modeling include dynamic systems and stochastic modeling.
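As a sketch of what such a simulation might look like, the following implements a simple evidence-accumulation ("drift-diffusion") model of two-choice decisions, a standard tool in mathematical psychology; all parameter values here are hypothetical:

```python
import math
import random

def drift_diffusion_trial(rng, drift=0.1, threshold=1.0, noise=0.3, dt=0.01):
    """Simulate one two-choice decision: noisy evidence accumulates until it
    crosses +threshold (choice 1) or -threshold (choice 0).
    Returns (choice, decision_time_in_seconds)."""
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:
        evidence += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if evidence > 0 else 0), t

rng = random.Random(0)
trials = [drift_diffusion_trial(rng) for _ in range(500)]
accuracy = sum(choice for choice, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
print(f"accuracy = {accuracy:.2f}, mean decision time = {mean_rt:.2f}s")
```

Running many simulated trials yields predicted choice proportions and response-time distributions that can then be compared against data from human participants.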
Animal experiments aid in investigating many aspects of human psychology, including perception, emotion, learning, memory, and thought, to name a few. In the 1890s, Russian physiologist Ivan Pavlov famously used dogs to demonstrate classical conditioning. Non-human primates, cats, dogs, pigeons, rats, and other rodents are often used in psychological experiments. Ideally, controlled experiments introduce only one independent variable at a time, in order to ascertain its unique effects upon dependent variables. These conditions are approximated best in laboratory settings. In contrast, human environments and genetic backgrounds vary so widely, and depend upon so many factors, that it is difficult to control important variables for human subjects. There are pitfalls in generalizing findings from animal studies to humans through animal models.
Comparative psychology refers to the scientific study of the behavior and mental processes of non-human animals, especially as these relate to the phylogenetic history, adaptive significance, and development of behavior. Research in this area explores the behavior of many species, from insects to primates. It is closely related to other disciplines that study animal behavior such as ethology. Research in comparative psychology sometimes appears to shed light on human behavior, but some attempts to connect the two have been quite controversial, for example the Sociobiology of E.O. Wilson. Animal models are often used to study neural processes related to human behavior, e.g. in cognitive neuroscience.
Research designed to answer questions about the current state of affairs such as the thoughts, feelings, and behaviors of individuals is known as "descriptive research". Descriptive research can be qualitative or quantitative in orientation. "Qualitative research" is descriptive research that is focused on observing and describing events as they occur, with the goal of capturing all of the richness of everyday behavior and with the hope of discovering and understanding phenomena that might have been missed if only more cursory examinations had been made.
Qualitative psychological research methods include interviews, first-hand observation, and participant observation. Creswell (2003) identifies five main possibilities for qualitative research, including narrative, phenomenology, ethnography, case study, and grounded theory. Qualitative researchers sometimes aim to enrich interpretations or critiques of symbols, subjective experiences, or social structures. Sometimes hermeneutic and critical aims can give rise to quantitative research, as in Erich Fromm's study of Nazi voting or Stanley Milgram's studies of obedience to authority.
Just as Jane Goodall studied chimpanzee social and family life by careful observation of chimpanzee behavior in the field, psychologists conduct naturalistic observation of ongoing human social, professional, and family life. Sometimes the participants are aware they are being observed, and other times the participants do not know they are being observed. Strict ethical guidelines must be followed when covert observation is being carried out.
Program evaluation is a systematic method for collecting, analyzing, and using information to answer questions about projects, policies, and programs, particularly about their effectiveness and efficiency. In both the public and private sectors, stakeholders often want to know whether the programs they are funding, implementing, voting for, receiving, or objecting to are producing the intended effect. While program evaluation centers on this question, important considerations often include how much the program costs per participant, how the program could be improved, whether the program is worthwhile, whether there are better alternatives, whether there are unintended outcomes, and whether the program goals are appropriate and useful.
The field of metascience has revealed significant problems with the methodology of psychological research. Psychological research suffers from high bias, low reproducibility, and widespread misuse of statistics. These findings have led to calls for reform from within and from outside the scientific community.
In 1959, statistician Theodore Sterling examined the results of psychological studies and discovered that 97% of them supported their initial hypotheses, implying a possible publication bias. Similarly, Fanelli (2010) found that 91.5% of psychiatry/psychology studies confirmed the effects they were looking for, and concluded that the odds of this happening (a positive result) were around five times higher than in fields such as the space sciences or geosciences. Fanelli argues that this is because researchers in "softer" sciences have fewer constraints on their conscious and unconscious biases.
Over the subsequent few years, a replication crisis in psychology was identified: many notable findings in the field had not been replicated, and some researchers were accused of outright fraud. More systematic efforts to assess the extent of the problem, such as the Reproducibility Project of the Center for Open Science, found that as many as two-thirds of highly publicized findings in psychology had failed to replicate. Reproducibility was generally stronger in studies and journals representing cognitive psychology than in social psychology, and the subfields of differential psychology (including research on general intelligence and the Big Five personality traits), behavioral genetics (except for candidate gene and candidate gene-by-environment interaction research on behavior and mental illness), and the related field of behavioral economics were largely unaffected by the replication crisis. Other implicated subfields include clinical psychology and developmental psychology (particularly cognitive and personality development), along with the closely related field of educational research.
Focus on the replication crisis has led to renewed efforts in the discipline to re-test important findings. In response to concerns about publication bias and "p"-hacking, more than 140 psychology journals have adopted result-blind peer review, in which studies are accepted not on the basis of their findings after completion, but before they are conducted, on the basis of the methodological rigor of their experimental designs and the theoretical justification for their statistical analysis techniques. In addition, large-scale collaborations among researchers working in multiple labs in different countries, which regularly make their data openly available for other researchers to assess, have become much more common in the field. Early analysis of such reforms has estimated that 61 percent of result-blind studies have produced null results, in contrast to an estimated 5 to 20 percent in earlier research.
Some critics view statistical hypothesis testing as misplaced. Psychologist and statistician Jacob Cohen wrote in 1994 that psychologists routinely confuse statistical significance with practical importance, enthusiastically reporting great certainty in unimportant facts. Some psychologists have responded with an increased use of effect size statistics, rather than sole reliance on p-values.
In 2008, Arnett pointed out that most articles in American Psychological Association journals were about US populations, although U.S. citizens are only 5% of the world's population. He complained that psychologists had no basis for assuming psychological processes to be universal and generalizing research findings to the rest of the global population. In 2010, Henrich, Heine, and Norenzayan reported a systemic bias in conducting psychology studies with participants from "WEIRD" (western, educated, industrialized, rich and democratic) societies. Although only about one in eight people worldwide lives in regions that fall into the WEIRD classification, the researchers claimed that 60–90% of psychology studies are performed on participants from these areas. The article gave examples of results that differ significantly between people from WEIRD and tribal cultures, including the Müller-Lyer illusion. Arnett (2008), Altmaier and Hall (2008), and Morgan-Consoli et al. (2018) saw the Western bias in research and theory as a serious problem considering psychologists are increasingly applying psychological principles developed in WEIRD regions in their research, clinical work, and consultation with populations around the world. In 2018, Rad, Martingano & Ginges showed that nearly a decade after Henrich et al.'s paper, over 80% of the samples used in studies published in the journal "Psychological Science" were from the WEIRD population. Moreover, their analysis showed that several studies did not fully disclose the origin of their samples, and the authors offered a set of recommendations to editors and reviewers to reduce the WEIRD bias.
Some observers perceive a gap between scientific theory and its application—in particular, the application of unsupported or unsound clinical practices. Critics say there has been an increase in the number of mental health training programs that do not instill scientific competence. Practices such as "facilitated communication for infantile autism"; memory-recovery techniques including body work; and other therapies, such as rebirthing and reparenting, may be dubious or even dangerous, despite their popularity. In 1984, Allen Neuringer made a similar point regarding the experimental analysis of behavior. Psychologists, sometimes divided along the lines of laboratory vs. clinic, continue to debate these issues.
Ethical standards in the discipline have changed over time. Some famous past studies are today considered unethical and in violation of established codes (such as the Canadian Code of Conduct for Research Involving Humans and the Belmont Report).
The most important contemporary standards are informed and voluntary consent. After World War II, the Nuremberg Code was established because of Nazi abuses of experimental subjects. Later, most countries (and scientific journals) adopted the Declaration of Helsinki. In the U.S., the National Institutes of Health established the Institutional Review Board in 1966, and in 1974 adopted the National Research Act (HR 7724). All of these measures encouraged researchers to obtain informed consent from human participants in experimental studies. A number of influential studies led to the establishment of this rule; such studies included the MIT and Fernald School radioisotope studies, the Thalidomide tragedy, the Willowbrook hepatitis study, and Stanley Milgram's studies of obedience to authority.
University psychology departments have ethics committees dedicated to the rights and well-being of research subjects. Researchers in psychology must gain approval of their research projects before conducting any experiment to protect the interests of human participants and laboratory animals.
The ethics code of the American Psychological Association originated in 1951 as "Ethical Standards of Psychologists". This code has guided the formation of licensing laws in most American states. It has changed multiple times over the decades since its adoption. In 1989, the APA revised its policies on advertising and referral fees to negotiate the end of an investigation by the Federal Trade Commission. The 1992 incarnation was the first to distinguish between "aspirational" ethical standards and "enforceable" ones. Members of the public have a five-year window to file ethics complaints about APA members with the APA ethics committee; members of the APA have a three-year window.
Some of the ethical issues considered most important are the requirement to practice only within the area of competence, to maintain confidentiality with the patients, and to avoid sexual relations with them. Another important principle is informed consent, the idea that a patient or research subject must understand and freely choose a procedure they are undergoing. Some of the most common complaints against clinical psychologists include sexual misconduct, and involvement in child custody evaluations.
Current ethical guidelines state that using non-human animals for scientific purposes is only acceptable when the harm (physical or psychological) done to animals is outweighed by the benefits of the research. Keeping this in mind, psychologists can use certain research techniques on animals that could not be used on humans.
PhpWiki
PhpWiki is a web-based wiki software application.
It began as a clone of WikiWikiWeb and was the first wiki written in PHP.
PhpWiki has been used to edit and format paper books for publication.
The first version, by Steve Wainstead, was released in December 1999. It was the first Wiki written in PHP to be publicly released. This version required PHP 3.x and only supported DBM files.
It was a feature-for-feature reimplementation of the original WikiWikiWeb at c2.com.
In early 2000 Arno Hollosi added a second database library to allow running PhpWiki on MySQL.
From then on more features were added and contributions to the software increased, adding features such as a templating system, color diffs, rewrites of the rendering engine and much more. Arno was interested in running a wiki for the game Go.
Jeff Dairiki was the next major contributor, and soon headed the project for the next few years, followed by Reini Urban up to 1.4, and then Marc-Etienne Vargenau since 1.5.
Version 1.4.0 added support for Wikicreole 1.0, including its additions, and for MediaWiki markup syntax. In version 1.5.0, PHP 4 support was deprecated.
Poetry
Poetry (derived from the Greek "poiesis", "making") is a form of literature that uses aesthetic and often rhythmic qualities of language—such as phonaesthetics, sound symbolism, and metre—to evoke meanings in addition to, or in place of, the prosaic ostensible meaning.
Poetry has a long history – dating back to prehistoric times with hunting poetry in Africa, and to panegyric and elegiac court poetry of the empires of the Nile, Niger, and Volta River valleys. Some of the earliest written poetry in Africa occurs among the Pyramid Texts written during the 25th century BCE. The earliest surviving Western Asian epic poetry, the "Epic of Gilgamesh", was written in Sumerian.
Early poems in the Eurasian continent evolved from folk songs such as the Chinese "Shijing"; or from a need to retell oral epics, as with the Sanskrit "Vedas", the Zoroastrian "Gathas", and the Homeric epics, the "Iliad" and the "Odyssey". Ancient Greek attempts to define poetry, such as Aristotle's "Poetics", focused on the uses of speech in rhetoric, drama, song, and comedy. Later attempts concentrated on features such as repetition, verse form, and rhyme, and emphasized the aesthetics which distinguish poetry from more objectively-informative prosaic writing.
Poetry uses forms and conventions to suggest differential interpretations of words, or to evoke emotive responses. Devices such as assonance, alliteration, onomatopoeia, and rhythm may convey musical or incantatory effects. The use of ambiguity, symbolism, irony, and other stylistic elements of poetic diction often leaves a poem open to multiple interpretations. Similarly, figures of speech such as metaphor, simile, and metonymy establish a resonance between otherwise disparate images—a layering of meanings, forming connections previously not perceived. Kindred forms of resonance may exist, between individual verses, in their patterns of rhyme or rhythm.
Some poetry types are specific to particular cultures and genres and respond to characteristics of the language in which the poet writes. Readers accustomed to identifying poetry with Dante, Goethe, Mickiewicz, or Rumi may think of it as written in lines based on rhyme and regular meter. There are, however, traditions, such as Biblical poetry, that use other means to create rhythm and euphony. Much modern poetry reflects a critique of poetic tradition, testing the principle of euphony itself or altogether forgoing rhyme or set rhythm.
In an increasingly globalized world, poets often adapt forms, styles, and techniques from diverse cultures and languages.
A Western cultural tradition (which extends at least from Homer to Rilke) associates the production of poetry with inspiration – often by a Muse (either classical or contemporary).
Some scholars believe that the art of poetry may predate literacy.
Others, however, suggest that poetry did not necessarily predate writing.
The oldest surviving epic poem, the "Epic of Gilgamesh", dates from the 3rd millennium BCE in Sumer (in Mesopotamia, now Iraq), and was written in cuneiform script on clay tablets and, later, on papyrus. Tablet #2461, dating to 2000 BCE, describes an annual rite in which the king symbolically married and mated with the goddess Inanna to ensure fertility and prosperity; some have labelled it the world's oldest love poem. An example of Egyptian epic poetry is "The Story of Sinuhe" (c. 1800 BCE).
Other ancient epic poetry includes the Greek epics, the "Iliad" and the "Odyssey"; the Avestan books, the "Gathic Avesta" and the "Yasna"; the Roman national epic, Virgil's "Aeneid" (written between 29 and 19 BCE); and the Indian epics, the "Ramayana" and the "Mahabharata". Epic poetry, including the "Odyssey", the "Gathas", and the Indian "Vedas", appears to have been composed in poetic form as an aid to memorization and oral transmission in prehistoric and ancient societies.
Other forms of poetry developed directly from folk songs. The earliest entries in the oldest extant collection of Chinese poetry, the "Shijing", were initially lyrics.
The efforts of ancient thinkers to determine what makes poetry distinctive as a form, and what distinguishes good poetry from bad, resulted in "poetics"—the study of the aesthetics of poetry. Some ancient societies, such as China's through her "Shijing" ("Classic of Poetry"), developed canons of poetic works that had ritual as well as aesthetic importance. More recently, thinkers have struggled to find a definition that could encompass formal differences as great as those between Chaucer's "Canterbury Tales" and Matsuo Bashō's "Oku no Hosomichi", as well as differences in content spanning Tanakh religious poetry, love poetry, and rap.
Classical thinkers in the West employed classification as a way to define and assess the quality of poetry. Notably, the existing fragments of Aristotle's "Poetics" describe three genres of poetry—the epic, the comic, and the tragic—and develop rules to distinguish the highest-quality poetry in each genre, based on the perceived underlying purposes of the genre. Later aestheticians identified three major genres: epic poetry, lyric poetry, and dramatic poetry, treating comedy and tragedy as subgenres of dramatic poetry.
Aristotle's work was influential throughout the Middle East during the Islamic Golden Age, as well as in Europe during the Renaissance. Later poets and aestheticians often distinguished poetry from, and defined it in opposition to prose, which they generally understood as writing with a proclivity to logical explication and a linear narrative structure.
This does not imply that poetry is illogical or lacks narration, but rather that poetry is an attempt to render the beautiful or sublime without the burden of engaging the logical or narrative thought-process. English Romantic poet John Keats termed this escape from logic "Negative capability". This "romantic" approach views form as a key element of successful poetry because form is abstract and distinct from the underlying notional logic. This approach remained influential into the 20th century.
During this period, there was also substantially more interaction among the various poetic traditions, in part due to the spread of European colonialism and the attendant rise in global trade. In addition to a boom in translation, during the Romantic period numerous ancient works were rediscovered.
Some 20th-century literary theorists rely less on the ostensible opposition of prose and poetry, instead focusing on the poet as simply one who creates using language, and poetry as what the poet creates. The underlying concept of the poet as creator is not uncommon, and some modernist poets essentially do not distinguish between the creation of a poem with words, and creative acts in other media. Yet other modernists challenge the very attempt to define poetry as misguided.
The rejection of traditional forms and structures for poetry that began in the first half of the 20th century coincided with a questioning of the purpose and meaning of traditional definitions of poetry and of distinctions between poetry and prose, particularly given examples of poetic prose and prosaic poetry. Numerous modernist poets have written in non-traditional forms or in what traditionally would have been considered prose, although their writing was generally infused with poetic diction and often with rhythm and tone established by non-metrical means. While there was a substantial formalist reaction within the modernist schools to the breakdown of structure, this reaction focused as much on the development of new formal structures and syntheses as on the revival of older forms and structures.
Recently, postmodernism has come to regard more completely prose and poetry as distinct entities, and also different genres of poetry as having meaning only as cultural artifacts. Postmodernism goes beyond modernism's emphasis on the creative role of the poet, to emphasize the role of the reader of a text (hermeneutics), and to highlight the complex cultural web within which a poem is read. Today, throughout the world, poetry often incorporates poetic form and diction from other cultures and from the past, further confounding attempts at definition and classification that once made sense within a tradition such as the Western canon.
The early 21st-century poetic tradition appears to continue to strongly orient itself to earlier precursor poetic traditions such as those initiated by Whitman, Emerson, and Wordsworth. The literary critic Geoffrey Hartman (1929–2016) used the phrase "the anxiety of demand" to describe the contemporary response to older poetic traditions as "being fearful that the fact no longer has a form", building on a trope introduced by Emerson. Emerson had maintained that in the debate concerning poetic structure, where either "form" or "fact" could predominate, one need simply "Ask the fact for the form." This has been challenged at various levels by other literary scholars, such as Harold Bloom (1930–2019), who stated: "The generation of poets who stand together now, mature and ready to write the major American verse of the twenty-first century, may yet be seen as what Stevens called 'a great shadow's last embellishment,' the shadow being Emerson's."
Prosody is the study of the meter, rhythm, and intonation of a poem. Rhythm and meter are different, although closely related. Meter is the definitive pattern established for a verse (such as iambic pentameter), while rhythm is the actual sound that results from a line of poetry. Prosody also may be used more specifically to refer to the scanning of poetic lines to show meter.
The methods for creating poetic rhythm vary across languages and between poetic traditions. Languages are often described as having timing set primarily by accents, syllables, or moras, depending on how rhythm is established, though a language can be influenced by multiple approaches. Japanese is a mora-timed language. Latin, Catalan, French, Leonese, Galician and Spanish are called syllable-timed languages. Stress-timed languages include English, Russian and, generally, German. Varying intonation also affects how rhythm is perceived. Languages can rely on either pitch or tone. Some languages with a pitch accent are Vedic Sanskrit or Ancient Greek. Tonal languages include Chinese, Vietnamese and most Subsaharan languages.
Metrical rhythm generally involves precise arrangements of stresses or syllables into repeated patterns called feet within a line. In Modern English verse the pattern of stresses primarily differentiate feet, so rhythm based on meter in Modern English is most often founded on the pattern of stressed and unstressed syllables (alone or elided). In the classical languages, on the other hand, while the metrical units are similar, vowel length rather than stresses define the meter. Old English poetry used a metrical pattern involving varied numbers of syllables but a fixed number of strong stresses in each line.
The chief device of ancient Hebrew Biblical poetry, including many of the psalms, was "parallelism", a rhetorical structure in which successive lines reflected each other in grammatical structure, sound structure, notional content, or all three. Parallelism lent itself to antiphonal or call-and-response performance, which could also be reinforced by intonation. Thus, Biblical poetry relies much less on metrical feet to create rhythm, but instead creates rhythm based on much larger sound units of lines, phrases and sentences. Some classical poetry forms, such as Venpa of the Tamil language, had rigid grammars (to the point that they could be expressed as a context-free grammar) which ensured a rhythm.
Classical Chinese poetics, based on the tone system of Middle Chinese, recognized two kinds of tones: the level (平 "píng") tone and the oblique (仄 "zè") tones, a category consisting of the rising (上 "shǎng") tone, the departing (去 "qù") tone and the entering (入 "rù") tone. Certain forms of poetry placed constraints on which syllables were required to be level and which oblique.
The formal patterns of meter used in Modern English verse to create rhythm no longer dominate contemporary English poetry. In the case of free verse, rhythm is often organized based on looser units of cadence rather than a regular meter. Robinson Jeffers, Marianne Moore, and William Carlos Williams are three notable poets who reject the idea that regular accentual meter is critical to English poetry. Jeffers experimented with sprung rhythm as an alternative to accentual rhythm.
In the Western poetic tradition, meters are customarily grouped according to a characteristic metrical foot and the number of feet per line. The number of metrical feet in a line is described using Greek terminology: tetrameter for four feet and hexameter for six feet, for example. Thus, "iambic pentameter" is a meter comprising five feet per line, in which the predominant kind of foot is the "iamb". This metric system originated in ancient Greek poetry, and was used by poets such as Pindar and Sappho, and by the great tragedians of Athens. Similarly, "dactylic hexameter" comprises six feet per line, of which the dominant kind of foot is the "dactyl". Dactylic hexameter was the traditional meter of Greek epic poetry, the earliest extant examples of which are the works of Homer and Hesiod. Iambic pentameter and dactylic hexameter were later used by a number of poets, including William Shakespeare and Henry Wadsworth Longfellow, respectively. The most common metrical feet in English are:
There is a wide range of names for other types of feet, right up to the choriamb, a four-syllable metrical foot with a stressed syllable followed by two unstressed syllables and closing with a stressed syllable. The choriamb is derived from some ancient Greek and Latin poetry. Languages which utilize vowel length or intonation rather than or in addition to syllabic accents in determining meter, such as Ottoman Turkish or Vedic, often have concepts similar to the iamb and dactyl to describe common combinations of long and short sounds.
Each of these types of feet has a certain "feel," whether alone or in combination with other feet. The iamb, for example, is the most natural form of rhythm in the English language, and generally produces a subtle but stable verse. Scanning meter can often show the basic or fundamental pattern underlying a verse, but does not show the varying degrees of stress, as well as the differing pitches and lengths of syllables.
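The naming convention described above, a characteristic foot plus a Greek count word, can be sketched as a small lookup. This is a deliberate simplification: it assumes a line is one foot type repeated exactly, which real verse (with inversions, caesuras, and feminine endings) rarely is.

```python
# Name a meter from a marked scansion string: "u" = unstressed, "/" = stressed.
FEET = {"u/": "iambic", "/u": "trochaic", "uu/": "anapestic", "/uu": "dactylic"}
COUNTS = {3: "trimeter", 4: "tetrameter", 5: "pentameter", 6: "hexameter"}

def name_meter(pattern):
    """Return a meter name if the pattern is one foot repeated; else 'irregular'."""
    for foot, adjective in FEET.items():
        n, rem = divmod(len(pattern), len(foot))
        if rem == 0 and pattern == foot * n and n in COUNTS:
            return f"{adjective} {COUNTS[n]}"
    return "irregular"

# "Shall I compare thee to a summer's day?" scans (idealized) as five iambs:
print(name_meter("u/u/u/u/u/"))  # iambic pentameter
print(name_meter("/uu" * 6))     # dactylic hexameter
```

Scanning real lines would first require assigning stresses to words, which needs pronunciation data; this sketch only names an already-scanned pattern.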
There is debate over how useful a multiplicity of different "feet" is in describing meter. For example, Robert Pinsky has argued that while dactyls are important in classical verse, English dactylic verse uses dactyls very irregularly and can be better described based on patterns of iambs and anapests, feet which he considers natural to the language. Actual rhythm is significantly more complex than the basic scanned meter described above, and many scholars have sought to develop systems that would scan such complexity. Vladimir Nabokov noted that overlaid on top of the regular pattern of stressed and unstressed syllables in a line of verse was a separate pattern of accents resulting from the natural pitch of the spoken words, and suggested that the term "scud" be used to distinguish an unaccented stress from an accented stress.
Different traditions and genres of poetry tend to use different meters, ranging from the Shakespearean iambic pentameter and the Homeric dactylic hexameter to the anapestic tetrameter used in many nursery rhymes. However, a number of variations to the established meter are common, both to provide emphasis or attention to a given foot or line and to avoid boring repetition. For example, the stress in a foot may be inverted, a caesura (or pause) may be added (sometimes in place of a foot or stress), or the final foot in a line may be given a feminine ending to soften it or be replaced by a spondee to emphasize it and create a hard stop. Some patterns (such as iambic pentameter) tend to be fairly regular, while other patterns, such as dactylic hexameter, tend to be highly irregular. Regularity can vary between language. In addition, different patterns often develop distinctively in different languages, so that, for example, iambic tetrameter in Russian will generally reflect a regularity in the use of accents to reinforce the meter, which does not occur, or occurs to a much lesser extent, in English.
Some common metrical patterns, with notable examples of poets and poems who use them, include:
Rhyme, alliteration, assonance and consonance are ways of creating repetitive patterns of sound. They may be used as an independent structural element in a poem, to reinforce rhythmic patterns, or as an ornamental element. They can also carry a meaning separate from the repetitive sound patterns created. For example, Chaucer used heavy alliteration to mock Old English verse and to paint a character as archaic.
Rhyme consists of identical ("hard-rhyme") or similar ("soft-rhyme") sounds placed at the ends of lines or at predictable locations within lines ("internal rhyme"). Languages vary in the richness of their rhyming structures; Italian, for example, has a rich rhyming structure permitting maintenance of a limited set of rhymes throughout a lengthy poem. The richness results from word endings that follow regular forms. English, with its irregular word endings adopted from other languages, is less rich in rhyme. The degree of richness of a language's rhyming structures plays a substantial role in determining what poetic forms are commonly used in that language.
Alliteration is the repetition of letters or letter-sounds at the beginning of two or more words immediately succeeding each other, or at short intervals; or the recurrence of the same letter in accented parts of words. Alliteration and assonance played a key role in structuring early Germanic, Norse and Old English forms of poetry. The alliterative patterns of early Germanic poetry interweave meter and alliteration as a key part of their structure, so that the metrical pattern determines when the listener expects instances of alliteration to occur. This can be compared to an ornamental use of alliteration in most Modern European poetry, where alliterative patterns are not formal or carried through full stanzas. Alliteration is particularly useful in languages with less rich rhyming structures.
Assonance, the use of similar vowel sounds within a word rather than similar sounds at the beginning or end of a word, was widely used in skaldic poetry but goes back to the Homeric epic. Because verbs carry much of the pitch in the English language, assonance can loosely evoke the tonal elements of Chinese poetry and so is useful in translating Chinese poetry. Consonance occurs where a consonant sound is repeated throughout a sentence without putting the sound only at the front of a word. Consonance provokes a more subtle effect than alliteration and so is less useful as a structural element.
In many languages, including modern European languages and Arabic, poets use rhyme in set patterns as a structural element for specific poetic forms, such as ballads, sonnets and rhyming couplets. However, the use of structural rhyme is not universal even within the European tradition. Much modern poetry avoids traditional rhyme schemes. Classical Greek and Latin poetry did not use rhyme. Rhyme entered European poetry in the High Middle Ages, in part under the influence of the Arabic language in Al Andalus (modern Spain). Arabic language poets used rhyme extensively from the first development of literary Arabic in the sixth century, as in their long, rhyming qasidas. Some rhyming schemes have become associated with a specific language, culture or period, while other rhyming schemes have achieved use across languages, cultures or time periods. Some forms of poetry carry a consistent and well-defined rhyming scheme, such as the chant royal or the rubaiyat, while other poetic forms have variable rhyme schemes.
Most rhyme schemes are described using letters that correspond to sets of rhymes, so if the first, second and fourth lines of a quatrain rhyme with each other and the third line does not, the quatrain is said to have an "a-a-b-a" rhyme scheme. This rhyme scheme is the one used, for example, in the rubaiyat form. Similarly, an "a-b-b-a" quatrain (what is known as "enclosed rhyme") is used in such forms as the Petrarchan sonnet. Some types of more complicated rhyming schemes have developed names of their own, separate from the "a-b-c" convention, such as the ottava rima and terza rima. The types and use of differing rhyming schemes are discussed further in the main article.
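The letter-labeling convention can be sketched in code. The rhyme test below is a deliberately naive spelling heuristic (true rhyme detection needs pronunciation data), so the example words are chosen to suit it:

```python
def rhyme_key(word, tail=3):
    """Naive rhyme class: the last few letters of the word, a rough
    stand-in for phonetic rhyme."""
    return word.lower().strip(".,;:!?")[-tail:]

def rhyme_scheme(ending_words):
    """Label each line-ending word with a letter so that rhyming lines
    share a letter, in order of first appearance."""
    labels, seen = [], {}
    for word in ending_words:
        key = rhyme_key(word)
        if key not in seen:
            seen[key] = chr(ord("a") + len(seen))
        labels.append(seen[key])
    return "-".join(labels)

# A rubaiyat-style quatrain: lines 1, 2 and 4 rhyme, line 3 does not.
print(rhyme_scheme(["deep", "keep", "bough", "sleep"]))  # a-a-b-a
```

A production version would compare phoneme sequences from a pronouncing dictionary rather than trailing letters, since English spelling is an unreliable guide to sound.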
Poetic form is more flexible in modernist and post-modernist poetry and continues to be less structured than in previous literary eras. Many modern poets eschew recognizable structures or forms and write in free verse. But poetry remains distinguished from prose by its form; some regard for basic formal structures of poetry will be found in even the best free verse, however much such structures may appear to have been ignored. Similarly, in the best poetry written in classic styles there will be departures from strict form for emphasis or effect.
Among major structural elements used in poetry are the line, the stanza or verse paragraph, and larger combinations of stanzas or lines such as cantos. Also sometimes used are broader visual presentations of words and calligraphy. These basic units of poetic form are often combined into larger structures, called "poetic forms" or poetic modes (see the following section), as in the sonnet.
Poetry is often separated into lines on a page, in a process known as lineation. These lines may be based on the number of metrical feet or may emphasize a rhyming pattern at the ends of lines. Lines may serve other functions, particularly where the poem is not written in a formal metrical pattern. Lines can separate, compare or contrast thoughts expressed in different units, or can highlight a change in tone. See the article on line breaks for information about the division between lines.
Lines of poems are often organized into stanzas, which are denominated by the number of lines included. Thus a collection of two lines is a couplet (or distich), three lines a triplet (or tercet), four lines a quatrain, and so on. These lines may or may not relate to each other by rhyme or rhythm. For example, a couplet may be two lines with identical meters which rhyme or two lines held together by a common meter alone.
Other poems may be organized into verse paragraphs, in which regular rhymes with established rhythms are not used, but the poetic tone is instead established by a collection of rhythms, alliterations, and rhymes established in paragraph form. Many medieval poems were written in verse paragraphs, even where regular rhymes and rhythms were used.
In many forms of poetry, stanzas are interlocking, so that the rhyming scheme or other structural elements of one stanza determine those of succeeding stanzas. Examples of such interlocking stanzas include, for example, the ghazal and the villanelle, where a refrain (or, in the case of the villanelle, refrains) is established in the first stanza which then repeats in subsequent stanzas. Related to the use of interlocking stanzas is their use to separate thematic parts of a poem. For example, the strophe, antistrophe and epode of the ode form are often separated into one or more stanzas.
In some cases, particularly lengthier formal poetry such as some forms of epic poetry, stanzas themselves are constructed according to strict rules and then combined. In skaldic poetry, the dróttkvætt stanza had eight lines, each having three "lifts" produced with alliteration or assonance. In addition to two or three alliterations, the odd-numbered lines had partial rhyme of consonants with dissimilar vowels, not necessarily at the beginning of the word; the even lines contained internal rhyme in set syllables (not necessarily at the end of the word). Each half-line had exactly six syllables, and each line ended in a trochee. The arrangement of dróttkvætts followed far less rigid rules than the construction of the individual dróttkvætts.
Even before the advent of printing, the visual appearance of poetry often added meaning or depth. Acrostic poems conveyed meanings in the initial letters of lines or in letters at other specific places in a poem. In Arabic, Hebrew and Chinese poetry, the visual presentation of finely calligraphed poems has played an important part in the overall effect of many poems.
With the advent of printing, poets gained greater control over the mass-produced visual presentations of their work. Visual elements have become an important part of the poet's toolbox, and many poets have sought to use visual presentation for a wide range of purposes. Some Modernist poets have made the placement of individual lines or groups of lines on the page an integral part of the poem's composition. At times, this complements the poem's rhythm through visual caesuras of various lengths, or creates juxtapositions so as to accentuate meaning, ambiguity or irony, or simply to create an aesthetically pleasing form. In its most extreme form, this can lead to concrete poetry or asemic writing.
Poetic diction treats the manner in which language is used, and refers not only to the sound but also to the underlying meaning and its interaction with sound and form. Many languages and poetic forms have very specific poetic dictions, to the point where distinct grammars and dialects are used specifically for poetry. Registers in poetry can range from strict employment of ordinary speech patterns, as favoured in much late-20th-century prosody, through to highly ornate uses of language, as in medieval and Renaissance poetry.
Poetic diction can include rhetorical devices such as simile and metaphor, as well as tones of voice, such as irony. Aristotle wrote in the "Poetics" that "the greatest thing by far is to be a master of metaphor." Since the rise of Modernism, some poets have opted for a poetic diction that de-emphasizes rhetorical devices, attempting instead the direct presentation of things and experiences and the exploration of tone. On the other hand, Surrealists have pushed rhetorical devices to their limits, making frequent use of catachresis.
Allegorical stories are central to the poetic diction of many cultures, and were prominent in the West during classical times, the late Middle Ages and the Renaissance. "Aesop's Fables", repeatedly rendered in both verse and prose since first being recorded about 500 BCE, are perhaps the richest single source of allegorical poetry through the ages. Other notable examples include the "Roman de la Rose", a 13th-century French poem, William Langland's "Piers Plowman" in the 14th century, and Jean de la Fontaine's "Fables" (influenced by Aesop's) in the 17th century. Rather than being fully allegorical, however, a poem may contain symbols or allusions that deepen the meaning or effect of its words without constructing a full allegory.
Another element of poetic diction can be the use of vivid imagery for effect. The juxtaposition of unexpected or impossible images is, for example, a particularly strong element in surrealist poetry and haiku. Vivid images are often endowed with symbolism or metaphor. Many poetic dictions use repetitive phrases for effect, either a short phrase (such as Homer's "rosy-fingered dawn" or "the wine-dark sea") or a longer refrain. Such repetition can add a somber tone to a poem, or can be laced with irony as the context of the words changes.
Specific poetic forms have been developed by many cultures. In more developed, closed or "received" poetic forms, the rhyming scheme, meter and other elements of a poem are based on sets of rules, ranging from the relatively loose rules that govern the construction of an elegy to the highly formalized structure of the ghazal or villanelle. Described below are some common forms of poetry widely used across a number of languages. Additional forms of poetry may be found in the discussions of the poetry of particular cultures or periods and in the glossary.
Among the most common forms of poetry, popular from the Late Middle Ages on, is the sonnet, which by the 13th century had become standardized as fourteen lines following a set rhyme scheme and logical structure. By the 14th century and the Italian Renaissance, the form had further crystallized under the pen of Petrarch, whose sonnets were translated in the 16th century by Sir Thomas Wyatt, who is credited with introducing the sonnet form into English literature. A traditional Italian or Petrarchan sonnet follows the rhyme scheme "ABBA, ABBA, CDECDE", though some variation is common, especially within the final six lines (or "sestet"), where "CDCDCD" is perhaps the most frequent alternative. The English (or Shakespearean) sonnet follows the rhyme scheme "ABAB, CDCD, EFEF, GG", introducing a third quatrain (grouping of four lines), a final couplet, and a greater amount of variety with regard to rhyme than is usually found in its Italian predecessors. By convention, sonnets in English typically use iambic pentameter, while in the Romance languages, the hendecasyllable and Alexandrine are the most widely used meters.
Sonnets of all types often make use of a "volta", or "turn," a point in the poem at which an idea is turned on its head, a question is answered (or introduced), or the subject matter is further complicated. This "volta" can often take the form of a "but" statement contradicting or complicating the content of the earlier lines. In the Petrarchan sonnet, the turn tends to fall around the division between the first two quatrains and the sestet, while English sonnets usually place it at or near the beginning of the closing couplet.
Sonnets are particularly associated with high poetic diction, vivid imagery, and romantic love, largely due to the influence of Petrarch as well as of early English practitioners such as Edmund Spenser (who gave his name to the Spenserian sonnet), Michael Drayton, and Shakespeare, whose sonnets are among the most famous in English poetry, with twenty being included in the "Oxford Book of English Verse". However, the twists and turns associated with the "volta" allow for a logical flexibility applicable to many subjects. Poets from the earliest centuries of the sonnet to the present have utilized the form to address topics related to politics (John Milton, Percy Bysshe Shelley, Claude McKay), theology (John Donne, Gerard Manley Hopkins), war (Wilfred Owen, e.e. cummings), and gender and sexuality (Carol Ann Duffy). Further, postmodern authors such as Ted Berrigan and John Berryman have challenged the traditional definitions of the sonnet form, rendering entire sequences of "sonnets" that often lack rhyme, a clear logical progression, or even a consistent count of fourteen lines.
"Shi" () Is the main type of Classical Chinese poetry. Within this form of poetry the most important variations are "folk song" styled verse ("yuefu"), "old style" verse ("gushi"), "modern style" verse ("jintishi"). In all cases, rhyming is obligatory. The Yuefu is a folk ballad or a poem written in the folk ballad style, and the number of lines and the length of the lines could be irregular. For the other variations of "shi" poetry, generally either a four line (quatrain, or "jueju") or else an eight-line poem is normal; either way with the even numbered lines rhyming. The line length is scanned by an according number of characters (according to the convention that one character equals one syllable), and are predominantly either five or seven characters long, with a caesura before the final three syllables. The lines are generally end-stopped, considered as a series of couplets, and exhibit verbal parallelism as a key poetic device. The "old style" verse ("Gushi") is less formally strict than the "jintishi", or regulated verse, which, despite the name "new style" verse actually had its theoretical basis laid as far back as Shen Yue (441–513 CE), although not considered to have reached its full development until the time of Chen Zi'ang (661–702 CE). A good example of a poet known for his "Gushi" poems is Li Bai (701–762 CE). Among its other rules, the jintishi rules regulate the tonal variations within a poem, including the use of set patterns of the four tones of Middle Chinese. The basic form of jintishi (sushi) has eight lines in four couplets, with parallelism between the lines in the second and third couplets. The couplets with parallel lines contain contrasting content but an identical grammatical relationship between words. Jintishi often have a rich poetic diction, full of allusion, and can have a wide range of subject, including history and politics. One of the masters of the form was Du Fu (712–770 CE), who wrote during the Tang Dynasty (8th century).
The villanelle is a nineteen-line poem made up of five tercets with a closing quatrain; the poem is characterized by having two refrains, initially used in the first and third lines of the first stanza, and then alternately used at the close of each subsequent stanza until the final quatrain, which is concluded by the two refrains. The remaining lines of the poem have an a-b alternating rhyme. The villanelle has been used regularly in the English language since the late 19th century by such poets as Dylan Thomas, W. H. Auden, and Elizabeth Bishop.
A limerick is a poem that consists of five lines and is often humorous. Rhythm is important in limericks: the first, second and fifth lines must have seven to ten syllables and rhyme with one another, while the third and fourth lines need only five to seven syllables and rhyme with each other.
Tanka is a form of unrhymed Japanese poetry, with five sections totalling 31 "on" (phonological units identical to morae), structured in a 5-7-5-7-7 pattern. There is generally a shift in tone and subject matter between the upper 5-7-5 phrase and the lower 7-7 phrase. Tanka were written as early as the Asuka period by such poets as Kakinomoto no Hitomaro ("fl." late 7th century), at a time when Japan was emerging from a period where much of its poetry followed Chinese form. Tanka was originally the shorter form of Japanese formal poetry (which was generally referred to as "waka"), and was used more heavily to explore personal rather than public themes. By the tenth century, tanka had become the dominant form of Japanese poetry, to the point where the originally general term "waka" ("Japanese poetry") came to be used exclusively for tanka. Tanka are still widely written today.
Haiku is a popular form of unrhymed Japanese poetry, which evolved in the 17th century from the "hokku", or opening verse of a renku. Generally written in a single vertical line, the haiku contains three sections totalling 17 "on" (morae), structured in a 5-7-5 pattern. Traditionally, haiku contain a kireji, or cutting word, usually placed at the end of one of the poem's three sections, and a kigo, or season-word. The most famous exponent of the haiku was Matsuo Bashō (1644–1694). An example of his writing:
The "khlong" (, ) is among the oldest Thai poetic forms. This is reflected in its requirements on the tone markings of certain syllables, which must be marked with "mai ek" (, , ) or "mai tho" (, , ). This was likely derived from when the Thai language had three tones (as opposed to today's five, a split which occurred during the Ayutthaya Kingdom period), two of which corresponded directly to the aforementioned marks. It is usually regarded as an advanced and sophisticated poetic form.
In "khlong", a stanza ("bot", , ) has a number of lines ("bat", , , from Pali and Sanskrit "pāda"), depending on the type. The "bat" are subdivided into two "wak" (, , from Sanskrit "varga"). The first "wak" has five syllables, the second has a variable number, also depending on the type, and may be optional. The type of "khlong" is named by the number of "bat" in a stanza; it may also be divided into two main types: "khlong suphap" (, ) and "khlong dan" (, ). The two differ in the number of syllables in the second "wak" of the final "bat" and inter-stanza rhyming rules.
The "khlong si suphap" (, ) is the most common form still currently employed. It has four "bat" per stanza ("si" translates as "four"). The first "wak" of each "bat" has five syllables. The second "wak" has two or four syllables in the first and third "bat", two syllables in the second, and four syllables in the fourth. "Mai ek" is required for seven syllables and "Mai tho" is required for four, as shown below. "Dead word" syllables are allowed in place of syllables which require "mai ek", and changing the spelling of words to satisfy the criteria is usually acceptable.
Odes were first developed by poets writing in ancient Greek, such as Pindar, and Latin, such as Horace. Forms of odes appear in many of the cultures that were influenced by the Greeks and Latins. The ode generally has three parts: a strophe, an antistrophe, and an epode. The antistrophes of the ode possess similar metrical structures and, depending on the tradition, similar rhyme structures. In contrast, the epode is written with a different scheme and structure. Odes have a formal poetic diction and generally deal with a serious subject. The strophe and antistrophe look at the subject from different, often conflicting, perspectives, with the epode moving to a higher level to either view or resolve the underlying issues. Odes are often intended to be recited or sung by two choruses (or individuals), with the first reciting the strophe, the second the antistrophe, and both together the epode. Over time, differing forms for odes have developed with considerable variations in form and structure, but generally showing the original influence of the Pindaric or Horatian ode. One non-Western form which resembles the ode is the qasida in Persian poetry.
The ghazal (also ghazel, gazel, gazal, or gozol) is a form of poetry common in Arabic, Bengali, Persian and Urdu. In classic form, the ghazal has from five to fifteen rhyming couplets that share a refrain at the end of the second line. This refrain may be of one or several syllables and is preceded by a rhyme. Each line has an identical meter. The ghazal often reflects on a theme of unattainable love or divinity.
As with other forms with a long history in many languages, many variations have been developed, including forms with a quasi-musical poetic diction in Urdu. Ghazals have a classical affinity with Sufism, and a number of major Sufi religious works are written in ghazal form. The relatively steady meter and the use of the refrain produce an incantatory effect, which complements Sufi mystical themes well. Among the masters of the form is Rumi, a 13th-century Persian poet.
One of the most famous poets in this type of poetry is Hafez, whose poems often include the theme of exposing hypocrisy. His life and poems have been the subject of much analysis, commentary and interpretation, influencing post-fourteenth-century Persian writing more than any other author. The "West-östlicher Diwan" of Johann Wolfgang von Goethe, a collection of lyrical poems, is inspired by the Persian poet Hafez.
In addition to specific forms of poems, poetry is often thought of in terms of different genres and subgenres. A poetic genre is generally a tradition or classification of poetry based on the subject matter, style, or other broader literary characteristics. Some commentators view genres as natural forms of literature. Others view the study of genres as the study of how different works relate and refer to other works.
Narrative poetry is a genre of poetry that tells a story. Broadly it subsumes epic poetry, but the term "narrative poetry" is often reserved for smaller works, generally with more appeal to human interest. Narrative poetry may be the oldest type of poetry. Many scholars of Homer have concluded that his "Iliad" and "Odyssey" were composed of compilations of shorter narrative poems that related individual episodes. Much narrative poetry—such as Scottish and English ballads, and Baltic and Slavic heroic poems—is performance poetry with roots in a preliterate oral tradition. It has been speculated that some features that distinguish poetry from prose, such as meter, alliteration and kennings, once served as memory aids for bards who recited traditional tales.
Notable narrative poets have included Ovid, Dante, Juan Ruiz, William Langland, Chaucer, Fernando de Rojas, Luís de Camões, Shakespeare, Alexander Pope, Robert Burns, Adam Mickiewicz, Alexander Pushkin, Edgar Allan Poe, Alfred Tennyson, and Anne Carson.
Lyric poetry is a genre that, unlike epic and dramatic poetry, does not attempt to tell a story but instead is of a more personal nature. Poems in this genre tend to be shorter, melodic, and contemplative. Rather than depicting characters and actions, it portrays the poet's own feelings, states of mind, and perceptions. Notable poets in this genre include Christine de Pizan, John Donne, Charles Baudelaire, Gerard Manley Hopkins, Antonio Machado, and Edna St. Vincent Millay.
Epic poetry is a genre of poetry, and a major form of narrative literature. This genre is often defined as lengthy poems concerning events of a heroic or important nature to the culture of the time. It recounts, in a continuous narrative, the life and works of a heroic or mythological person or group of persons. Examples of epic poems are Homer's "Iliad" and "Odyssey", Virgil's "Aeneid", the "Nibelungenlied", Luís de Camões' "Os Lusíadas", the "Cantar de Mio Cid", the "Epic of Gilgamesh", the "Mahabharata", Valmiki's "Ramayana", Ferdowsi's "Shahnama", Nizami (or Nezami)'s Khamse (Five Books), and the "Epic of King Gesar". While the composition of epic poetry, and of long poems generally, became less common in the West after the early 20th century, some notable epics have continued to be written. Derek Walcott won the Nobel Prize in Literature largely on the basis of his epic, "Omeros".
Poetry can be a powerful vehicle for satire. The Romans had a strong tradition of satirical poetry, often written for political purposes. A notable example is the Roman poet Juvenal's satires.
The same is true of the English satirical tradition. John Dryden (a Tory), the first Poet Laureate, produced in 1682 "Mac Flecknoe", subtitled "A Satire on the True Blue Protestant Poet, T.S." (a reference to Thomas Shadwell). Another master of 17th-century English satirical poetry was John Wilmot, 2nd Earl of Rochester. Satirical poets outside England include Poland's Ignacy Krasicki, Azerbaijan's Sabir and Portugal's Manuel Maria Barbosa du Bocage.
An elegy is a mournful, melancholy or plaintive poem, especially a lament for the dead or a funeral song. The term "elegy," which originally denoted a type of poetic meter (elegiac meter), commonly describes a poem of mourning. An elegy may also reflect something that seems to the author to be strange or mysterious. The elegy, as a reflection on a death, on a sorrow more generally, or on something mysterious, may be classified as a form of lyric poetry.
Notable practitioners of elegiac poetry have included Propertius, Jorge Manrique, Jan Kochanowski, Chidiock Tichborne, Edmund Spenser, Ben Jonson, John Milton, Thomas Gray, Charlotte Turner Smith, William Cullen Bryant, Percy Bysshe Shelley, Johann Wolfgang von Goethe, Evgeny Baratynsky, Alfred Tennyson, Walt Whitman, Antonio Machado, Juan Ramón Jiménez, Giannina Braschi, William Butler Yeats, Rainer Maria Rilke, and Virginia Woolf.
The fable is an ancient literary genre, often (though not invariably) set in verse. It is a succinct story that features anthropomorphised animals, legendary creatures, plants, inanimate objects, or forces of nature that illustrate a moral lesson (a "moral"). Verse fables have used a variety of meter and rhyme patterns.
Notable verse fabulists have included Aesop, Vishnu Sarma, Phaedrus, Marie de France, Robert Henryson, Biernat of Lublin, Jean de La Fontaine, Ignacy Krasicki, Félix María de Samaniego, Tomás de Iriarte, Ivan Krylov and Ambrose Bierce.
Dramatic poetry is drama written in verse to be spoken or sung, and appears in varying, sometimes related forms in many cultures. Greek tragedy in verse dates to the 6th century B.C., and may have been an influence on the development of Sanskrit drama, just as Indian drama in turn appears to have influenced the development of the "bianwen" verse dramas in China, forerunners of Chinese Opera. East Asian verse dramas also include Japanese Noh. Examples of dramatic poetry in Persian literature include Nizami's two famous dramatic works, "Layla and Majnun" and "Khosrow and Shirin", Ferdowsi's tragedies such as "Rostam and Sohrab", Rumi's "Masnavi", Gorgani's tragedy of "Vis and Ramin", and Vahshi's tragedy of "Farhad".
Speculative poetry, also known as fantastic poetry (of which weird or macabre poetry is a major sub-classification), is a poetic genre which deals thematically with subjects which are "beyond reality", whether via extrapolation as in science fiction or via weird and horrific themes as in horror fiction. Such poetry appears regularly in modern science fiction and horror fiction magazines. Edgar Allan Poe is sometimes seen as the "father of speculative poetry". Poe's most remarkable achievement in the genre was his anticipation, by three-quarters of a century, of the Big Bang theory of the universe's origin, in his then much-derided 1848 essay (which, due to its very speculative nature, he termed a "prose poem"), "".
Prose poetry is a hybrid genre that shows attributes of both prose and poetry. It may be indistinguishable from the micro-story (the "short short story", or "flash fiction"). While some examples of earlier prose strike modern readers as poetic, prose poetry is commonly regarded as having originated in 19th-century France, where its practitioners included Aloysius Bertrand, Charles Baudelaire, Arthur Rimbaud and Stéphane Mallarmé. Since the late 1980s especially, prose poetry has gained increasing popularity, with entire journals, such as "The Prose Poem: An International Journal", "Contemporary Haibun Online", and "Haibun Today" devoted to that genre and its hybrids. Latin American poets of the 20th century who wrote prose poems include Octavio Paz and Giannina Braschi.
Light poetry, or light verse, is poetry that attempts to be humorous. Poems considered "light" are usually brief, and can be on a frivolous or serious subject, and often feature word play, including puns, adventurous rhyme and heavy alliteration. Although a few free verse poets have excelled at light verse outside the formal verse tradition, light verse in English usually obeys at least some formal conventions. Common forms include the limerick, the clerihew, and the double dactyl.
While light poetry is sometimes condemned as doggerel, or thought of as poetry composed casually, humor often makes a serious point in a subtle or subversive way. Many of the most renowned "serious" poets have also excelled at light verse. Notable writers of light poetry include Lewis Carroll, Ogden Nash, X. J. Kennedy, Willard R. Espy, and Wendy Cope.
Slam poetry as a genre originated in 1986 in Chicago, Illinois, when Marc Kelly Smith organized the first slam. Slam performers comment emotively, aloud before an audience, on personal, social, or other matters. Slam focuses on the aesthetics of word play, intonation, and voice inflection. Slam poetry is often competitive, at dedicated "poetry slam" contests.
Probability
Probability is the branch of mathematics concerning numerical descriptions of how likely an event is to occur or how likely it is that a proposition is true. The probability of an event is a number between 0 and 1, where, roughly speaking, 0 indicates impossibility of the event and 1 indicates certainty. The higher the probability of an event, the more likely it is that the event will occur. A simple example is the tossing of a fair (unbiased) coin. Since the coin is fair, the two outcomes ("heads" and "tails") are both equally probable; the probability of "heads" equals the probability of "tails"; and since no other outcomes are possible, the probability of either "heads" or "tails" is 1/2 (which could also be written as 0.5 or 50%).
These concepts have been given an axiomatic mathematical formalization in probability theory, which is used widely in such areas of study as mathematics, statistics, finance, gambling, science (in particular physics), artificial intelligence/machine learning, computer science, game theory, and philosophy to, for example, draw inferences about the expected frequency of events. Probability theory is also used to describe the underlying mechanics and regularities of complex systems.
When dealing with experiments that are random and well-defined in a purely theoretical setting (like tossing a fair coin), probabilities can be numerically described by the number of desired outcomes divided by the total number of all outcomes. For example, tossing a fair coin twice will yield "head-head", "head-tail", "tail-head", and "tail-tail" outcomes. The probability of getting an outcome of "head-head" is 1 out of 4 outcomes, or, in numerical terms, 1/4, 0.25 or 25%. However, when it comes to practical application, there are two major competing categories of probability interpretations, whose adherents possess different views about the fundamental nature of probability:
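The two-toss example above can be checked by brute-force enumeration. The sketch below (illustrative only; the function and variable names are my own) lists every equally likely outcome of two fair coin tosses and computes the classical probability as desired outcomes divided by total outcomes:

```python
from itertools import product
from fractions import Fraction

# All equally likely outcomes of tossing a fair coin twice:
# ("H","H"), ("H","T"), ("T","H"), ("T","T")
outcomes = list(product(["H", "T"], repeat=2))

# Classical probability: number of desired outcomes / total number of outcomes.
p_head_head = Fraction(outcomes.count(("H", "H")), len(outcomes))
print(p_head_head)  # 1/4
```

Using `Fraction` rather than floating point keeps the result exact, matching the 1/4 (0.25, 25%) quoted in the text.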
The word "probability" derives from the Latin "probabilitas", which can also mean "probity", a measure of the authority of a witness in a legal case in Europe, and often correlated with the witness's nobility. In a sense, this differs much from the modern meaning of "probability", which, in contrast, is a measure of the weight of empirical evidence, and is arrived at from inductive reasoning and statistical inference.
The scientific study of probability is a modern development of mathematics. Gambling shows that there has been an interest in quantifying the ideas of probability for millennia, but exact mathematical descriptions arose much later. There are reasons for the slow development of the mathematics of probability: whereas games of chance provided the impetus for the mathematical study of probability, fundamental issues in the field are still obscured by the superstitions of gamblers.
According to Richard Jeffrey, "Before the middle of the seventeenth century, the term 'probable' (Latin "probabilis") meant "approvable", and was applied in that sense, univocally, to opinion and to action. A probable action or opinion was one such as sensible people would undertake or hold, in the circumstances." However, in legal contexts especially, 'probable' could also apply to propositions for which there was good evidence.
The earliest known forms of probability and statistics were developed by Middle Eastern mathematicians studying cryptography between the 8th and 13th centuries. Al-Khalil (717–786) wrote the "Book of Cryptographic Messages" which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels. Al-Kindi (801–873) made the earliest known use of statistical inference in his work on cryptanalysis and frequency analysis. An important contribution of Ibn Adlan (1187–1268) was on sample size for use of frequency analysis.
The sixteenth century Italian polymath Gerolamo Cardano demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes (which implies that the probability of an event is given by the ratio of favourable outcomes to the total number of possible outcomes).
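Cardano's relation between odds and probability can be expressed as a one-line conversion. This is a minimal sketch (the function name is my own, not a standard API) assuming odds stated as favourable : unfavourable:

```python
from fractions import Fraction

def odds_to_probability(favourable: int, unfavourable: int) -> Fraction:
    """Cardano's implication: p = favourable / (favourable + unfavourable)."""
    return Fraction(favourable, favourable + unfavourable)

# Rolling a six on a fair die: odds are 1:5 in favour, so p = 1/6.
print(odds_to_probability(1, 5))  # 1/6
```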
Aside from the elementary work by Cardano, the doctrine of probabilities dates to the correspondence of Pierre de Fermat and Blaise Pascal (1654). Christiaan Huygens (1657) gave the earliest known scientific treatment of the subject. Jakob Bernoulli's "Ars Conjectandi" (posthumous, 1713) and Abraham de Moivre's "Doctrine of Chances" (1718) treated the subject as a branch of mathematics. See Ian Hacking's "The Emergence of Probability" and James Franklin's "The Science of Conjecture" for histories of the early development of the very concept of mathematical probability.
The theory of errors may be traced back to Roger Cotes's "Opera Miscellanea" (posthumous, 1722), but a memoir prepared by Thomas Simpson in 1755 (printed 1756) first applied the theory to the discussion of errors of observation. The reprint (1757) of this memoir lays down the axioms that positive and negative errors are equally probable, and that certain assignable limits define the range of all errors. Simpson also discusses continuous errors and describes a probability curve.
The first two laws of error that were proposed both originated with Pierre-Simon Laplace. The first law was published in 1774 and stated that the frequency of an error could be expressed as an exponential function of the numerical magnitude of the error, disregarding sign. The second law of error was proposed in 1778 by Laplace and stated that the frequency of the error is an exponential function of the square of the error. The second law of error is called the normal distribution or the Gauss law. "It is difficult historically to attribute that law to Gauss, who in spite of his well-known precocity had probably not made this discovery before he was two years old."
Daniel Bernoulli (1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors.
Adrien-Marie Legendre (1805) developed the method of least squares, and introduced it in his "Nouvelles méthodes pour la détermination des orbites des comètes" ("New Methods for Determining the Orbits of Comets"). In ignorance of Legendre's contribution, an Irish-American writer, Robert Adrain, editor of "The Analyst" (1808), first deduced the law of facility of error,
φ(x) = c·e^(−h²x²),
where h is a constant depending on the precision of observation, and c is a scale factor ensuring that the area under the curve equals 1. He gave two proofs, the second being essentially the same as John Herschel's (1850). Gauss gave the first proof that seems to have been known in Europe (the third after Adrain's) in 1809. Further proofs were given by Laplace (1810, 1812), Gauss (1823), James Ivory (1825, 1826), Hagen (1837), Friedrich Bessel (1838), W.F. Donkin (1844, 1856), and Morgan Crofton (1870). Other contributors were Ellis (1844), De Morgan (1864), Glaisher (1872), and Giovanni Schiaparelli (1875). Peters's (1856) formula for "r", the probable error of a single observation, is well known.
In the nineteenth century authors on the general theory included Laplace, Sylvestre Lacroix (1816), Littrow (1833), Adolphe Quetelet (1853), Richard Dedekind (1860), Helmert (1872), Hermann Laurent (1873), Liagre, Didion, and Karl Pearson. Augustus De Morgan and George Boole improved the exposition of the theory.
Andrey Markov introduced the notion of Markov chains (1906), which played an important role in the theory of stochastic processes and its applications. The modern theory of probability, based on measure theory, was developed by Andrey Kolmogorov (1933).
On the geometric side (see integral geometry) contributors to "The Educational Times" were influential (Miller, Crofton, McColl, Wolstenholme, Watson, and Artemas Martin).
Like other theories, the theory of probability is a representation of its concepts in formal terms—that is, in terms that can be considered separately from their meaning. These formal terms are manipulated by the rules of mathematics and logic, and any results are interpreted or translated back into the problem domain.
There have been at least two successful attempts to formalize probability, namely the Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation (see probability space), sets are interpreted as events and probability itself as a measure on a class of sets. In Cox's theorem, probability is taken as a primitive (that is, not further analyzed) and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, the laws of probability are the same, except for technical details.
There are other methods for quantifying uncertainty, such as the Dempster–Shafer theory or possibility theory, but those are essentially different and not compatible with the laws of probability as usually understood.
Probability theory is applied in everyday life in risk assessment and modeling. The insurance industry and markets use actuarial science to determine pricing and make trading decisions. Governments apply probabilistic methods in environmental regulation, entitlement analysis (Reliability theory of aging and longevity), and financial regulation.
A good example of the use of probability theory in equity trading is the effect of the perceived probability of any widespread Middle East conflict on oil prices, which have ripple effects in the economy as a whole. An assessment by a commodity trader that a war is more likely can send that commodity's prices up or down, and signals other traders of that opinion. Accordingly, the probabilities are neither assessed independently nor necessarily very rationally. The theory of behavioral finance emerged to describe the effect of such groupthink on pricing, on policy, and on peace and conflict.
In addition to financial assessment, probability can be used to analyze trends in biology (e.g. disease spread) as well as ecology (e.g. biological Punnett squares). As with finance, risk assessment can be used as a statistical tool to calculate the likelihood of undesirable events occurring and can assist with implementing protocols to avoid encountering such circumstances. Probability is used to design games of chance so that casinos can make a guaranteed profit, yet provide payouts to players that are frequent enough to encourage continued play.
The discovery of rigorous methods to assess and combine probability assessments has changed society.
Another significant application of probability theory in everyday life is reliability. Many consumer products, such as automobiles and consumer electronics, use reliability theory in product design to reduce the probability of failure. Failure probability may influence a manufacturer's decisions on a product's warranty.
The cache language model and other statistical language models that are used in natural language processing are also examples of applications of probability theory.
Consider an experiment that can produce a number of results. The collection of all possible results is called the sample space of the experiment. The power set of the sample space is formed by considering all different collections of possible results. For example, rolling a die can produce six possible results. One collection of possible results gives an odd number on the die. Thus, the subset {1,3,5} is an element of the power set of the sample space of dice rolls. These collections are called "events". In this case, {1,3,5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, the event is said to have occurred.
A probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1,2,3,4,5,6}) is assigned a value of one. To qualify as a probability, the assignment of values must satisfy the requirement that if you look at a collection of mutually exclusive events (events with no common results, e.g., the events {1,6}, {3}, and {2,4} are all mutually exclusive), the probability that at least one of the events will occur is given by the sum of the probabilities of all the individual events.
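The additivity requirement above can be checked by direct enumeration. The following is a minimal sketch (not from the source) that assigns the uniform probability to outcomes of a fair six-sided die and verifies both requirements: the full sample space has probability one, and the probability of a union of mutually exclusive events equals the sum of their probabilities.

```python
from fractions import Fraction

# Illustrative example: a uniform probability assignment on the
# sample space of a fair six-sided die.
sample_space = {1, 2, 3, 4, 5, 6}

def prob(event):
    """Probability of an event (a subset of the sample space)."""
    return Fraction(len(event & sample_space), len(sample_space))

# The event made up of all possible results is assigned a value of one.
assert prob(sample_space) == 1

# Mutually exclusive events: the probability that at least one occurs
# is the sum of the individual probabilities.
a, b, c = {1, 6}, {3}, {2, 4}
assert prob(a | b | c) == prob(a) + prob(b) + prob(c)  # 5/6
```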
The probability of an event "A" is written as P(A), p(A), or Pr(A). This mathematical definition of probability can extend to infinite sample spaces, and even uncountable sample spaces, using the concept of a measure.
The "opposite" or "complement" of an event "A" is the event [not "A"] (that is, the event of "A" not occurring), often denoted as A′ or ¬A; its probability is given by P(not A) = 1 − P(A). As an example, the chance of not rolling a six on a six-sided die is 1 − 1/6 = 5/6. See Complementary event for a more complete treatment.
If two events "A" and "B" occur on a single performance of an experiment, this is called the intersection or joint probability of "A" and "B", denoted as P(A ∩ B).
If two events, "A" and "B" are independent then the joint probability is
P(A and B) = P(A ∩ B) = P(A) P(B);
for example, if two coins are flipped the chance of both being heads is 1/2 × 1/2 = 1/4.
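The coin-flip example can be verified by enumerating the product sample space. This sketch (my own illustration, not from the source) counts outcomes over all four equally likely results of two flips and confirms that the joint probability factors into the product of the marginals.

```python
from fractions import Fraction
from itertools import product

# Enumerate the sample space of two independent fair coin flips.
outcomes = list(product("HT", repeat=2))  # 4 equally likely results

p_both_heads   = Fraction(sum(1 for o in outcomes if o == ("H", "H")), len(outcomes))
p_first_heads  = Fraction(sum(1 for o in outcomes if o[0] == "H"), len(outcomes))
p_second_heads = Fraction(sum(1 for o in outcomes if o[1] == "H"), len(outcomes))

# For independent events, the joint probability is the product.
assert p_both_heads == p_first_heads * p_second_heads == Fraction(1, 4)
```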
If either event "A" or event "B" can occur but never both simultaneously, then they are called mutually exclusive events.
If two events are mutually exclusive then the probability of both occurring is P(A and B) = P(A ∩ B) = 0.
If two events are mutually exclusive then the probability of either occurring is P(A or B) = P(A ∪ B) = P(A) + P(B).
For example, the chance of rolling a 1 or 2 on a six-sided die is P(1 or 2) = P(1) + P(2) = 1/6 + 1/6 = 1/3.
If the events are not mutually exclusive then
P(A or B) = P(A) + P(B) − P(A and B).
For example, when drawing a single card at random from a regular deck of cards, the chance of getting a heart or a face card (J,Q,K) (or one that is both) is 13/52 + 12/52 − 3/52 = 11/26, because of the 52 cards of a deck 13 are hearts, 12 are face cards, and 3 are both: here the possibilities included in the "3 that are both" are included in each of the "13 hearts" and the "12 face cards" but should only be counted once.
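The inclusion–exclusion computation for the card example can be checked by building the deck explicitly. The following sketch (an illustration, not part of the source) enumerates a standard 52-card deck and confirms that the correction term removes the double-counted cards that are both hearts and face cards.

```python
from fractions import Fraction

# Build a standard 52-card deck as (rank, suit) pairs.
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = [(r, s) for r in ranks for s in suits]

hearts = {c for c in deck if c[1] == "hearts"}         # 13 cards
faces  = {c for c in deck if c[0] in ("J", "Q", "K")}  # 12 cards

def prob(event):
    return Fraction(len(event), len(deck))

# Inclusion–exclusion: P(heart or face) = P(heart) + P(face) - P(both)
assert prob(hearts | faces) == prob(hearts) + prob(faces) - prob(hearts & faces)
assert prob(hearts | faces) == Fraction(11, 26)
```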
"Conditional probability" is the probability of some event "A", given the occurrence of some other event "B".
Conditional probability is written P(A | B), and is read "the probability of "A", given "B"". It is defined by
P(A | B) = P(A ∩ B) / P(B).
If P(B) = 0 then P(A | B) is formally undefined by this expression. However, it is possible to define a conditional probability for some zero-probability events using a σ-algebra of such events (such as those arising from a continuous random variable).
For example, in a bag of 2 red balls and 2 blue balls (4 balls in total), the probability of taking a red ball is 1/2; however, when taking a second ball, the probability of it being either a red ball or a blue ball depends on the ball previously taken, such as, if a red ball was taken, the probability of picking a red ball again would be 1/3 since only 1 red and 2 blue balls would have been remaining.
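The bag-of-balls example can be verified by enumerating every ordered pair of draws without replacement. This sketch (an illustration, not from the source) computes the conditional probability as the ratio P(both red) / P(first red), matching the definition above.

```python
from fractions import Fraction
from itertools import permutations

# Two red (R) and two blue (B) balls; draw two without replacement.
bag = ["R1", "R2", "B1", "B2"]
draws = list(permutations(bag, 2))  # 12 equally likely ordered pairs

def prob(event):
    return Fraction(sum(1 for d in draws if event(d)), len(draws))

first_red = lambda d: d[0].startswith("R")
both_red  = lambda d: d[0].startswith("R") and d[1].startswith("R")

# P(second red | first red) = P(both red) / P(first red)
p_cond = prob(both_red) / prob(first_red)
assert prob(first_red) == Fraction(1, 2)
assert p_cond == Fraction(1, 3)  # 1 red among the 3 remaining balls
```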
In probability theory and applications, Bayes' rule relates the odds of event A₁ to event A₂, before (prior to) and after (posterior to) conditioning on another event B. The odds on A₁ to event A₂ is simply the ratio of the probabilities of the two events. When arbitrarily many events A are of interest, not just two, the rule can be rephrased as posterior is proportional to prior times likelihood, P(A | B) ∝ P(A) P(B | A), where the proportionality symbol means that the left hand side is proportional to (i.e., equals a constant times) the right hand side as A varies, for fixed or given B (Lee, 2012; Bertsch McGrayne, 2012). In this form it goes back to Laplace (1774) and to Cournot (1843); see Fienberg (2005). See Inverse probability and Bayes' rule.
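The "posterior proportional to prior times likelihood" form can be shown with a small worked example. The numbers below are my own hypothetical setup, not from the source: two bags of balls, one chosen uniformly at random, and a red ball observed (event B); the posterior over bags is obtained by normalizing the prior-times-likelihood products.

```python
from fractions import Fraction

# Hypothetical setup: bag1 holds 3 red and 1 blue ball; bag2 holds
# 1 red and 3 blue. A bag is chosen uniformly, then a red ball is drawn.
priors      = {"bag1": Fraction(1, 2), "bag2": Fraction(1, 2)}
likelihoods = {"bag1": Fraction(3, 4), "bag2": Fraction(1, 4)}  # P(red | bag)

# Posterior is proportional to prior times likelihood; the constant
# of proportionality normalizes the products to sum to one.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

assert posterior["bag1"] == Fraction(3, 4)
assert posterior["bag2"] == Fraction(1, 4)
```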
In a deterministic universe, based on Newtonian concepts, there would be no probability if all conditions were known (Laplace's demon), (but there are situations in which sensitivity to initial conditions exceeds our ability to measure them, i.e. know them). In the case of a roulette wheel, if the force of the hand and the period of that force are known, the number on which the ball will stop would be a certainty (though as a practical matter, this would likely be true only of a roulette wheel that had not been exactly levelled – as Thomas A. Bass' Newtonian Casino revealed). This also assumes knowledge of inertia and friction of the wheel, weight, smoothness and roundness of the ball, variations in hand speed during the turning and so forth. A probabilistic description can thus be more useful than Newtonian mechanics for analyzing the pattern of outcomes of repeated rolls of a roulette wheel. Physicists face the same situation in kinetic theory of gases, where the system, while deterministic "in principle", is so complex (with the number of molecules typically the order of magnitude of the Avogadro constant ) that only a statistical description of its properties is feasible.
Probability theory is required to describe quantum phenomena. A revolutionary discovery of early 20th century physics was the random character of all physical processes that occur at sub-atomic scales and are governed by the laws of quantum mechanics. The objective wave function evolves deterministically but, according to the Copenhagen interpretation, it deals with probabilities of observing, the outcome being explained by a wave function collapse when an observation is made. However, the loss of determinism for the sake of instrumentalism did not meet with universal approval. Albert Einstein famously remarked in a letter to Max Born: "I am convinced that God does not play dice". Like Einstein, Erwin Schrödinger, who discovered the wave function, believed quantum mechanics is a statistical approximation of an underlying deterministic reality. In some modern interpretations of the statistical mechanics of measurement, quantum decoherence is invoked to account for the appearance of subjectively probabilistic experimental outcomes.
Poland
Poland ( ), officially the Republic of Poland ( ), is a country located in Central Europe. It is divided into 16 administrative subdivisions, covering an area of , and has a largely temperate seasonal climate. With a population of nearly 38.5 million people, Poland is the fifth most populous member state of the European Union. Poland's capital and largest metropolis is Warsaw. Other major cities include Kraków, Łódź, Wrocław, Poznań, Gdańsk, and Szczecin.
Poland is bordered by the Baltic Sea, Lithuania, and Russia's Kaliningrad Oblast to the north, Belarus and Ukraine to the east, Slovakia and the Czech Republic to the south, and Germany to the west.
The history of human activity on Polish soil spans thousands of years. Throughout late antiquity it became extensively diverse, with various cultures and tribes settling on the vast Central European Plain. However, it was the Western Polans who dominated the region and gave Poland its name. The establishment of Polish statehood can be traced to 966, when the pagan ruler of a realm coextensive with the territory of present-day Poland embraced Christianity and converted to Catholicism. The Kingdom of Poland was founded in 1025, and in 1569 it cemented its longstanding political association with Lithuania by signing the Union of Lublin. This union formed the Polish–Lithuanian Commonwealth, one of the largest (over 1,000,000 square kilometres – 400,000 square miles) and most populous nations of 16th and 17th century Europe, with a uniquely liberal political system which adopted Europe's first written national constitution, the Constitution of 3 May 1791.
With the passing of prominence and prosperity, the country was partitioned by neighbouring states at the end of the 18th century, and regained independence in 1918 with the Treaty of Versailles. After a series of territorial conflicts, the new multi-ethnic Poland restored its position as a key player in European politics. In September 1939, World War II began with the invasion of Poland by Nazi Germany, followed by the Soviet Union invading Poland in accordance with the Molotov–Ribbentrop Pact. Approximately six million Polish citizens, including three million of the country's Jews, perished during the course of the war. As member of the Eastern Bloc, the Polish People's Republic proclaimed forthwith was a chief signatory of the Warsaw Treaty amidst global Cold War tensions. In the wake of the 1989 events, notably through the emergence and contributions of the Solidarity movement, the communist government was dissolved and Poland reestablished itself as a presidential democratic republic.
Poland has a developed market economy and is a regional power in Central Europe, with the largest stock exchange in the East-Central European zone. It has the sixth largest economy by GDP (nominal) in the European Union and the tenth largest in all of Europe. It is one of the most dynamic economies in the world, simultaneously achieving a very high rank on the Human Development Index. Poland is a developed country, which maintains a high-income economy along with very high standards of living, quality of life, safety, education, and economic freedom. Alongside a developed educational system, the state also provides free university education, social security, and a universal health care system. The country has 16 UNESCO World Heritage Sites, 15 of which are cultural.
Poland is a member state of the Schengen Area, the United Nations, NATO, the OECD, the Three Seas Initiative, and the Visegrád Group, and has participated as a guest at the G20.
The origin of the name "Poland" derives from the Lechitic tribe of Polans ("Polanie"), who inhabited the Warta river basin of present-day Greater Poland region starting in the mid-6th century. The origin of the name "Polanie" itself derives from the Proto-Slavic word "pole" (field). In some languages, such as Hungarian, Lithuanian, Persian and Turkish, the country's name is derived from the Lendians ("Lędzianie" or "Lachy"), who dwelled on the southeasternmost edge of present-day Lesser Poland, in the Cherven Grods between the 7th and 11th centuries — lands which were part of the territorial domain ruled over by the Polans. Their name derives from the Old Polish word "lęda" (open land or plain).
The early Bronze Age in Poland began around 2400 BC, while the Iron Age commenced in approximately 700 BC. During this time, the Lusatian culture, spanning both the Bronze and Iron Ages, became particularly prominent. The most famous archaeological find from the prehistory and protohistory of Poland is the Biskupin fortified settlement (now reconstructed as an open-air museum), dating from the Lusatian culture of the late Bronze Age, around 748 BC.
Throughout the period of antiquity, many distinct ancient ethnic groups populated the regions of what is now Poland in an era that dates from about 400 BC to 500 AD. These groups are identified as Celtic, Scythian, Germanic, Sarmatian, Slavic and Baltic tribes. Also, recent archeological findings in the Kuyavia region confirmed the presence of the Roman Legions on the territory of Poland. These were most likely expeditionary missions sent out to protect the amber trade. The exact time and routes of the original migration and settlement of Slavic peoples are not documented in written records and can only be reconstructed in fragments. The West Slavic or Lechitic tribes who initially inhabited Poland migrated to these areas in the second half of the 5th century AD. Up until the creation of Mieszko's state and his subsequent conversion to Christianity in 966 AD, the main religion of the numerous tribes that inhabited the geographical area of present-day Poland was paganism. With the Baptism of Poland the Polish rulers accepted Western Christianity and the religious authority of the Roman Church. However, the transition from paganism was not a smooth and instantaneous process for the rest of the population, as evident from the pagan reaction of the 1030s.
Poland began to form into a recognizable unitary and territorial entity around the middle of the 10th century under the Piast dynasty. Poland's first historically documented ruler, Mieszko I, accepted Christianity, as the rightful religion of his realm, under the auspices of the Latin Church with the Baptism of Poland in 966. The bulk of the population converted in the course of the next few centuries. In 1000, Boleslaw the Brave, continuing the policy of his father Mieszko, held a Congress of Gniezno and created the metropolis of Gniezno and the dioceses of Kraków, Kołobrzeg, and Wrocław. However, the pagan unrest led to the transfer of the capital to Kraków in 1038 by Casimir I the Restorer.
In 1109, Prince Bolesław III Wrymouth defeated the King of Germany Henry V at the Battle of Hundsfeld, stopping the German incursion into Poland. The clash between Bolesław III and Henry V was documented by Gallus Anonymus in his 1118 chronicle. In 1138, Poland fragmented into several smaller duchies when Bolesław divided his lands among his sons. In 1226, Konrad I of Masovia, one of the regional Piast dukes, invited the Teutonic Knights to help him fight the Baltic Prussian pagans; a decision that led to centuries of warfare with the Knights. In 1264, the Statute of Kalisz or the General Charter of Jewish Liberties introduced numerous rights for the Jews in Poland, leading to a nearly autonomous "nation within a nation".
In the middle of the 13th century, the Silesian branch of the Piast dynasty (Henry I the Bearded and Henry II the Pious, ruled 1238–1241) nearly succeeded in uniting the Polish lands, but the Mongols invaded the country from the east and defeated the combined Polish forces at the Battle of Legnica where Duke Henry II the Pious died. In 1320, after a number of earlier unsuccessful attempts by regional rulers at uniting the Polish dukedoms, Władysław I consolidated his power, took the throne and became the first king of a reunified Poland. His son, Casimir III (reigned 1333–1370), has a reputation as one of the greatest Polish kings, and gained wide recognition for improving the country's infrastructure. He also extended royal protection to Jews, and encouraged their immigration to Poland. Casimir III realized that the nation needed a class of educated people, especially lawyers, who could codify the country's laws and administer the courts and offices. His efforts to create an institution of higher learning in Poland were finally rewarded when Pope Urban V granted him permission to open the University of Kraków.
The Golden Liberty of the nobles began to develop under Casimir's rule, when in return for their military support, the king made a series of concessions to the nobility, establishing their legal status as superior to that of the townsmen. When Casimir the Great died in 1370, leaving no legitimate male heir, the Piast dynasty came to an end.
During the 13th and 14th centuries, Poland became a destination for German, Flemish and to a lesser extent Walloon, Danish and Scottish migrants. Also, Jews and Armenians began to settle and flourish in Poland during this era (see History of the Jews in Poland and Armenians in Poland).
The Black Death, a plague that ravaged Europe from 1347 to 1351, did not significantly affect Poland, and the country was spared from a major outbreak of the disease. The reason for this was the decision of Casimir the Great to quarantine the nation's borders.
The Jagiellon dynasty spanned the late Middle Ages and early Modern Era of Polish history. Beginning with the Lithuanian Grand Duke Jogaila (Władysław II Jagiełło), the Jagiellon dynasty (1386–1572) formed the Polish–Lithuanian union. The partnership brought vast Lithuanian-controlled Rus' areas into Poland's sphere of influence and proved beneficial for the Poles and Lithuanians, who coexisted and cooperated in one of the largest political entities in Europe for the next four centuries.
In the Baltic Sea region the struggle of Poland and Lithuania with the Teutonic Knights continued and culminated at the Battle of Grunwald in 1410, where a combined Polish-Lithuanian army inflicted a decisive victory against them. In 1466, after the Thirteen Years' War, King Casimir IV Jagiellon gave royal consent to the Peace of Thorn, which created the future Duchy of Prussia under Polish suzerainty. The Jagiellon dynasty at one point also established dynastic control over the kingdoms of Bohemia (1471 onwards) and Hungary. In the south, Poland confronted the Ottoman Empire and the Crimean Tatars (by whom they were attacked on 75 separate occasions between 1474 and 1569), and in the east helped Lithuania fight the Grand Duchy of Moscow. Some historians estimate that Crimean Tatar slave-raiding cost Poland-Lithuania one million of its population between the years of 1494 and 1694.
Poland was developing as a feudal state, with a predominantly agricultural economy and an increasingly powerful landed nobility. The "Nihil novi" act adopted by the Polish Sejm (parliament) in 1505, transferred most of the legislative power from the monarch to the Sejm, an event which marked the beginning of the period known as "Golden Liberty", when the state was ruled by the "free and equal" Polish nobility. Protestant Reformation movements made deep inroads into Polish Christianity, which resulted in the establishment of policies promoting religious tolerance, unique in Europe at that time. This tolerance allowed the country to avoid most of the religious turmoil that spread over Europe during the 16th century.
The European Renaissance evoked in late Jagiellon Poland (under kings Sigismund I the Old and Sigismund II Augustus) a sense of urgency in the need to promote a cultural awakening, and during this period Polish culture and the nation's economy flourished. In 1543, Nicolaus Copernicus, an astronomer from Toruń, published his epochal work "De revolutionibus orbium coelestium" ("On the Revolutions of the Celestial Spheres"), and thereby became the first proponent of a predictive mathematical model confirming the heliocentric theory, which became the accepted basic model for the practice of modern astronomy. Another major figure associated with the era is the classicist poet Jan Kochanowski.
The 1569 Union of Lublin established the Polish–Lithuanian Commonwealth, a more closely unified federal state with an elective monarchy, but which was governed largely by the nobility, through a system of local assemblies with a central parliament. The Warsaw Confederation (1573) guaranteed religious freedom for the Polish nobility "(szlachta)" and townsfolk "(mieszczanie)". However, the peasants "(chłopi)" were still subject to severe limitations imposed on them by the nobility. The establishment of the Commonwealth coincided with a period of stability and prosperity in Poland, with the union thereafter becoming a European power and a major cultural entity, occupying approximately one million square kilometers of Central and Eastern Europe, as well as an agent for the dissemination of Western culture through Polonization into areas of modern-day Lithuania, Latvia, Ukraine, Belarus and western Russia.
In the 16th and 17th centuries, Poland suffered from a number of dynastic crises during the reigns of the Vasa kings Sigismund III and Władysław IV and found itself engaged in major conflicts with Russia, Sweden and the Ottoman Empire, as well as a series of minor Cossack uprisings. In 1610, a Polish army under the command of Hetman Stanisław Żółkiewski seized Moscow after winning the Battle of Klushino. In 1611, the Tsar of Russia paid homage to the King of Poland.
After the signing of Truce of Deulino, Poland had in the years 1618–1621 an area of about .
From the middle of the 17th century, the nobles' democracy, suffering from internal disorder, gradually declined, thereby leaving the once powerful Commonwealth vulnerable to foreign intervention. Starting in 1648, the Cossack Khmelnytsky Uprising engulfed the south and east, eventually leaving Ukraine divided, with the eastern part, lost by the Commonwealth, becoming a dependency of the Tsardom of Russia. This was followed by the 'Deluge', a Swedish invasion of Poland, which marched through the Polish heartlands and ruined the country's population, culture and infrastructure—around four million of Poland's eleven million inhabitants died in famines and epidemics throughout the 17th century. However, under John III Sobieski the Commonwealth's military prowess was re-established, and in 1683 Polish forces played a major role in the Battle of Vienna against the Ottoman Army, commanded by Kara Mustafa, the Grand Vizier of the Ottoman Empire.
Sobieski's reign marked the end of the nation's golden era. Finding itself subjected to almost constant warfare and suffering enormous population losses as well as massive damage to its economy, the Commonwealth fell into decline. The government became ineffective as a result of large-scale internal conflicts (e.g. Lubomirski Rebellion against John II Casimir and rebellious confederations) and corrupted legislative processes. The nobility fell under the control of a handful of "magnats", and this, compounded with two relatively weak kings of the Saxon Wettin dynasty, Augustus II and Augustus III, as well as the rise of Russia and Prussia after the Great Northern War only served to worsen the Commonwealth's plight. Despite this, the Commonwealth–Saxony personal union gave rise to the emergence of the Commonwealth's first reform movement, and laid the foundations for the Polish Enlightenment.
During the later part of the 18th century, the Commonwealth made attempts to implement fundamental internal reforms; with the second half of the century bringing a much improved economy, significant population growth and far-reaching progress in the areas of education, intellectual life, art, and especially toward the end of the period, evolution of the social and political system. The most populous capital city of Warsaw replaced Gdańsk (Danzig) as the leading centre of commerce, and the role of the more prosperous townsmen increased.
The royal election of 1764 resulted in the elevation of Stanisław II August (a Polish aristocrat connected to the Czartoryski family faction of magnates) to the monarchy. However, as a one-time personal admirer of Empress Catherine II of Russia, the new king spent much of his reign torn between his desire to implement reforms necessary to save his nation, and his perceived necessity to remain in a political relationship with his Russian sponsor. This led to the formation of the 1768 Bar Confederation, a "szlachta" rebellion directed against the Polish king and his Russian sponsors, which aimed to preserve Poland's independence and the szlachta's traditional privileges.
Attempts at reform provoked the union's neighbours, and in 1772 the First Partition of the Commonwealth by Prussia, Russia and Austria took place; an act which the "Partition Sejm", under considerable duress, eventually "ratified" as a fait accompli. Disregarding this loss, in 1773 the king established the Commission of National Education, the first government education authority in Europe. Corporal punishment of children was officially prohibited in 1783.
The Great Sejm convened by Stanisław II August in 1788 successfully adopted the 3 May Constitution, the first set of modern supreme national laws in Europe. However, this document, accused by detractors of harbouring revolutionary sympathies, generated strong opposition from the Commonwealth's nobles and conservatives as well as from Catherine II, who, determined to prevent the rebirth of a strong Commonwealth set about planning the final dismemberment of the Polish-Lithuanian state. Russia was aided in achieving its goal when the Targowica Confederation, an organisation of Polish nobles, appealed to the Empress for help. In May 1792, Russian forces crossed the Commonwealth's frontier, thus beginning the Polish-Russian War.
The defensive war fought by the Poles ended prematurely when the King, convinced of the futility of resistance, capitulated and joined the Targowica Confederation. The Confederation then took over the government. Russia and Prussia, fearing the mere existence of a Polish state, arranged for, and in 1793 executed, the Second Partition of the Commonwealth, which left the country deprived of so much territory that it was practically incapable of independent existence. Eventually, in 1795, following the failed Kościuszko Uprising, the Commonwealth was partitioned one last time by all three of its more powerful neighbours, and with this, effectively ceased to exist.
Poles rebelled several times against the partitioners, particularly near the end of the 18th century and the beginning of the 19th century. An unsuccessful attempt at defending Poland's sovereignty took place in 1794 during the Kościuszko Uprising, where a popular and distinguished general Tadeusz Kościuszko, who had several years earlier served under Washington in the American Revolutionary War, led Polish insurrectionists against numerically superior Russian forces. Despite the victory at the Battle of Racławice, his ultimate defeat ended Poland's independent existence for 123 years.
In 1807, Napoleon I of France temporarily recreated a Polish state as the satellite Duchy of Warsaw, after a successful Greater Poland Uprising of 1806 against Prussian rule. But, after the failed Napoleonic Wars, Poland was again split between the victorious powers at the Congress of Vienna of 1815. The eastern part was ruled by the Russian tsar as Congress Poland, which had a liberal constitution. However, over time the Russian monarch reduced Polish freedoms, and Russia annexed the country in all but name. Meanwhile, the Prussian-controlled territory of Poland came under increased Germanization. Thus, in the 19th century, only Austrian-ruled Galicia, and particularly the Free City of Kraków, allowed free Polish culture to flourish.
Throughout the period of the partitions, political and cultural repression of the Polish nation led to the organisation of a number of uprisings against the authorities of the occupying Russian, Prussian and Austrian governments. In 1830, the November Uprising began in Warsaw when, led by Lieutenant Piotr Wysocki, young non-commissioned officers at the Officer Cadet School in Warsaw revolted. They were joined by large segments of Polish society, and together forced Warsaw's Russian garrison to withdraw north of the city.
Over the course of the next seven months, Polish forces successfully defeated the Russian armies of Field Marshal Hans Karl von Diebitsch and a number of other Russian commanders; however, finding themselves in a position unsupported by any other foreign powers, save distant France and the newborn United States, and with Prussia and Austria refusing to allow the import of military supplies through their territories, the Poles accepted that the uprising was doomed to failure. Upon the surrender of Warsaw to General Ivan Paskievich, many Polish troops, feeling they could not go on, withdrew into Prussia and there laid down their arms. After the defeat, the semi-independent Congress Poland lost its constitution, army and legislative assembly, and was integrated more closely with the Russian Empire.
During the Spring of Nations (a series of revolutions which swept across Europe), Poles took up arms in the Greater Poland Uprising of 1848 to resist Prussian rule. Initially, the uprising manifested itself in the form of civil disobedience, but eventually turned into an armed struggle when the Prussian military was sent in to pacify the region. Eventually, after several battles the uprising was suppressed by the Prussians, and the Grand Duchy of Posen was more completely incorporated into Prussia.
In 1863, a new Polish uprising against Russian rule began. The January Uprising started out as a spontaneous protest by young Poles against conscription into the Imperial Russian Army. However, the insurrectionists, despite being joined by high-ranking Polish-Lithuanian officers and numerous politicians, were still severely outnumbered and lacking in foreign support. They were forced to resort to guerrilla warfare tactics and failed to win any major military victories. Afterwards no major uprising was witnessed in the Russian-controlled Congress Poland, and Poles resorted instead to fostering economic and cultural self-improvement. Congress Poland was rapidly industrialised towards the end of the 19th century, and successively transformed into the Empire's wealthiest and most developed subject.
Despite the political unrest experienced during the partitions, Poland did benefit from large-scale industrialisation and modernisation programs, instituted by the occupying powers, which helped it develop into a more economically coherent and viable entity. This was particularly true in Greater Poland, Silesia and Eastern Pomerania controlled by Prussia (later becoming a part of the German Empire); areas which eventually, thanks largely to the Greater Poland Uprising of 1918 and Silesian Uprisings, were reconstituted as a part of the Second Polish Republic, becoming the country's most prosperous regions.
During World War I, all the Allies agreed on the reconstitution of Poland that United States President Woodrow Wilson proclaimed in Point 13 of his Fourteen Points. A total of 2 million Polish troops fought with the armies of the three occupying powers, and 450,000 died. Shortly after the armistice with Germany in November 1918, Poland regained its independence as the Second Polish Republic ("II Rzeczpospolita Polska"). It reaffirmed its independence after a series of military conflicts, the most notable being the Polish–Soviet War (1919–21) when Poland inflicted a crushing defeat on the Red Army at the Battle of Warsaw, an event which is considered to have halted the advance of Communism into Europe and forced Vladimir Lenin to rethink his objective of achieving global socialism. The event is often referred to as the "Miracle at the Vistula".
During this period, Poland successfully managed to fuse the territories of the three former partitioning powers into a cohesive nation state. Railways were restructured to direct traffic towards Warsaw instead of the former imperial capitals, a new network of national roads was gradually built up and a major seaport was opened on the Baltic Coast, so as to allow Polish exports and imports to bypass the politically charged Free City of Danzig.
The inter-war period ushered in a new era of Polish politics. Whilst Polish political activists had faced heavy censorship in the decades leading up to the First World War, the country now found itself trying to establish a new political tradition. For this reason, many exiled Polish activists, such as Ignacy Paderewski (who would later become prime minister), returned home to help; a significant number of them then went on to take key positions in the newly formed political and governmental structures. Tragedy struck in 1922 when Gabriel Narutowicz, inaugural holder of the presidency, was assassinated at the Zachęta Gallery in Warsaw by painter and right-wing nationalist Eligiusz Niewiadomski.
In 1926, a May coup, led by the hero of the Polish independence campaign Marshal Józef Piłsudski, turned rule of the Second Polish Republic over to the nonpartisan Sanacja ("Healing") movement in an effort to prevent radical political organizations on both the left and the right from destabilizing the country. The movement functioned integrally until Piłsudski's death in 1935. Following Marshal Piłsudski's death, Sanacja split into several competing factions. By the late 1930s, due to increased threats posed by political extremism inside the country, the Polish government became increasingly heavy-handed, banning a number of radical organizations, including communist and ultra-nationalist political parties, which threatened the stability of the country.
As a result of the Munich Agreement in 1938, Czechoslovakia ceded to Poland the small 350 sq mi Zaolzie region. The area had long been a point of contention between the Polish and Czechoslovak governments, and the two countries had fought a brief seven-day war over it in 1919.
World War II began with the Nazi German invasion of Poland on 1 September 1939, followed by the Soviet invasion of Poland on 17 September. On 28 September 1939, Warsaw fell. As agreed in the Molotov–Ribbentrop Pact, Poland was split into two zones, one occupied by Nazi Germany, the other by the Soviet Union. In 1939–41, the Soviets deported hundreds of thousands of Poles. The Soviet NKVD executed thousands of Polish prisoners of war (inter alia in the Katyn massacre) ahead of Operation Barbarossa. German planners had, in November 1939, called for "the complete destruction" of all Poles; their fate, like that of many other Slavs, was outlined in the genocidal "Generalplan Ost".
Polish intelligence operatives proved extremely valuable to the Allies, providing much of the intelligence from Europe and beyond, and Polish code breakers were responsible for cracking the Enigma cipher.
Poland made the fourth-largest troop contribution in Europe and its troops served both the Polish Government in Exile in the west and Soviet leadership in the east. Polish troops played an important role in the Normandy, Italian and North African Campaigns and are particularly remembered for the Battle of Monte Cassino. In the east, the Soviet-backed Polish 1st Army distinguished itself in the battles for Warsaw and Berlin.
During the Battle of Britain Polish squadrons such as the No. 303 "Kościuszko" fighter squadron achieved considerable success.
The wartime resistance movement, dominated by the Armia Krajowa ("Home Army"), fought against the German occupation. It was one of the three largest resistance movements of the entire war, encompassed a range of clandestine activities, and functioned as an underground state complete with degree-awarding universities and a court system. The resistance was loyal to the exiled government and generally resented the idea of a communist Poland; for this reason, in the summer of 1944 it initiated Operation Tempest, of which the Warsaw Uprising that began on 1 August 1944 is the best-known operation.
Nazi German forces under orders from Adolf Hitler set up six German extermination camps in occupied Poland, including Treblinka, Majdanek and Auschwitz. The Germans transported millions of Jews from across occupied Europe to be murdered in those camps.
Altogether, 3 million Polish Jews – approximately 90% of Poland's pre-war Jewry – and between 1.8 and 2.8 million ethnic Poles were killed during the German occupation of Poland, including between 50,000 and 100,000 members of the Polish intelligentsia – academics, doctors, lawyers, nobility and priesthood. Around 150,000 Polish civilians were killed by Soviets between 1939 and 1941 during the Soviet Union's occupation of eastern Poland (Kresy), and another estimated 100,000 Poles were murdered by the Ukrainian Insurgent Army (UPA) between 1943 and 1944 in what became known as the Wołyń Massacres. Of all the countries in the war, Poland lost the highest percentage of its citizens: around 6 million perished – more than one-sixth of Poland's pre-war population – half of them Polish Jews. About 90% of deaths were non-military in nature.
In 1945, Poland's borders were shifted westwards. Over two million Polish inhabitants of Kresy were expelled from the areas east of the Curzon Line by Stalin. The western border became the Oder–Neisse line. As a result, Poland's territory was reduced by 20%. The shift forced the migration of millions of other people, most of whom were Poles, Germans, Ukrainians, and Jews.
At the insistence of Joseph Stalin, the Yalta Conference sanctioned the formation of a new provisional pro-Communist coalition government in Moscow, which ignored the Polish government-in-exile based in London. This action angered many Poles who considered it a betrayal by the Allies. In 1944, Stalin had made guarantees to Churchill and Roosevelt that he would maintain Poland's sovereignty and allow democratic elections to take place. However, upon achieving victory in 1945, the elections organized by the occupying Soviet authorities were falsified and were used to provide a veneer of legitimacy for Soviet hegemony over Polish affairs. The Soviet Union instituted a new communist government in Poland, analogous to much of the rest of the Eastern Bloc. As elsewhere in Communist Europe, the Soviet influence over Poland was met with armed resistance from the outset which continued into the 1950s.
Despite widespread objections, the new Polish government accepted the Soviet annexation of the pre-war eastern regions of Poland (in particular the cities of Wilno and Lwów) and agreed to the permanent garrisoning of Red Army units on Poland's territory. Military alignment within the Warsaw Pact throughout the Cold War came about as a direct result of this change in Poland's political culture. On the European stage, it came to characterize the full-fledged integration of Poland into the brotherhood of communist nations.
The new communist government took control with the adoption of the Small Constitution on 19 February 1947. The Polish People's Republic ("Polska Rzeczpospolita Ludowa") was officially proclaimed in 1952. In 1956, after the death of Bolesław Bierut, the régime of Władysław Gomułka became temporarily more liberal, freeing many people from prison and expanding some personal freedoms. Collectivization in the Polish People's Republic failed. A similar situation repeated itself in the 1970s under Edward Gierek, but most of the time persecution of anti-communist opposition groups persisted. Despite this, Poland was at the time considered to be one of the least oppressive states of the Eastern Bloc.
Labour turmoil in 1980 led to the formation of the independent trade union "Solidarity" ("Solidarność"), which over time became a political force. Despite persecution and the imposition of martial law in 1981, it eroded the dominance of the Polish United Workers' Party and by 1989 had triumphed in Poland's first partially free and democratic parliamentary elections since the end of the Second World War. Lech Wałęsa, a Solidarity candidate, eventually won the presidency in 1990. The Solidarity movement heralded the collapse of communist regimes and parties across Europe.
A shock therapy programme, initiated by Leszek Balcerowicz in the early 1990s, enabled the country to transform its socialist-style planned economy into a market economy. As with other post-communist countries, Poland suffered declines in social and economic standards, but it became the first post-communist country to regain its pre-1989 GDP level, which it achieved by 1995 on the strength of its booming economy.
Most visibly, there were numerous improvements in human rights, such as freedom of speech, internet freedom (no censorship), civil liberties (1st class) and political rights (1st class), as ranked by Freedom House non-governmental organization. In 1991, Poland became a member of the Visegrád Group and joined the North Atlantic Treaty Organization (NATO) alliance in 1999 along with the Czech Republic and Hungary. Poles then voted to join the European Union in a referendum in June 2003, with Poland becoming a full member on 1 May 2004.
Poland joined the Schengen Area in 2007, as a result of which, the country's borders with other member states of the European Union have been dismantled, allowing for full freedom of movement within most of the EU. In contrast to this, a section of Poland's eastern border now constitutes the external EU border with Belarus, Russia and Ukraine. That border has become increasingly well protected, and has led in part to the coining of the phrase 'Fortress Europe', in reference to the seeming 'impossibility' of gaining entry to the EU for citizens of the former Soviet Union.
In an effort to strengthen military cooperation with its neighbors, Poland set up the Visegrád Battlegroup with Hungary, the Czech Republic and Slovakia, with a total of 3,000 troops ready for deployment. In eastern Poland, it also formed the LITPOLUKRBRIG brigade with Lithuania and Ukraine. These units will operate outside of NATO and within the European defence initiative framework.
On 10 April 2010, the President of the Republic of Poland, Lech Kaczyński, along with 89 other high-ranking Polish officials died in a plane crash near Smolensk, Russia. The president's party was on their way to attend an annual service of commemoration for the victims of the Katyń massacre when the tragedy took place.
In 2011, the ruling Civic Platform won parliamentary elections. Poland joined the European Space Agency in 2012, as well as organised the UEFA Euro 2012 (along with Ukraine). In 2013, Poland also became a member of the Development Assistance Committee. In 2014, the Prime Minister of Poland, Donald Tusk, was chosen to be President of the European Council, and resigned as prime minister. The 2015 elections were won by the opposition Law and Justice Party (PiS).
Poland's territory extends across several geographical regions, between latitudes 49° and 55° N, and longitudes 14° and 25° E. In the north-west is the Baltic seacoast, which extends from the Bay of Pomerania to the Gulf of Gdańsk. This coast is marked by several spits, coastal lakes (former bays that have been cut off from the sea), and dunes. The largely straight coastline is indented by the Szczecin Lagoon, the Bay of Puck, and the Vistula Lagoon.
The centre and parts of the north of the country lie within the North European Plain. Rising above these lowlands is a geographical region comprising four hilly districts of moraines and moraine-dammed lakes formed during and after the Pleistocene ice age. These lake districts are the Pomeranian Lake District, the Greater Polish Lake District, the Kashubian Lake District, and the Masurian Lake District. The Masurian Lake District is the largest of the four and covers much of north-eastern Poland. The lake districts form part of the Baltic Ridge, a series of moraine belts along the southern shore of the Baltic Sea.
South of the North European Plain are the regions of Lusatia, Silesia and Masovia, which are marked by broad ice-age river valleys. Farther south is a mountainous region, including the Sudetes, the Kraków-Częstochowa Uplands, the Świętokrzyskie Mountains, and the Carpathian Mountains, including the Beskids. The highest part of the Carpathians is the Tatra Mountains, along Poland's southern border.
The geological structure of Poland has been shaped by the continental collision of Europe and Africa over the past 60 million years and, more recently, by the Quaternary glaciations of northern Europe. Both processes shaped the Sudetes and the Carpathian Mountains. The moraine landscape of northern Poland contains soils made up mostly of sand or loam, while the ice age river valleys of the south often contain loess. The Polish Jura, the Pieniny, and the Western Tatras consist of limestone, while the High Tatras, the Beskids, and the Karkonosze are made up mainly of granite and basalt. The Polish Jura Chain has some of the oldest rock formations on the continent of Europe.
Poland has 70 mountains over in elevation, all in the Tatras. The Polish Tatras, which consist of the High Tatras and the Western Tatras, are the highest mountain group of Poland and of the entire Carpathian range. In the High Tatras lies Poland's highest point, the north-western summit of Rysy, in elevation. At its foot lie the mountain lakes of Czarny Staw pod Rysami (Black Lake below Mount Rysy) and Morskie Oko (the Eye of the Sea), both of them tarns.
The second highest mountain group in Poland is the Beskids, whose highest peak is Babia Góra, at . The next highest mountain groups are the Karkonosze in the Sudetes, the highest point of which is Śnieżka at , and the Śnieżnik Mountains, the highest point of which is Śnieżnik at .
Other notable uplands include the Table Mountains, which are noted for their interesting rock formations, the Bieszczady Mountains in the far southeast of the country, in which the highest Polish peak is Tarnica at , the Gorce Mountains in Gorce National Park, whose highest point is Turbacz at , the Pieniny in Pieniny National Park, the highest point of which is Wysokie Skałki (Wysoka) at , and the Świętokrzyskie Mountains in Świętokrzyski National Park, which have two similarly high peaks: Łysica at and Łysa Góra at .
The lowest point in Poland – at below sea level – is at Raczki Elbląskie, near Elbląg in the Vistula Delta.
In the Zagłębie Dąbrowskie (the Coal Fields of Dąbrowa) region in the Silesian Voivodeship in southern Poland is an area of sparsely vegetated sand known as the Błędów Desert. It covers an area of . It is not a natural desert but results from human activity from the Middle Ages onwards.
Wave and wind action of the Baltic Sea in Słowiński National Park created sand dunes which, over time, separated the bay from the sea, creating two lakes. As waves and wind carry sand inland, the dunes slowly move, at a rate of per year. Some dunes reach a height of up to . The highest peak of the park is Rowokol ( above sea level).
The longest rivers are the Vistula (), long; the Oder () which forms part of Poland's western border, long; its tributary, the Warta, long; and the Bug, a tributary of the Vistula, long. The Vistula and the Oder flow into the Baltic Sea, as do numerous smaller rivers in Pomerania.
The Łyna and the Angrapa flow by way of the Pregolya to the Baltic Sea, and the Czarna Hańcza flows into the Baltic Sea through the Neman. While the great majority of Poland's rivers drain into the Baltic Sea, Poland's Beskids are the source of some of the upper tributaries of the Orava, which flows via the Váh and the Danube to the Black Sea. The eastern Beskids are also the source of some streams that drain through the Dniester to the Black Sea.
Poland's rivers have been used since early times for navigation. The Vikings, for example, traveled up the Vistula and the Oder in their longships. In the Middle Ages and in early modern times, when the Polish–Lithuanian Commonwealth was the breadbasket of Europe, the shipment of grain and other agricultural products down the Vistula toward Gdańsk and onward to other parts of Europe took on great importance.
In the valley of the Pilica river in Tomaszów Mazowiecki there is a unique natural karst spring of water containing calcium salts, which is protected in the Niebieskie Źródła Nature Reserve in Sulejów Landscape Park. The reserve's name, "Niebieskie Źródła" ("Blue Springs"), comes from the fact that red wavelengths of light are absorbed by the water while only blue and green are reflected from the bottom of the spring, giving it its atypical colour.
With almost ten thousand closed bodies of water covering more than each, Poland has one of the highest numbers of lakes in the world. In Europe, only Finland has a greater density of lakes. The largest lakes, covering more than , are Lake Śniardwy and Lake Mamry in Masuria, and Lake Łebsko and Lake Drawsko in Pomerania.
In addition to the lake districts in the north (in Masuria, Pomerania, Kashubia, Lubuskie, and Greater Poland), there are also many mountain lakes in the Tatras, of which the Morskie Oko is the largest in area. The lake with the greatest depth—of more than —is Lake Hańcza in the Wigry Lake District, east of Masuria in Podlaskie Voivodeship.
Among the first lakes whose shores were settled are those in the Greater Polish Lake District. The stilt house settlement of Biskupin, occupied by more than one thousand residents, was founded before the 7th century BC by people of the Lusatian culture.
Lakes have always played an important role in Polish history and remain important to modern Polish society. The ancestors of today's Poles, the Polanie, built their first fortresses on islands in these lakes. The legendary Prince Popiel ruled from a tower at Kruszwica on Lake Gopło. The first historically documented ruler of Poland, Duke Mieszko I, had his palace on an island in the Warta River in Poznań. Nowadays the Polish lakes provide a location for the pursuit of water sports such as yachting and wind-surfing.
The Polish Baltic coast is approximately long and extends from Świnoujście on the islands of Usedom and Wolin in the west to Krynica Morska on the Vistula Spit in the east. For the most part, Poland has a smooth coastline, which has been shaped by the continual movement of sand by currents and winds. This continual erosion and deposition has formed cliffs, dunes, and spits, many of which have migrated landwards to close off former lagoons, such as Łebsko Lake in Słowiński National Park.
The largest spits are the Hel Peninsula and the Vistula Spit. The coastline is also varied by the Szczecin and Vistula Lagoons and a few coastal lakes, e.g. Łebsko and Jamno. The largest Polish Baltic island is Wolin, known for its Wolin National Park. The largest sea harbours are Szczecin, Świnoujście, Gdańsk, Gdynia, Police and Kołobrzeg, and the main coastal resorts are Świnoujście, Międzyzdroje, Kołobrzeg, Łeba, Sopot, Władysławowo and the Hel Peninsula.
Forests cover about 30.5% of Poland's land area based on international standards, and the percentage is still increasing. Poland's forests are managed under the national programme of reforestation (KPZL), which aims to increase forest cover to 33% by 2050. The growing stock of Polish forests (per SoEF 2011 statistics), at 2.304 billion cubic metres, is more than twice the European average (with Germany and France at the top). The largest forest complex in Poland is the Lower Silesian Wilderness.
More than 1% of Poland's territory, , is protected within 23 Polish national parks. Three more national parks are projected for Masuria, the Polish Jura, and the eastern Beskids. In addition, wetlands along lakes and rivers in central Poland are legally protected, as are coastal areas in the north. There are over 120 areas designated as landscape parks, along with numerous nature reserves and other protected areas (e.g. Natura 2000).
Since Poland's accession to the European Union in 2004, Polish agriculture has performed extremely well and the country has over two million private farms. It is the leading producer in Europe of potatoes and rye (the world's second largest in 1989), the world's largest producer of triticale, and one of the more important producers of barley, oats, sugar beets, flax, and fruit. Poland is the European Union's fourth largest supplier of pork after Germany, Spain and France.
Phytogeographically, Poland belongs to the Central European province of the Circumboreal Region within the Boreal Kingdom. According to the World Wide Fund for Nature, the territory of Poland belongs to three Palearctic Ecoregions of the continental forest spanning Central and Northern European temperate broadleaf and mixed forest ecoregions as well as the Carpathian montane conifer forest.
Many animals that have since died out in other parts of Europe still survive in Poland, such as the wisent in the ancient woodland of the Białowieża Forest and in Podlaskie. Other such species include the brown bear in Białowieża, in the Tatras, and in the Beskids, the gray wolf and the Eurasian lynx in various forests, the moose in northern Poland, and the beaver in Masuria, Pomerania, and Podlaskie.
In the forests there are game animals, such as red deer, roe deer and wild boar. In eastern Poland there are a number of ancient woodlands, like Białowieża forest, that have never been cleared or disturbed much by people. There are also large forested areas in the mountains, Masuria, Pomerania, Lubusz Land and Lower Silesia.
Poland is the most important breeding ground for a variety of European migratory birds. One quarter of the global population of white storks (40,000 breeding pairs) live in Poland, particularly in the lake districts and the wetlands along the Biebrza, the Narew, and the Warta, which are part of nature reserves or national parks.
Poland has historically been home to the two largest European species of mammals: the wisent ("żubr") and the aurochs ("tur"). Both survived in Poland longer than anywhere else. The last aurochs of Europe died in 1627 in the Jaktorów Forest, while the European bison survived until the 20th century only in the Białowieża Forest; it has since been reintroduced to other countries.
The climate is mostly temperate throughout the country. The climate is oceanic in the north and west and becomes gradually warmer and continental towards the south and east. Summers are generally warm, with average temperatures between depending on the region. Winters are rather cold, with average temperatures around in the northwest and in the northeast. Precipitation falls throughout the year, although, especially in the east, winter is drier than summer.
The warmest region in Poland is Lower Silesia in the southwest of the country, where temperatures in the summer average between but can go as high as on some days in the warmest months of July and August. The warmest cities in Poland are Tarnów in Lesser Poland, and Wrocław in Lower Silesia. The average temperatures in Wrocław are in the summer and in the winter, but Tarnów has the longest summer in all of Poland, which lasts for 115 days, from mid-May to mid-September. The coldest region of Poland is in the northeast in the Podlaskie Voivodeship near the borders with Belarus and Lithuania. Usually the coldest city is Suwałki. The climate is affected by cold fronts which come from Scandinavia and Siberia. The average temperature in the winter in Podlaskie ranges from . The biggest impact of the oceanic climate is observed in Świnoujście and Baltic Sea seashore area from Police to Słupsk.
Poland is a representative democracy, with a president as a head of state, whose current constitution dates from 1997. The government structure centers on the Council of Ministers, led by a prime minister. The president appoints the cabinet according to the proposals of the prime minister, typically from the majority coalition in the Sejm. The president is elected by popular vote every five years. The current president is Andrzej Duda and the prime minister is Mateusz Morawiecki.
Polish voters elect a bicameral parliament consisting of a 460-member lower house (Sejm) and a 100-member Senate (Senat). The Sejm is elected under proportional representation according to the d'Hondt method, a method similar to that used in many parliamentary political systems. The Senat, on the other hand, is elected under the first-past-the-post voting method, with one senator being returned from each of the 100 constituencies.
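The d'Hondt highest-averages method used for the Sejm can be illustrated with a short sketch: each party's vote total is divided by one more than the number of seats it has already won, and each seat in turn goes to the party with the highest quotient. The party names and vote counts below are invented for the example; this is not official election data.

```python
def dhondt(votes, seats):
    """Allocate seats by the d'Hondt highest-averages method.

    votes: dict mapping party name -> vote count
    seats: total number of seats to fill
    Returns a dict mapping party name -> seats won.
    """
    allocation = {party: 0 for party in votes}
    for _ in range(seats):
        # A party's quotient is votes / (seats already won + 1);
        # the next seat goes to the party with the highest quotient.
        # On an exact tie, max() keeps the first-listed party.
        winner = max(votes, key=lambda p: votes[p] / (allocation[p] + 1))
        allocation[winner] += 1
    return allocation

# Hypothetical example: 10 seats among three parties
result = dhondt({"A": 100_000, "B": 80_000, "C": 30_000}, 10)
print(result)  # {'A': 5, 'B': 4, 'C': 1}
```

Relative to other proportional formulas such as Sainte-Laguë, d'Hondt slightly favours larger parties, as the example shows: party A wins half the seats with well under half the votes.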
With the exception of ethnic minority parties, only candidates of political parties receiving at least 5% of the total national vote can enter the Sejm. When sitting in joint session, members of the Sejm and Senat form the National Assembly (the "Zgromadzenie Narodowe"). The National Assembly is formed on three occasions: when a new president takes the oath of office; when an indictment against the President of the Republic is brought to the State Tribunal ("Trybunał Stanu"); and when a president's permanent incapacity to exercise his duties due to the state of his health is declared. To date only the first instance has occurred.
The judicial branch plays an important role in decision-making. Its major institutions include the Supreme Court ("Sąd Najwyższy"); the Supreme Administrative Court ("Naczelny Sąd Administracyjny"); the Constitutional Tribunal ("Trybunał Konstytucyjny"); and the State Tribunal ("Trybunał Stanu"). On the approval of the Senat, the Sejm also appoints the ombudsman or the Commissioner for Civil Rights Protection ("Rzecznik Praw Obywatelskich") for a five-year term. The ombudsman has the duty of guarding the observance and implementation of the rights and liberties of Polish citizens and residents, of the law and of principles of community life and social justice.
The Constitution of Poland is the supreme law in contemporary Poland, and the Polish legal system is based on the principle of civil rights, governed by the code of Civil Law. Historically, the most famous Polish legal act is the Constitution of 3 May 1791. Historian Norman Davies describes it as the first of its kind in Europe. The Constitution was instituted as a Government Act and then adopted on 3 May 1791 by the Sejm of the Polish–Lithuanian Commonwealth. Primarily, it was designed to redress long-standing political defects of the federative Polish–Lithuanian Commonwealth and its Golden Liberty. Previously only the Henrician Articles (1573), signed by each of Poland's elected kings, could perform the function of a set of basic laws.
The new Constitution introduced political equality between townspeople and the nobility ("szlachta"), and placed the peasants under the protection of the government. The Constitution abolished pernicious parliamentary institutions such as the "liberum veto", which at one time had placed the Sejm at the mercy of any deputy who might choose, or be bribed by an interest or foreign power, to rescind all the legislation passed by that Sejm. The 3 May Constitution sought to supplant the existing anarchy, fostered by some of the country's reactionary magnates, with a more egalitarian and democratic constitutional monarchy. The adoption of the Constitution was treated as a threat by Poland's neighbours. In response, Prussia, Austria and Russia formed an anti-Polish alliance and over the next decade collaborated with one another to partition their weaker neighbour, destroying the Polish state. In the words of two of its co-authors, Ignacy Potocki and Hugo Kołłątaj, the constitution represented "the last will and testament of the expiring Fatherland." Despite this, its text influenced many later democratic movements across the globe. In Poland, freedom of expression is guaranteed by Article 25 (section I. The Republic) and Article 54 (section II. The Freedoms, Rights and Obligations of Persons and Citizens) of the Constitution of Poland.
Prior to the last Partition in 1795, tax-paying females were allowed to take part in political life. Since 1918, following the return to independence, all women have had the right to vote. Poland was the 15th (12th sovereign) country to introduce universal women's suffrage. Currently, abortion in Poland is allowed only in special circumstances: when the woman's life or health is endangered by the continuation of the pregnancy, when the pregnancy is the result of a criminal act, or when the fetus is seriously malformed.
Poland's current constitution was adopted by the National Assembly of Poland on 2 April 1997, approved by a national referendum on 25 May 1997, and came into effect on 17 October 1997. It guarantees a multi-party state, the freedoms of religion, speech and assembly, and specifically casts off many Communist ideals to create a 'free market economic system'. It requires public officials to pursue ecologically sound public policy and acknowledges the inviolability of the home, the right to form trade unions, and to strike, whilst at the same time prohibiting the practices of forced medical experimentation, torture and corporal punishment.
Poland is the fifth most populous member state of the European Union and has 52 representatives in the European Parliament as of 2020. Since joining the union in 2004, successive Polish governments have pursued policies to extend the country's role in European and international affairs. Poland is an important hub for international relations and a regional power in Central Europe, with the largest economy of the Three Seas Initiative. The capital of Warsaw serves as the headquarters for Frontex, the European Union's agency for external border security, as well as ODIHR, one of the principal institutions of the Organization for Security and Cooperation in Europe.
Apart from the European Union, Poland has been a member of NATO since 1999, the UN, the World Trade Organization, the Organisation for Economic Co-operation and Development (OECD) since 1996, European Economic Area, International Energy Agency, Council of Europe, Organization for Security and Co-operation in Europe, International Atomic Energy Agency, European Space Agency, G6, Council of the Baltic Sea States, Visegrád Group, Weimar Triangle and Schengen Agreement. Poland also guested as an invitee at the G20 summit due to its economic prominence in the region. In 2014, the consulting company Ernst & Young published a report which defined Poland as an 'optimal member' of the G20.
Post-war communist Poland, then known as the Polish People's Republic, was a key member of the Eastern Bloc and the Warsaw Pact, which was signed in Warsaw on 14 May 1955. The treaty acted as a balance of power or counterweight to NATO, which was then headed by western capitalist states. The People's Republic was more liberal than other communist countries in Europe at the time, and the Polish communist government maintained neutral or warm ties with Western Europe and the United States, and vice versa. As changes since the fall of communism in 1989 have redrawn the map of Europe, Poland has tried to forge strong and mutually beneficial relations with its seven new neighbours; this has notably included signing 'friendship treaties' to replace links severed by the disintegration of the Warsaw Pact. Poland has also established extensive relations with Ukraine, in an effort to firmly anchor that country within the Western world and prevent the Ukrainians from forming an alliance with the Russian Federation.
Over the past two decades, Poland significantly strengthened its ties with the United States, thus becoming one of its closest allies in Europe. Poland was part of the US-led coalition force during the Iraq War in 2003, and sent its troops in the first phase of the conflict, jointly with the United Kingdom and Australia. In recent years, Poland also sought to become a mediator between the Trump administration and the European Union in solving European disputes.
Poland's current voivodeships (provinces) are largely based on the country's historic regions, whereas those of the preceding two decades (until 1998) had been centred on and named for individual cities. The new units range in area from less than for Opole Voivodeship to more than for Masovian Voivodeship. Administrative authority at voivodeship level is shared between a government-appointed voivode (governor), an elected regional assembly ("sejmik") and a voivodeship marshal, an executive elected by that assembly.
The voivodeships are subdivided into "powiats" (often referred to in English as counties), and these are further divided into "gminas" (also known as communes or municipalities). Major cities normally have the status of both "gmina" and "powiat". Poland has 16 voivodeships, 380 powiats (including 66 cities with "powiat" status), and 2,478 "gminas".
The Polish armed forces are composed of five branches: Land Forces ("Wojska Lądowe"), Navy ("Marynarka Wojenna"), Air Force ("Siły Powietrzne"), Special Forces ("Wojska Specjalne") and Territorial Defence Force ("Wojska Obrony Terytorialnej") – a military component of the Polish armed forces created in 2016. Plans call for the force, once fully active, to consist of 53,000 people who will be trained and equipped to counter potential hybrid warfare threats. The military is subordinate to the Minister for National Defence. However, its commander-in-chief is the President of the Republic.
The Polish army's size is estimated at around 101,500 soldiers (2016). The Polish Navy primarily operates in the Baltic Sea and conducts operations such as maritime patrol, search and rescue for the section of the Baltic under Polish sovereignty, as well as hydrographic measurements and research. Also, the Polish Navy played a more international role as part of the 2003 invasion of Iraq, providing logistical support for the United States Navy. The current position of the Polish Air Force is much the same; it has routinely taken part in Baltic Air Policing assignments, but otherwise, with the exception of a number of units serving in Afghanistan, has seen no active combat. In 2003, the F-16C Block 52 was chosen as the new general multi-role fighter for the air force, the first deliveries taking place in November 2006.
The most important mission of the armed forces is the defence of Polish territorial integrity and Polish interests abroad. Poland's national security goal is to further integrate with NATO and European defence, economic, and political institutions through the modernisation and reorganisation of its military. The armed forces are being re-organised according to NATO standards, and since 2010 the transition to an entirely contract-based military has been completed. Compulsory military service for men was discontinued in 2008; from 2007 until conscription ended, mandatory service lasted nine months.
Polish military doctrine reflects the same defensive nature as that of its NATO partners. From 1953 to 2009 Poland was a large contributor to various United Nations peacekeeping missions. The Polish Armed Forces took part in the 2003 invasion of Iraq, deploying 2,500 soldiers in the south of that country and commanding the 17-nation Multinational force in Iraq.
The military was temporarily, but severely, affected by the 2010 Polish Air Force Tu-154 crash, which killed the Chief of the Army's General Staff Franciszek Gągor and Air Force commanding general Andrzej Błasik, among others.
Currently, Poland's military is going through a significant modernisation phase, due to be completed in 2022. The government plans to spend up to 130 billion złoty (US$34 billion), though the final total may reach 235 billion złoty (US$62 billion), to replace dated equipment and purchase new weapons systems. Under the program, the military plans to purchase new tracked armoured personnel carriers, self-propelled howitzers, utility and attack helicopters, a mid-range surface-to-air missile system, attack submarines, minehunters, and coastal anti-ship missiles. The army also plans to modernize its existing inventory of main battle tanks and update its stock of small arms. Poland currently spends 2% of its GDP on defence, a share expected to grow to 2.5% of GDP by 2030. In May 2017 the Ministry of National Defence assured that the Polish army would be increased to 250,000 active personnel.
Poland has a highly developed system of law enforcement with a long history of effective policing by the State Police Service (Policja). The structure of law enforcement agencies within Poland is a multi-tier one, with the State Police providing criminal-investigative services, Municipal Police serving to maintain public order and a number of other specialized agencies, such as the Polish Border Guard, acting to fulfill their assigned missions. In addition to these state services, private security companies are also common, although they possess no powers assigned to state agencies, such as, for example, the power to make an arrest or detain a suspect.
Emergency services in Poland consist of the emergency medical services, search and rescue units of the Polish Armed Forces and State Fire Service. Emergency medical services in Poland are, unlike other services, provided for by local and regional government.
Since joining the European Union in 2004, all of Poland's emergency services have undergone major restructuring and have, in the process, acquired large amounts of new equipment and trained staff. All emergency services personnel are uniformed, and security services can be easily recognized during regular patrols in both large urban areas and smaller suburban localities.
Poland's economy and Gross Domestic Product (GDP) is currently the sixth largest in the European Union by nominal standards, and the fifth largest by purchasing power parity. It is also one of the fastest growing within the Union. Around 60% of the employed population belongs to the tertiary service sector, 30% to industry and manufacturing, and the remaining 10% to the agricultural sector. Although Poland is a member of EU's single market, the country has not adopted the Euro as legal tender and maintains its own currency – the Polish złoty (zł, PLN).
Having a strong domestic market, low private debt, a low unemployment rate, a flexible currency, and no dependence on a single export sector, Poland is the only European economy to have avoided the recession of 2008. Since the fall of the communist government, Poland has pursued a policy of liberalising the economy. It is an example of the transition from a centrally planned to a primarily market-based economy. The country is the 25th largest exporter of goods and services in the world, and its most successful exports include machinery, furniture, food products, clothing, shoes and cosmetics. These account for approximately 55% of total GDP, as of 2018. Poland's largest trading partners include Germany, the Czech Republic, the United Kingdom, France and Italy.
The Polish banking sector is the largest in the Central and Eastern European region, with 32.3 branches per 100,000 adults. The banks are the largest and most developed sector of the country's financial markets. They are regulated by the Polish Financial Supervision Authority. During the transformation to a market-oriented economy, the government privatized several banks, recapitalized the rest, and introduced legal reforms that made the sector more competitive. This has attracted a significant number of strategic foreign investors. Poland's banking sector has approximately 5 national banks, a network of nearly 600 cooperative banks and 18 branches of foreign-owned banks. In addition, foreign investors have controlling stakes in nearly 40 commercial banks, which make up 68% of the banking capital.
Poland has many private farms in its agricultural sector, with the potential to become a leading producer of food in the European Union. The biggest money-makers abroad include smoked and fresh fish, fine chocolate, and dairy products, meats and specialty breads, with the exchange rate conducive to export growth. Structural reforms in health care, education, the pension system, and state administration have resulted in larger-than-expected fiscal pressures. Warsaw leads Central Europe in foreign investment.
Since the gradual opening of the European Union labour market from 2004, Poland has experienced mass emigration of over 2.3 million nationals, due to higher wages abroad and high unemployment at home, even as Poland avoided the Great Recession of 2008. The emigration has increased the average wages for the workers who remained in Poland, in particular for those with intermediate level skills. Unemployment also gradually decreased; in September 2018 the unemployment rate in Poland was estimated at 5.7%, one of the lowest in the European Union. In 2019, Poland passed a law that would exempt workers under the age of 26 from income tax.
Products and goods manufactured in Poland include: electronics, buses and trams (Solaris, Solbus), helicopters and planes (PZL Świdnik, PZL Mielec), trains (Pesa SA, Newag), ships (Gdańsk Shipyard, Szczecin Shipyard, Gdynia Polish Navy Shipyard), military equipment (FB "Łucznik" Radom), medicines, food (Tymbark, Hortex, E. Wedel), clothes (LPP), glass, pottery (Bolesławiec), chemical products and others. Poland is also one of the world's biggest producers of copper, silver and coal, as well as potatoes, rye, rapeseed, cabbage, apples, strawberries and ribes.
Poland is recognised as a regional economic leader within Central Europe, with nearly 40 percent of the 500 biggest companies in the region (by revenues) as well as a high globalisation rate. The country's largest firms compose the WIG30 index, which is traded on the Warsaw Stock Exchange.
The economic transition in 1989 has resulted in a dynamic increase in the number and value of investments conducted by Polish corporations abroad. Over a quarter of these companies have participated in a foreign project or joint venture, and 72 percent decided to continue foreign expansion. According to reports made by the National Bank of Poland, the value of Polish foreign direct investments reached almost 300 billion PLN at the end of 2014. The Central Statistical Office estimated that in 2014 there were around 1,437 Polish corporations with interests in 3,194 foreign entities.
Well-known Polish brands include, among others, PKO Bank Polski, PKN Orlen, PGE Energy, PZU, PGNiG, Tauron Group, Lotos Group, KGHM Polska Miedź, Asseco, Plus, Play, LOT Polish Airlines, Poczta Polska, Polish State Railways (PKP), Biedronka, and TVP.
The list includes the largest companies by turnover in 2019:
Poland experienced a significant increase in the number of tourists after joining the European Union in 2004. With nearly 21 million international arrivals in 2019, tourism contributes considerably to the overall economy and makes up a relatively large proportion of the country's service market.
Tourist attractions in Poland vary, from the mountains in the south to the sandy beaches in the north, with a trail of nearly every architectural style. The most visited city is Kraków, the former capital of Poland, which serves as a relic of the Polish Golden Age and the Renaissance. Kraków also hosted the coronations of most Polish monarchs at Wawel, the nation's chief historical landmark. Among other notable sites in the country is Wrocław, one of the oldest cities in Poland, famous for its dwarfs. Wrocław possesses a large market square with two city halls, as well as Poland's oldest zoological garden, which houses one of the world's largest numbers of animal species. The Polish capital Warsaw and its historical Old Town were entirely reconstructed after wartime destruction. Other cities attracting countless tourists include Gdańsk, Poznań, Lublin and Toruń, as well as the site of the German Auschwitz concentration camp in Oświęcim. A notable highlight is the 13th-century Wieliczka Salt Mine with its labyrinthine tunnels, a subterranean lake and chapels carved by miners out of rock salt beneath the ground.
Poland's main tourist offerings include outdoor activities such as skiing, sailing, mountain hiking and climbing, as well as agrotourism and sightseeing of historical monuments. Tourist destinations include the Baltic Sea coast in the north; the Masurian Lake District and Białowieża Forest in the east; and, in the south, the Karkonosze, the Table Mountains and the Tatra Mountains, where Rysy, the highest peak of Poland, and the famous Orla Perć mountain trail are located. The Pieniny and Bieszczady Mountains lie in the extreme south-east. There are over 100 castles in the country, many in the Lower Silesian Voivodeship and along the popular Trail of the Eagles' Nests. The largest castle in the world by land area is situated in Malbork, in north-central Poland.
The electricity generation sector in Poland is largely fossil-fuel–based. Many power plants nationwide use Poland's position as a major European exporter of coal to their advantage by continuing to use coal as the primary raw material in the production of their energy. In 2013, Poland ranked 48th out of 129 states in the Energy Sustainability Index. The three largest Polish coal mining firms (Węglokoks, Kompania Węglowa and JSW) extract around 100 million tonnes of coal annually. All three of these companies are key constituents of the Warsaw Stock Exchange's lead economic indexes.
Renewable forms of energy account for a smaller proportion of Poland's full energy generation capacity. However, the national government has set targets for the development of renewable energy sources in Poland which should see the portion of power produced by renewable resources climb to 15% in 2020 (in 2017 it was 10.9%). This is to be achieved mainly through the construction of wind farms and a number of hydroelectric stations.
Poland has around 164,800,000,000 m3 of proven natural gas reserves and around 96,380,000 barrels of proven oil reserves. These reserves are exploited by energy supply companies such as PKN Orlen ("the only Polish company listed in the Fortune Global 500") and PGNiG. However, the small amounts of fossil fuels naturally occurring in Poland are insufficient to satisfy the full energy consumption needs of the population. Therefore, the country is a net importer of oil and natural gas.
The largest companies supplying Poland's electricity include PGE, Tauron, Enea, and Innogy Poland.
Transport in Poland is provided by means of rail, road, marine shipping and air travel. The country is positioned in Central Europe, with its eastern and part of its northeastern border constituting the longest land border of the Schengen Area, linking it with the rest of Northern and Central Europe.
Since joining the EU in May 2004, Poland has invested large amounts of public funds into modernization projects of its extensive transport networks. The country has a good network of highways, composed of express roads and motorways. At the start of 2020, Poland had of highways in use. In addition, all local and regional roads are monitored by the National Road Rebuilding Programme, which aims to improve the quality of travel in the countryside and suburban localities.
The longest European route, E40, runs through Poland.
In 2017, the nation had of railway track, the third longest in Europe after Germany and France. Polish authorities maintain a program of improving operating speeds across the entire Polish rail network. To that end, Polish State Railways (PKP) is adopting new rolling stock which is in principle capable of speeds up to . Additionally, in December 2014, Poland began to implement high-speed rail routes connecting major Polish cities. The Polish government has revealed that it intends to connect all major cities to a future high-speed rail network by 2020. The new PKP Pendolino ETR 610 test train set the record for the fastest train in the history of Poland, reaching on 24 November 2013. On 14 December 2014, Polish State Railways started passenger service using the PKP Pendolino ED250, operating at 200 km/h on the Central Rail Line (CMK). Poland is gradually implementing the European Rail Traffic Management System. Polish regulations allow trains without ETCS to travel at speeds up to 160 km/h, trains with ETCS1 up to 200 km/h, and trains with ETCS2 over 200 km/h. Most interregional rail routes in Poland are operated by PKP Intercity, whilst regional trains are run by a number of operators, the largest of which is Polregio. The largest passenger train station in terms of the number of travelers is Wrocław Główny.
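The ETCS speed rules quoted above form a simple three-tier mapping; as an illustrative sketch (the function name and structure are my own, not anything defined by PKP or the regulations), they can be encoded as:

```python
def max_speed_kmh(etcs_level=None):
    """Maximum permitted train speed under the Polish rules quoted above.

    etcs_level: None (no ETCS fitted), 1 (ETCS Level 1), or 2 (ETCS Level 2).
    """
    if etcs_level is None:
        return 160           # no ETCS: up to 160 km/h
    if etcs_level == 1:
        return 200           # ETCS Level 1: up to 200 km/h
    # ETCS Level 2: the text only says "over 200 km/h", so no fixed cap here
    return float("inf")

# A Pendolino ED250 running at 200 km/h on the CMK fits within the ETCS1 tier.
assert max_speed_kmh(1) >= 200
```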
The air and maritime transport markets in Poland are largely well developed. Poland has a number of international airports, the largest of which is Warsaw Chopin Airport, the primary global hub for LOT Polish Airlines. The airline was established in 1929 from a merger of Aerolloyd (1922) and Aero (1925). Other major airports with international connections include John Paul II International Airport Kraków–Balice, Copernicus Airport Wrocław and Gdańsk Lech Wałęsa Airport. Poland has also begun preparations for the construction of the Central Communication Port, intended to handle up to 100 million passengers.
Seaports exist all along Poland's Baltic coast, with most freight operations using Świnoujście, Police, Szczecin, Kołobrzeg, Gdynia, Gdańsk and Elbląg as their base.
Passenger ferries link Poland with Scandinavia all year round; these services are provided from Gdańsk and Świnoujście by Polferries, from Gdynia by Stena Line and from Świnoujście by Unity Line. The Port of Gdańsk is the only port in the Baltic Sea adapted to receive oceanic vessels.
Over the course of history, the Polish people have made considerable contributions in the fields of science, technology and mathematics. Perhaps the most renowned of these contributors was Nicolaus Copernicus ("Mikołaj Kopernik"), who triggered the Copernican Revolution by placing the Sun rather than the Earth at the center of the universe. He also derived a quantity theory of money, which made him a pioneer of economics. Copernicus' achievements and discoveries are considered a basis of Polish culture and cultural identity.
Poland's tertiary education institutions (traditional universities, as well as technical, medical, and economic institutions) employ tens of thousands of researchers and staff members. There are hundreds of research and development institutes. However, in the 19th and 20th centuries many Polish scientists worked abroad; one of the most important of these exiles was Maria Skłodowska-Curie, a physicist and chemist who lived much of her life in France. In 1925 she established Poland's Radium Institute.
In the first half of the 20th century, Poland was a flourishing centre of mathematics. Outstanding Polish mathematicians formed the Lwów School of Mathematics (with Stefan Banach, Stanisław Mazur, Hugo Steinhaus, Stanisław Ulam) and Warsaw School of Mathematics (with Alfred Tarski, Kazimierz Kuratowski, Wacław Sierpiński and Antoni Zygmund). Numerous mathematicians, scientists, chemists or economists emigrated due to historic vicissitudes, among them Benoit Mandelbrot, Leonid Hurwicz, Alfred Tarski, Joseph Rotblat and Nobel Prize laureates Roald Hoffmann, Georges Charpak and Tadeusz Reichstein.
Over 40 research and development centres and 4,500 researchers make Poland the biggest research and development hub in Central and Eastern Europe. Multinational companies such as ABB, Delphi, GlaxoSmithKline, Google, Hewlett-Packard, IBM, Intel, LG Electronics, Microsoft, Motorola, Siemens and Samsung have all set up research and development centres in Poland. Companies chose Poland because of the availability of a highly qualified labour force, the presence of universities, the support of authorities, and the largest market in East-Central Europe. According to a KPMG report from 2011, 80% of Poland's current investors are content with their choice and willing to reinvest.
The public postal service in Poland is operated by "Poczta Polska" (the Polish Post). It was created on 18 October 1558, when King Sigismund II Augustus established a permanent postal route from Kraków to Venice. The service was dissolved during the foreign partitions in the 18th century. After regaining independence in 1918, Poland's postal system developed rapidly as new services were introduced including money transfers, payment of pensions, delivery of magazines, and air mail. The government-owned enterprise of Polish Post, Telegraph and Telephone ("Polska Poczta, Telegraf i Telefon") was established in 1928.
During wars and national uprisings, communication was provided mainly through the military authorities. Many important events in the history of Poland involved the postal service, like the defence of the Polish Post Office in Gdańsk in 1939, and the participation of the Polish Scouts' Postal Service in the Warsaw Uprising.
At present, the service is a modern state-owned company that provides a number of standard, express and home-delivery services. With an estimated 83,000 employees (2013), "Poczta Polska" also operates a personal tracking system for parcels. In 2017 the company adopted a strategy that assumes increasing revenues to 6.9 billion PLN by 2021; the aim is to double revenues from courier and parcel services and to achieve five-fold growth in logistics services.
Poland, with its 38,544,513 inhabitants, has the eighth-largest population in Europe and the fifth-largest in the European Union. It has a population density of 122 inhabitants per square kilometer (328 per square mile).
In recent years, Poland's population has decreased due to an increase in emigration and a decline in the birth rate. Since Poland's accession to the European Union on 1 May 2004, a significant number of Poles have emigrated, primarily to the United Kingdom, Germany and Ireland in search of better work opportunities abroad. Since 2012, the economy has improved, and emigration has decreased which allowed the nation to focus on expanding its workforce.
As a result, the Polish Minister of Development Mateusz Morawiecki suggested that Poles abroad should return to Poland. Polish minorities are still present in the neighboring countries of Ukraine, Belarus, and Lithuania, as well as in other countries (see Poles for population numbers). Altogether, the number of ethnic Poles living abroad is estimated to be around 20 million. The largest Polish diaspora communities can be found in the United States, Germany, the United Kingdom and Canada. The total fertility rate (TFR) in Poland was estimated in 2016 at 1.39 children born per woman. Poland has an average age of 41.1 years.
Polish ("język polski") belongs to the Lechitic subgroup of West Slavic languages and is closely related to Czech and Slovak. It is the official and predominant spoken language in Poland, and is also used throughout the world by Polish minorities in other countries, as well as being one of the official languages of the European Union. Its written standard is the Polish alphabet, which adds 9 letters to the Latin script ("ą", "ć", "ę", "ł", "ń", "ó", "ś", "ź", "ż"), while notably excluding "q", "v", and "x", which are used mainly in foreign or borrowed words. The deaf community uses Polish Sign Language, which belongs to the German family of sign languages.
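The alphabet description above can be checked with a short tally (an illustrative sketch; the variable names are my own):

```python
# The nine diacritic letters Polish adds to the Latin script, as listed above.
ADDITIONS = ["ą", "ć", "ę", "ł", "ń", "ó", "ś", "ź", "ż"]

# Basic Latin letters not used in native Polish words (foreign/borrowed only).
EXCLUDED = {"q", "v", "x"}

base = [c for c in "abcdefghijklmnopqrstuvwxyz" if c not in EXCLUDED]
alphabet = base + ADDITIONS

# 26 Latin letters - 3 excluded + 9 additions = the 32-letter Polish alphabet
assert len(ADDITIONS) == 9 and len(alphabet) == 32
```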
According to the "Act of 6 January 2005 on national and ethnic minorities and on the regional languages", 16 other languages have officially recognized status of minority languages: 1 regional language (Kashubian – spoken by around 366,000 people, but only 108,000 declared its everyday use in the census of 2011), 10 languages of 9 national minorities (minority groups that have their own independent state elsewhere) and 5 languages of 4 ethnic minorities (spoken by the members of minorities not having a separate state elsewhere). Jewish and Romani minorities each have 2 minority languages recognized.
Languages having the status of a national minority's language are Armenian, Belarusian, Czech, German, Yiddish, Hebrew, Lithuanian, Russian, Slovak and Ukrainian. Languages having the status of an ethnic minority's language are Karaim, Rusyn (called "Lemko" in Poland) and Tatar. In addition, official recognition is granted to two Romani languages: Polska Roma and Bergitka Roma.
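The counts given in the Act add up as stated; a quick illustrative tally, grouping the names exactly as in the text above:

```python
regional = ["Kashubian"]  # 1 regional language
# 10 languages of 9 national minorities
# (the Jewish minority has two: Yiddish and Hebrew)
national_minority = ["Armenian", "Belarusian", "Czech", "German", "Yiddish",
                     "Hebrew", "Lithuanian", "Russian", "Slovak", "Ukrainian"]
# 5 languages of 4 ethnic minorities
# (the Romani minority has two: Polska Roma and Bergitka Roma)
ethnic_minority = ["Karaim", "Rusyn (Lemko)", "Tatar",
                   "Polska Roma", "Bergitka Roma"]

total = len(regional) + len(national_minority) + len(ethnic_minority)
assert total == 16  # the "16 other languages" of the 2005 Act
```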
Official recognition of a language provides certain rights (under conditions prescribed by law): to education in that language, to having the language established as a secondary administrative or auxiliary language in bilingual municipalities, and to financial support from the state for the promotion of that language. Currently, German holds such status in 33 gminas, Lithuanian in 1, and Belarusian and Kashubian in 5 gminas each.
More than 50% of Polish citizens declare knowledge of the English language, followed by German (38%) and Russian (34%).
The formation of the Polish identity and ethnicity can be traced to the 10th century, when Duke Mieszko I politically unified the Lechitic tribes of Western Polans, Mazovians, Silesians, Vistulans, Pomeranians, Lendians and others, which inhabited the area of central Europe between the Oder River in the west and the Bug River in the east, and between the Carpathian and Sudetes mountains in the south and the Baltic Sea in the north. This unification was then reinforced by the acceptance of Christianity, under the auspices of the Latin Church.
Following the formation of the Polish–Lithuanian Commonwealth in 1569, the country over the next two centuries contained many languages, cultures and religions. The Commonwealth was primarily composed of three nations: Poles, Lithuanians, and Ruthenians (Ukrainians and Belarusians) — there were also sizable minorities of groups such as Germans, Jews, Latvians, Scots, Armenians, Mennonites and Tatars. After the partitions of Poland at the end of the 18th century, the bulk of the ethnic Polish population was primarily located in Congress Poland and in the Galicia and Poznań provinces. When Poland regained its independence in 1918, Poles constituted the majority of the population throughout the interwar period, with sizable Ukrainian, Belarusian, Jewish and German minorities.
Today, Poland is primarily inhabited by ethnic Poles. In the 2011 census, 37,310,341 (96.88%) reported Polish as their first identity, 435,750 (1.13%) Silesian, 17,746 (0.04%) Kashubian, 74,464 (0.19%) German, 38,387 (0.09%) Ukrainian, and 36,399 (0.09%) Belarusian. Other identities were reported by 88,577 people (0.23%) and 521,470 people (1.35%) did not report any identity. Other minority national and ethnic groups in Poland include the Romani, Polish Jews, Lemkos, Lithuanians, Armenians, Vietnamese, Slovaks, Czechs, Russians, Greeks and Lipka Tatars. Ethnic Poles themselves can be divided into many diverse regional ethnographic sub-groups, such as Masovians, Kurpie, Masurians, Kashubians, Krakowiacy, Lubliniacy, Lachy Sądeckie, Pogórzanie, Gorals, Silesians and Silesian Gorals, among many others.
The statistics on Ukrainians do not include recently arrived migrant workers. More than 1.7 million Ukrainian citizens worked legally in Poland in 2017.
For centuries the Lechitic people inhabiting the lands of modern-day Poland have practiced various forms of paganism known as "Rodzimowierstwo", or "native faith". In the year 966, Duke Mieszko I converted to Christianity, and submitted to the authority of the Roman Catholic Church. This event came to be known as the Baptism of Poland. However, this did not put an end to pagan beliefs in the country. The persistence was demonstrated by a series of rebellions known as the Pagan reaction in the first half of the 11th century, which also showed elements of a peasant uprising against landowners and feudalism, and led to a mutiny that destabilized the country.
Since then, Poland has been a predominantly Catholic nation; however, throughout its history, religious tolerance was an important part of its political culture. In 1264, the Statute of Kalisz, also known as the Charter of Jewish Liberties, granted Jews living in the Polish lands unprecedented legal rights not found anywhere in Europe. In 1424, the Polish king was pressed by the bishops to issue the Edict of Wieluń, outlawing early Protestant Hussitism. Then in 1573, the Warsaw Confederation marked the formal beginning of extensive religious freedoms granted to all faiths in the Polish-Lithuanian Commonwealth. The act was not imposed by a king or a consequence of war, but rather resulted from the actions of members of the Polish-Lithuanian society. It was also influenced by the events of the 1572 French St. Bartholomew's Day Massacre, which prompted the Polish-Lithuanian nobility to ensure that no monarch would ever be able to carry out such atrocities in Poland. The act is also credited with keeping the Polish-Lithuanian Commonwealth out of the Thirty Years' War, fought between German Protestants and Catholics.
Religious tolerance in Poland spurred many theological movements such as Calvinist Polish Brethren and a number of other Protestant groups, as well as atheists, such as ex-Jesuit philosopher Kazimierz Łyszczyński, one of the first atheist thinkers in Europe. Also, in the 16th century, Anabaptists from the Netherlands and Germany settled in Poland—after being persecuted in Western Europe—and became known as the Vistula delta Mennonites.
In 2014, an estimated 87% of the population belonged to the Catholic Church. Though rates of religious observance are lower at 52%, Poland remains one of the most religious countries in Europe. Contemporary religious minorities include Polish Orthodox (about 506,800), various Protestants (about 150,000) — including 77,500 Lutherans in the Evangelical-Augsburg Church, 23,000 Pentecostals in the Pentecostal Church in Poland, 10,000 Adventists in the Seventh-day Adventist Church and other smaller Evangelical denominations — Jehovah's Witnesses (126,827), Eastern Catholics, Mariavites, Jews, and Muslims, including the Tatars of Białystok region. There are also several thousand neopagans, some of whom are members of the Native Polish Church.
From 16 October 1978 until his death on 2 April 2005, Karol Józef Wojtyła was Pope of the Roman Catholic Church. He is the only Polish Pope to date. Additionally he is credited with having played a significant role in hastening the downfall of communism in Poland and throughout Central and Eastern Europe.
Freedom of religion is guaranteed by the 1989 statute of the Polish Constitution, enabling the emergence of additional denominations. The Concordat between the Holy See and Poland guarantees the teaching of religion in state schools. According to a 2007 survey, 72% of respondents were not opposed to religious instruction in public schools; alternative courses in ethics are available only in one percent of the entire public educational system.
Famous sites of Roman Catholic pilgrimage in Poland include the Monastery of Jasna Góra in the southern Polish city of Częstochowa, the Basilica of Our Lady of Licheń, and the Divine Mercy Sanctuary in Kraków. Many tourists also visit the family home of John Paul II in Wadowice outside Kraków. The Christ the King statue in Świebodzin is the tallest statue of Jesus in the world. Christian Orthodox pilgrims visit Mount Grabarka near Grabarka-Klasztor, and Hasidic Jews travel annually to the grave of a great rabbi in Leżajsk.
Poland's healthcare system is based on an all-inclusive insurance system. State subsidised healthcare is available to all Polish citizens who are covered by this general health insurance program. However, it is not compulsory to be treated in a state-run hospital as a number of private medical complexes exist nationwide.
All medical service providers and hospitals in Poland are subordinate to the Polish Ministry of Health, which provides oversight and scrutiny of general medical practice as well as being responsible for the day-to-day administration of the healthcare system. In addition to these roles, the ministry is tasked with the maintenance of standards of hygiene and patient-care.
Hospitals in Poland are organised according to the regional administrative structure; as a result, most towns have their own hospital "(Szpital Miejski)". Larger and more specialised medical complexes tend only to be found in larger cities, with some even more specialised units located only in the capital, Warsaw. However, all voivodeships have their own general hospital (most have more than one), all of which are obliged to have a trauma centre; these types of hospital, which are able to deal with almost all medical problems, are called 'regional hospitals' "(Szpital Wojewódzki)". The last category of hospital in Poland is that of specialised medical centres, an example of which would be the Skłodowska-Curie Institute of Oncology, Poland's leading and most highly specialised centre for the research and treatment of cancer.
In 2012, the Polish health-care industry experienced further transformation. Hospitals were given priority for refurbishment where necessary. As a result of this process, many hospitals were updated with the latest medical equipment.
According to the Human Development Report from 2018, the average life expectancy at birth is 78.5 years (74.6 years for an infant male and 82.4 years for an infant female).
The Commission of National Education ("Komisja Edukacji Narodowej"), established in 1773, was the world's first state ministry of education. The education of Polish society was a goal of the nation's rulers as early as the 12th century. The library catalogue of the Cathedral Chapter of Kraków, dating back to 1110, shows that in the early 12th century Polish academia had access to European and Classical literature. The Jagiellonian University was founded in 1364 by King Casimir III in Kraków—the school is the world's 19th oldest university.
The Programme for International Student Assessment, coordinated by the Organisation for Economic Co-operation and Development, ranks Poland's educational system higher than the OECD average.
Education in Poland starts at the age of five or six (with the particular age chosen by the parents) for the '0' class (kindergarten) and six or seven years in the 1st class of primary school (Polish "szkoła podstawowa"). It is compulsory for children to complete one year of formal education before entering the 1st class at no later than 7 years of age. Corporal punishment of children in schools has been officially prohibited since 1783 (before the partitions) and has been criminalised since 2010 (in schools as well as at home).
At the end of the 6th class when students are 13, students take a compulsory exam that will determine their acceptance and transition into a specific lower secondary school ("gimnazjum"—middle school or junior high). They will attend this school for three years during classes 7, 8, and 9. Students then take another compulsory exam to determine the upper secondary level school they will attend. There are several alternatives, the most common being the three years in a "liceum" or four years in a technikum. Both end with a maturity examination (matura—similar to French baccalauréat), and may be followed by several forms of higher education, leading to licencjat or inżynier (the Polish Bologna Process first cycle qualification), magister (second cycle qualification) and eventually doktor (third cycle qualification).
In Poland, there are 500 university-level institutions for the pursuit of higher education. There are 18 fully accredited traditional universities, 20 technical universities, 9 independent medical universities, 5 universities for the study of economics, 9 agricultural academies, 3 pedagogical universities, a theological academy, 3 maritime service universities and 4 national military academies. Also, there are a number of higher educational institutions dedicated to the teaching of the arts—amongst these are the 7 academies of music.
The culture of Poland is closely connected with its intricate 1,000-year history and forms an important constituent in western civilization. With origins in the culture of the tribal Lechites, over time Polish culture has been influenced by its interweaving ties with the neighbouring Germanic and Latinate worlds as well as in continual dialogue with the many other ethnic groups and minorities living in Poland. The people of Poland have traditionally been seen as hospitable to artists from abroad and eager to follow cultural and artistic trends popular in other countries. In the 19th and 20th centuries the Polish focus on cultural advancement often took precedence over political and economic activity. These factors have contributed to the versatile nature of Polish art.
The appreciation of Poland's traditions, history and cultural heritage is commonly known as "Polonophilia".
Artists from Poland, including famous musicians such as Chopin, Rubinstein, Paderewski, Penderecki and Wieniawski, and traditional, regionalized folk composers create a lively and diverse music scene, which even recognizes its own music genres, such as sung poetry and disco polo.
The origins of Polish music can be traced to the 13th century; manuscripts have been found in Stary Sącz containing polyphonic compositions related to the Parisian Notre Dame School. Other early compositions, such as the melody of "Bogurodzica" and "God Is Born" (a coronation polonaise for Polish kings by an unknown composer), may also date back to this period; however, the first known notable composer, Nicholas of Radom, lived in the 15th century. During the 16th century, two main musical groups – both based in Kraków and belonging to the King and Archbishop of Wawel – led to the rapid development of Polish renaissance music. Diomedes Cato, a native-born Italian who lived in Kraków from about the age of five, became a renowned lutenist at the court of Sigismund III, and not only imported some of the musical styles from southern Europe, but blended them with native folk music.
In the 17th and 18th centuries, Polish baroque composers mostly wrote either liturgical music or secular compositions such as concertos and sonatas for voices or instruments. At the end of the 18th century, Polish classical music evolved into national forms like the polonaise. Also, Wojciech Bogusławski's opera, which premiered in 1794, is regarded as the first Polish national opera.
Traditional Polish folk music has had a major effect on the works of many well-known Polish composers, and no more so than on Fryderyk Chopin, a widely recognised national hero of the arts. All of Chopin's works involve the piano and are technically demanding, emphasising nuance and expressive depth. As a great composer, Chopin invented the musical form known as the instrumental ballade and made major innovations to the piano sonata, mazurka, waltz, nocturne, polonaise, étude, impromptu and prélude. He was also the composer of a number of polonaises which borrowed heavily from traditional Polish folk music. It is largely thanks to him that such pieces gained great popularity throughout Europe during the 19th century. Several Polish composers such as Szymanowski would later go on to draw inspiration from Chopin's folk-influenced style. Nowadays the most distinctive folk music can be heard in the towns and villages of the mountainous south, particularly in the region surrounding the winter resort town of Zakopane.
Today Poland has an active music scene, with the jazz and metal genres being particularly popular among the contemporary populace. Polish jazz musicians such as Krzysztof Komeda created a unique style, which was most famous in the 1960s and 1970s and continues to be popular to this day. Since the fall of communism throughout Europe, Poland has become a major venue for large-scale music festivals, chief among which are the Open'er Festival, Opole Festival and Sopot Festival.
Art in Poland has always reflected European trends while maintaining its unique character. The Kraków Academy of Fine Arts, later developed by Jan Matejko, produced monumental portrayals of customs and significant events in Polish history. Other institutions such as the Academy of Fine Arts in Warsaw were more innovative and focused on both historical and contemporary styles. In recent years, art academies such as the Kraków School of Art and Fashion Design, Art Academy of Szczecin, University of Fine Arts in Poznań and Geppert Academy of Fine Arts in Wrocław gained much recognition.
Perhaps the most prominent and internationally admired Polish artist was Tamara de Lempicka, who specialized in the style of Art Deco. Lempicka was described as "the first woman artist to become a glamour star." Another notable artist was Caziel, born Zielenkiewicz, who represented Cubism and Abstraction in France and England.
Prior to the 19th century only Daniel Schultz and Italian-born Marcello Bacciarelli had the privilege of being recognized abroad. The Young Poland movement witnessed the birth of modern Polish art, and engaged in a great deal of formal experimentation led by Jacek Malczewski, Stanisław Wyspiański, Józef Mehoffer, and a group of Polish Impressionists. Stanisław Witkiewicz was an ardent supporter of Realism, its main representative being Józef Chełmoński, while Artur Grottger specialized in Romanticism. Within historically-orientated circles, Henryk Siemiradzki dominated with his monumental Academic Art and ancient Roman theme.
Since the inter-war years, Polish art and documentary photography has enjoyed worldwide fame, and in the 1960s the Polish School of Posters was formed. Throughout the entire country, many national museums and art institutions hold valuable works by famous masters. Major museums in Poland include the National Museum in Warsaw, Poznań, Wrocław, Kraków, and Gdańsk, as well as the Museum of John Paul II Collection, and the Wilanów Museum. Important collections are also held at the Royal Castle in Warsaw, Wawel Castle and in the Palace on the Isle. The most distinguished painting in Poland is "Lady with an Ermine" by Leonardo da Vinci, held at the Czartoryski Museum in Kraków. Although not Polish, the work has had a strong influence on Polish culture and has often been associated with Polish identity.
Polish cities and towns reflect a whole spectrum of European architectural styles. Romanesque architecture is represented by St. Andrew's Church in Kraków, while St. Mary's Church in Gdańsk is characteristic of the Brick Gothic style found in Poland. Richly decorated attics and arcade loggias are the common elements of Polish Renaissance architecture, as evident in the City Hall in Poznań. For some time the late renaissance style known as mannerism, most notably in the Bishop's Palace in Kielce, coexisted with the early baroque style, typified in the Church of Saints Peter and Paul in Kraków.
History has not been kind to Poland's architectural monuments. Nonetheless, a number of ancient structures have survived: castles, churches, and stately homes, often unique in the regional or European context. Some of them have been painstakingly restored, like Wawel Castle, or completely reconstructed, including the Old Town and Royal Castle of Warsaw and the Old Town of Gdańsk.
The architecture of Gdańsk is mostly of the Hanseatic variety, a Gothic style common among the former trading cities along the Baltic sea and in the northern part of Central Europe. The architectural style of Wrocław is mainly representative of German architecture, since it was for centuries located within the Holy Roman Empire. The centres of Kazimierz Dolny and Sandomierz on the Vistula are good examples of well-preserved medieval towns. Poland's ancient capital, Kraków, ranks among the best-preserved Gothic and Renaissance urban complexes in Europe.
The second half of the 17th century is marked by baroque architecture. Side towers, such as those of Branicki Palace in Białystok, are typical for the Polish baroque. The classical Silesian baroque is represented by the University in Wrocław. The profuse decorations of the Branicki Palace in Warsaw are characteristic of the rococo style. The centre of Polish classicism was Warsaw under the rule of the last Polish king Stanisław II Augustus.
The Palace on the Water is the most notable example of Polish neoclassical architecture. Lublin Castle represents the Gothic Revival style in architecture, while the Izrael Poznański Palace in Łódź is an example of eclecticism.
Traditional folk architecture in the villages and small towns scattered across the vast Polish countryside is characterized by its extensive use of wood and bare brick as primary building materials, common for Central Europe. Some of the best preserved and oldest structures include ancient stone temples in Silesia and fortified wooden churches across southeastern Poland in the Beskids and Bieszczady regions of the Carpathian mountains. Numerous examples of secular structures such as Polish manor houses ("dworek"), farmhouses, granaries, mills, barns and country inns can still be found across some Polish regions. However, traditional construction methods faded in the first decades of the 1900s, when Poland's population experienced a demographic shift to urban dwelling away from the countryside.
The earliest Polish literature dates back to the 12th century, when Poland's official language was Latin. Within Polish literary customs, it is appropriate to highlight the published works concerning Poland not written by ethnic Poles. The most vivid example is Gallus Anonymus, a foreign monk and the first chronicler who described Poland and its territories.
The first documented phrase in the Polish language reads ""Day ut ia pobrusa, a ti poziwai"" ("Let me grind, and you take a rest"), reflecting the culture of early Poland. It was composed by an abbot named Piotr (Peter) within the Latin language chronicle "Liber fundationis" from between 1269 and 1273, which described the history of the Cistercian monastery in Henryków, Silesia. The sentence was allegedly uttered almost a hundred years earlier by a Bohemian settler, who expressed pity for his spouse's duty of grinding by the quern-stone. The sentence has been included in the UNESCO Memory of the World Register.
The most notable medieval records in Latin and the Old Polish language include the oldest extant manuscript of fine Polish prose, entitled the "Holy Cross Sermons", as well as the earliest Polish-language Bible, the so-called Bible of Queen Sophia. One of the first printing houses was established by Kasper Straube in the 1470s, while Jan Haller was considered the pioneer of commercial print in Poland. Haller's Calendarium cracoviense, an astronomical wall calendar from 1474, is Poland's oldest surviving print.
The tradition of extending Polish historiography in Latin was subsequently inherited by Vincent Kadłubek, Bishop of Kraków in the 13th century, and Jan Długosz in the 15th century. This practice, however, was abandoned by Jan Kochanowski, who became one of the first Polish Renaissance authors to write most of his works in Polish, along with Mikołaj Rej. Poland also hosted many famed poets and writers from abroad like Filippo "Kallimach" Buonaccorsi, Conrad Celtes and Laurentius Corvinus. A Polish writer who utilized Latin as his principal tool of expression was Klemens "Ianicius" Janicki, one of the most renowned Latin poets of his time, who was laureled by the Pope. Other writers of the Polish Renaissance include Johannes Dantiscus, Andreus Fricius Modrevius, Matthias Sarbievius and Piotr Skarga. Throughout this period Poland also experienced the early stages of Protestant Reformation. The main figure of Polish Reformation was John Laski, who, with the permission of King Edward VI of England, created the European Protestant Congregation of London in 1550.
During the Polish Baroque era, the Jesuits greatly influenced Polish literature and literary techniques, often dwelling on God and religious matters. The leading baroque poet was Jan Andrzej Morsztyn, who incorporated Marinism into his publications. Jan Chryzostom Pasek, also a respected baroque writer, is mostly remembered for his tales and memoirs reflecting sarmatian culture in the Polish-Lithuanian Commonwealth. Subsequently, the Polish Enlightenment was dominated by Samuel Linde, Hugo Kołłątaj, Izabela Czartoryska, Julian Ursyn Niemcewicz and two Polish monarchs, Stanisław I and Stanisław II Augustus. In 1776, Ignacy Krasicki composed the first proper Polish novel, entitled The Adventures of Mr. Nicholas Wisdom, which was a milestone for Polish literature.
Among the best known Polish Romantics are the "Three Bards"–the three national poets active in the age of foreign partitions–Adam Mickiewicz, Juliusz Słowacki and Zygmunt Krasiński. Adam Mickiewicz is widely regarded as one of the greatest Polish, Slavic and European poets. He is known primarily for the national epic poem "Pan Tadeusz", a masterpiece of Polish literature.
A Polish prose poet of the highest order, Joseph Conrad, the son of dramatist Apollo Korzeniowski, won worldwide fame with his English-language novels and stories that are informed with elements of the Polish national experience. Conrad's novels, including "Heart of Darkness", "Nostromo" and "Victory", are regarded as among the finest works ever written, placing Conrad among the greatest novelists of all time.
In the 20th century, five Polish novelists and poets were awarded the Nobel Prize in Literature–Henryk Sienkiewicz for "Quo Vadis", Władysław Reymont for "The Peasants", Isaac Bashevis Singer, Czesław Miłosz and Wisława Szymborska. In 2019, Polish author Olga Tokarczuk was awarded the Nobel Prize in Literature for the year 2018.
The history of Polish cinema is as long as the history of cinematography itself. Over the decades, Poland has produced outstanding directors, film producers, cartoonists and actors who achieved world fame, especially in Hollywood. Moreover, Polish inventors played an important role in the development of world cinematography and modern-day television. Among the most famous directors and producers who worked in Poland as well as abroad are Roman Polański, Andrzej Wajda, Samuel Goldwyn, the Warner brothers (Harry, Albert, Sam, and Jack), Max Fleischer, Lee Strasberg, Agnieszka Holland and Krzysztof Kieślowski.
In the 19th century, throughout partitioned Poland, numerous amateur inventors, such as Kazimierz Prószyński, were eager to construct a film projector. In 1894, Prószyński succeeded in creating the Pleograph, one of the first cameras in the world. The invention, which both took photographs and projected pictures, was built before the Lumière brothers lodged their patent. He also patented the Aeroscope, the first successful hand-held film camera. In 1897, Jan Szczepanik obtained a British patent for his Telectroscope. This prototype of television could easily transmit image and sound, thus allowing a live remote view. Following the invention of appropriate apparatus and technological development in the ensuing years, his then-impossible concept became reality.
Polish cinema developed rapidly in the interwar period. The most renowned star of the silent film era was Polish actress Pola Negri. During this time, the Yiddish cinema also evolved in Poland. Films in the Yiddish language with Jewish themes, such as "The Dybbuk" (1937), played an important part in pre-war Polish cinematography. In 1945 the government established 'Film Polski', a state-run film production and distribution organization, with director Aleksander Ford as the head of the company. Ford's "Knights of the Teutonic Order" (1960) was viewed by millions of people in the Soviet Union, Czechoslovakia and France. This success was followed by the popular historical films of Jerzy Hoffman and Andrzej Wajda. Wajda's 1975 film "The Promised Land" was nominated at the 48th Academy Awards.
In 2015, "Ida" by Paweł Pawlikowski won the Academy Award for Best Foreign Language Film. In 2019, Pawlikowski received an Academy Award for Best Director nomination for his historical drama "Cold War". Other well-known Polish Oscar-winning productions include "The Pianist" (2002) by Roman Polański.
Poland has a number of major media outlets, chief among which are the national television channels. TVP is Poland's public broadcasting corporation; about a third of its income comes from a broadcast receiver licence, while the rest is made through revenue from commercials and sponsorships. State television operates two mainstream channels, TVP 1 and TVP 2, as well as regional programs for each of the country's 16 voivodeships (as TVP 3). In addition to these general channels, TVP runs a number of genre-specific programmes such as TVP Sport, TVP Historia, TVP Kultura, TVP Rozrywka, TVP Seriale and TVP Polonia; the latter is a state-run channel dedicated to the transmission of Polish language television for the Polish diaspora abroad.
Poland has several 24-hour news channels: Polsat News, Polsat News 2, TVP Info, TVN 24, TVN 24 Biznes i Świat, TV Republika and WPolsce.pl.
In Poland, there are also daily newspapers like "Gazeta Wyborcza" ("Electoral Gazette"), "Rzeczpospolita" ("The Republic") and "Gazeta Polska Codziennie" ("Polish Daily Newspaper") which provide traditional opinion and news, and tabloids such as "Fakt" and "Super Express". "Rzeczpospolita", founded in 1920, is one of the oldest newspapers still in operation in the country. Weeklies include Tygodnik Angora, Polityka, Wprost, Newsweek Polska, Gość Niedzielny and Gazeta Polska.
Poland has also emerged as a major hub for video game developers in Europe, with the country now being home to hundreds of studios. Among the most successful ones are CD Projekt, Techland, CI Games and People Can Fly. Some of the most popular video games developed in Poland include "The Witcher". Other notable games include "Bulletstorm", "Call of Juarez", "Painkiller", "Dead Island", "Lords of the Fallen", "The Vanishing of Ethan Carter", "Dying Light", "Shadow Warrior", "Observer", "Layers of Fear", "Book of Demons" and "Cyberpunk 2077". Katowice hosts Intel Extreme Masters, one of the biggest eSports events in the world.
Polish cuisine has evolved over the centuries to become highly eclectic due to Poland's history. Polish cuisine shares many similarities with other Central European cuisines, especially German and Austrian as well as Jewish, French, Italian and Turkish culinary traditions. Polish-styled cooking in other cultures is often referred to as "cuisine à la polonaise".
Polish dishes are usually rich in meat, especially pork, chicken and beef (depending on the region), winter vegetables (sauerkraut cabbage in "bigos"), and spices. It is also characteristic in its use of various kinds of noodles, the most notable of which are kluski, as well as cereals such as "kasha" (from the Polish word kasza) and a variety of breads like the world-renowned bagel. Polish cuisine is hearty and uses a lot of cream and eggs. Festive meals such as the meatless Christmas Eve dinner ("Wigilia") or Easter breakfast could take days to prepare in their entirety.
The main course usually includes a serving of meat, such as a roast, chicken, or "kotlet schabowy" (breaded pork cutlet), vegetables, side dishes and salads, including "surówka" – shredded root vegetables with lemon and sugar (carrot, celeriac, seared beetroot) – or sauerkraut. The side dishes are usually potatoes, rice or "kasza" (cereals). Meals conclude with a dessert such as "sernik" (cheesecake), "makowiec" (poppy seed pastry), or "napoleonka" (cream pie), and tea.
The Polish national dishes are "bigos"; "pierogi"; "kielbasa"; "kotlet schabowy" (breaded cutlet); "gołąbki" (cabbage rolls); "zrazy" (roulade); "pieczeń" (roast); sour cucumber soup ("zupa ogórkowa"); mushroom soup ("zupa grzybowa", quite different from the North American cream of mushroom); "zupa pomidorowa" (tomato soup); "rosół" (a variety of meat broth); "żurek" (sour rye soup); "flaki" (tripe soup); and "barszcz" and "chłodnik", among others.
Traditional alcoholic beverages include honey mead, widespread since the 13th century, beer, wine and vodka (old Polish names include "okowita" and "gorzała"). The world's first written mention of vodka originates from Poland. The most popular alcoholic drinks at present are beer and wine, which took over from vodka, more popular in the years 1980–1998. Tea has remained common in Polish society since the 19th century, whilst coffee has been drunk widely since the 18th century. Other frequently consumed beverages include various mineral waters and juices, soft drinks popularized by the fast-food chains since the late 20th century, as well as buttermilk, soured milk and kefir.
Volleyball and Association football are among the country's most popular sports, with a rich history of international competitions. Track and field, basketball, handball, boxing, MMA, motorcycle speedway, ski jumping, cross-country skiing, ice hockey, tennis, fencing, swimming and weightlifting are other popular sports. Notable Polish sportspeople include Zbigniew Boniek, Irena Szewińska, Agnieszka Radwańska, Justyna Kowalczyk, Robert Lewandowski, Kamil Stoch and Anita Włodarczyk.
The golden era of football in Poland occurred throughout the 1970s and went on until the early 1980s, when the Polish national football team achieved its best results in any FIFA World Cup competition, finishing in third place in the 1974 and 1982 tournaments. The team won a gold medal in football at the 1972 Summer Olympics and two silver medals, in 1976 and in 1992. Poland, along with Ukraine, hosted the UEFA European Football Championship in 2012.
As of 2019, the Polish men's national volleyball team is ranked 3rd in the world. The team won a gold medal at the 1976 Montreal Olympics and three gold medals at the FIVB World Championship, in 1974, 2014 and 2018.
Mariusz Pudzianowski is a highly successful strongman competitor and has won more World's Strongest Man titles than any other competitor in the world, winning the event in 2008 for the fifth time. The first Polish Formula One driver, Robert Kubica, has brought awareness of Formula One racing to Poland. He won the 2008 Canadian Grand Prix, took part in rallying and had a full-time seat for the 2019 F1 season.
Poland has made a distinctive mark in motorcycle speedway racing thanks to Tomasz Gollob, a highly successful Polish rider. The top Ekstraliga division has one of the highest average attendances for any sport in Poland. The national speedway team of Poland, one of the major teams in international speedway, has won the Speedway World Team Cup championships three times consecutively, in 2009, 2010, and 2011. No other team has ever managed such a feat.
Poles have made significant achievements in mountaineering, in particular in the Himalayas and the winter ascending of the eight-thousanders. Polish mountains are among the tourist attractions of the country. Hiking, climbing, skiing and mountain biking attract numerous tourists every year from all over the world. Water sports are the most popular summer recreation activities, with ample locations for fishing, canoeing, kayaking, sailing and windsurfing, especially in the northern regions of the country.
Fashion has always been an important aspect of Poland and its national identity. Although the Polish fashion industry is not as famed as those of France and Italy, it has still contributed to global trends and clothing habits. Moreover, several Polish designers and stylists left a lifelong legacy of beauty inventions and cosmetics which are still in use nowadays.
Throughout history, the clothing styles in Poland often varied due to foreign influence, especially from the neighbouring countries and the Middle East. By the 17th century, the high-class nobility and magnates wore attire that somewhat mediated between Western and Ottoman styles. The outfits included a żupan, delia, kontusz, and a type of sword called karabela, brought by Armenian merchants. Wealthy Polish aristocrats also kept captive Tatars and Janissaries in their courts; this affected the national dress. The extensive multiculturalism present in the Polish-Lithuanian Commonwealth developed the ideology of "Sarmatism".
The Polish national dress as well as the fashion and etiquette of Poland also reached the royal court at Versailles in the 1700s. Some French dresses inspired by Polish outfits were called "à la polonaise", meaning "Polish-styled". The most famous example is the "robe à la polonaise" or simply "Polonaise", a woman's garment with draped and swagged overskirt, worn over an underskirt or petticoat. Another notable example is the Witzchoura, a long mantle with collar and hood, which was possibly introduced by Napoleon's Polish mistress Maria Walewska. The scope of influence also entailed furniture; rococo Polish beds with canopies became commonplace in French palaces during the 18th century.
In the early 20th century, the underdeveloped fashion and cosmetics industry in Congress Poland was heavily dominated by western styles, mostly from the United Kingdom and the United States. This inspired Polish beautician Maksymilian Faktorowicz to seek employment abroad and found a cosmetics company called Max Factor in California. In 1920, Faktorowicz invented the conjoined word "make-up", based on the verb phrase "to make up" one's face, which is now used as an alternative for "cosmetics". Faktorowicz also rose to fame by inventing modern eyelash extensions and providing services to Hollywood artists of the era.
Another Pole that contributed to the development of cosmetics was Helena Rubinstein, the founder of "Helena Rubinstein Incorporated Cosmetics Company", which made her one of the richest women in the world, and was bought by L'Oréal. One of Rubinstein's most controversial quotes was "There are no ugly women, only lazy ones".
Inglot Cosmetics, founded in 1983, is Poland's largest beauty products manufacturer and retailer, sold in 700 locations worldwide, including retail salons in New York City, London, Milan, Dubai and Las Vegas. Internationally successful models from Poland include Anja Rubik, Joanna Krupa, Jac Jagaciak, Kasia Struss, Małgosia Bela, and Magdalena Frąckowiak.
Established in 1999, the retail store Reserved is Poland's most successful clothing store chain, operating over 1,700 retail shops in 19 countries.
Performing arts
Performing arts refers to forms of art in which artists use their voices, bodies or inanimate objects to convey artistic expression. It differs from the visual arts, in which artists use paint, canvas or various materials to create physical or static art objects. Performing arts include a range of disciplines which are performed in front of a live audience.
Theatre, music, dance and object manipulation, and other kinds of performances are present in all human cultures. The history of music and dance dates to pre-historic times, whereas circus skills date to at least Ancient Egypt. Many performing arts are performed professionally. Performances can take place in purpose-built buildings, such as theatres and opera houses, on open air stages at festivals, on stages in tents such as circuses, and on the street.
Live performances before an audience are a form of entertainment. The development of audio and video recording has allowed for private consumption of the performing arts.
The performing arts often aim to express one's emotions and feelings.
Artists who participate in performing arts in front of an audience are called performers. Examples of these include actors, comedians, dancers, magicians, circus artists, musicians, and singers. Performing arts are also supported by workers in related fields, such as songwriting, choreography and stagecraft.
A performer who excels in acting, singing, and dancing is commonly referred to as a triple threat. Well-known examples of historical triple threat artists include Gene Kelly, Fred Astaire, Judy Garland, and Sammy Davis Jr.
Performers often adapt their appearance, such as with costumes and stage makeup, stage lighting, and sound.
Performing arts may include dance, music, opera, theatre and musical theatre, magic, illusion, mime, spoken word, puppetry, circus arts, and performance art.
There is also a specialized form of fine art, in which the artists "perform" their work live to an audience. This is called performance art. Most performance art also involves some form of plastic art, perhaps in the creation of props. Dance was often referred to as a plastic art during the Modern dance era.
Theatre is the branch of the performing arts concerned with acting out stories in front of an audience, using a combination of speech, gesture, music, dance, sound and spectacle. A performance may use any one or more of these elements. In addition to the standard narrative dialogue style of plays, theatre takes such forms as plays, musicals, opera, ballet, illusion, mime, classical Indian dance, kabuki, mummers' plays, improvisational theatre, comedy, pantomime, and non-conventional or contemporary forms like postmodern theatre, postdramatic theatre, or performance art.
In the context of performing arts, dance generally refers to human movement, typically rhythmic and to music, used as a form of audience entertainment in a performance setting. Definitions of what constitutes dance are dependent on social, cultural, aesthetic, artistic and moral constraints and range from functional movement (such as folk dance) to codified, virtuoso techniques such as ballet.
Another modern form of dance, free dance, emerged in the late 19th and early 20th centuries. This form of dance was structured to create a harmonious personality, which included features such as physical and spiritual freedom. Isadora Duncan was the first female dancer to argue for the "woman of the future" and to develop a new vector of choreography, drawing on Nietzsche's idea of the "supreme mind in the free mind".
Dance is a powerful impulse, but the art of dance is that impulse channeled by skillful performers into something that becomes intensely expressive and that may delight spectators who feel no wish to dance themselves. These two concepts of the art of dance—dance as a powerful impulse and dance as a skillfully choreographed art practiced largely by a professional few—are the two most important connecting ideas running through any consideration of the subject. In dance, the connection between the two concepts is stronger than in some other arts, and neither can exist without the other.
Choreography is the art of making dances, and the person who practices this art is called a choreographer.
Music is an art form which combines pitch, rhythm, and dynamics to create sound. It can be performed using a variety of instruments and styles and is divided into genres such as folk, jazz, hip hop, pop, and rock. As an art form, music can occur in live or recorded formats, and can be planned or improvised.
Starting in the 6th century BC, the Classical period of performing art began in Greece, ushered in by tragic poets such as Sophocles. These poets wrote plays which, in some cases, incorporated dance (see Euripides). The Hellenistic period began the widespread use of comedy.
However, by the 6th century AD, Western performing arts had largely come to an end as the Dark Ages began. Between the 9th and 14th centuries, performing art in the West was limited to religious historical enactments and morality plays, organized by the Church in celebration of holy days and other important events.
In the 15th century performing arts, along with the arts in general, saw a revival as the Renaissance began in Italy and spread throughout Europe. Plays, some of which incorporated dance, were performed, and Domenico da Piacenza is credited with the first use of the term "ballo" (in "De Arte Saltandi et Choreas Ducendi") instead of "danza" (dance) for his "baletti" or "balli". The term eventually became "ballet". The first ballet "per se" is thought to be Balthasar de Beaujoyeulx's Ballet Comique de la Reine (1581).
By the mid-16th century Commedia Dell'arte became popular in Europe, introducing the use of improvisation. This period also introduced the Elizabethan masque, featuring music, dance and elaborate costumes as well as professional theatrical companies in England. William Shakespeare's plays in the late 16th century developed from this new class of professional performance.
In 1597, the first opera, Dafne was performed and throughout the 17th century, opera would rapidly become the entertainment of choice for the aristocracy in most of Europe, and eventually for large numbers of people living in cities and towns throughout Europe.
The introduction of the proscenium arch in Italy during the 17th century established the traditional theatre form that persists to this day. Meanwhile, in England, the Puritans forbade acting, bringing a halt to performing arts that lasted until 1660. After that, women began to appear in both French and English plays. The French introduced formal dance instruction in the late 17th century.
It is also during this time that the first plays were performed in the American Colonies.
During the 18th century, the introduction of the popular opera buffa brought opera to the masses as an accessible form of performance. Mozart's "The Marriage of Figaro" and "Don Giovanni" are landmarks of the late 18th century opera.
At the turn of the 19th century, Beethoven and the Romantic movement ushered in a new era that led first to the spectacles of grand opera and then to the musical dramas of Giuseppe Verdi and the Gesamtkunstwerk (total work of art) of the operas of Richard Wagner leading directly to the music of the 20th century.
The 19th century was a period of growth for the performing arts across all social classes, with technical advances such as the introduction of gaslight to theatres, and the rise of burlesque, minstrel dancing, and variety theatre. In ballet, women made great progress in the previously male-dominated art.
Modern dance began in the late 19th century and early 20th century in response to the restrictions of traditional ballet.
Konstantin Stanislavski's "System" revolutionized acting in the early 20th century, and continues to have a major influence on actors of stage and screen to the current day. Both impressionism and modern realism were introduced to the stage during this period.
The arrival of Sergei Diaghilev's Ballets Russes (1909–1929) revolutionized ballet and the performing arts generally throughout the Western world, most importantly through Diaghilev's emphasis on collaboration, which brought choreographers, dancers, set designers/artists, composers and musicians together to revitalize and revolutionize ballet.
With the invention of the motion picture in the late 19th century by Thomas Edison and the growth of the motion picture industry in Hollywood in the early 20th century, film became a dominant performance medium throughout the 20th and 21st centuries.
Rhythm and blues, a cultural phenomenon of black America, rose to prominence in the early 20th century, influencing a range of later popular music styles internationally.
In the 1930s Jean Rosenthal introduced what would become modern stage lighting, changing the nature of the stage as the Broadway musical became a phenomenon in the United States.
Post-World War II performing arts were highlighted by the resurgence of both ballet and opera in the Western world.
Postmodernism dominated the performing arts to a large extent during the 1960s.
The earliest recorded theatrical event dates back to 2000 BC with the passion plays of Ancient Egypt. This story of the god Osiris was performed annually at festivals throughout the civilization, marking the known beginning of a long relationship between theatre and religion.
The most popular forms of theater in the medieval Islamic world were puppet theatre (which included hand puppets, shadow plays and marionette productions) and live passion plays known as "ta'ziya", where actors re-enact episodes from Muslim history. In particular, Shia Islamic plays revolved around the "shaheed" (martyrdom) of Ali's sons Hasan ibn Ali and Husayn ibn Ali. Live secular plays were known as "akhraja", recorded in medieval "adab" literature, though they were less common than puppetry and "ta'ziya" theater.
In Iran there are other forms of theatrical events such as "Naghali" (storytelling), "Ru-Howzi", "Siah-Bazi", "Parde-Khani", and "Mareke giri".
Folk theatre and dramatics can be traced to the religious ritualism of the Vedic peoples in the 2nd millennium BC. This folk theatre of the misty past was mixed with dance, food, ritualism, plus a depiction of events from daily life. The last element made it the origin of the classical theatre of later times. Many historians, notably D. D. Kosambi, Debiprasad Chattopadhyaya, Adya Rangacharaya, etc. have referred to the prevalence of ritualism amongst Indo-Aryan tribes in which some members of the tribe acted as if they were wild animals and some others were the hunters. Those who acted as mammals like goats, buffaloes, reindeer, monkeys, etc. were chased by those playing the role of hunters.
Bharata Muni (fl. 5th–2nd century BC) was an ancient Indian writer best known for writing the "Natya Shastra of Bharata", a theoretical treatise on Indian performing arts, including theatre, dance, acting, and music, which has been compared to Aristotle's "Poetics". Bharata is often known as the father of Indian theatrical arts. His "Natya Shastra" seems to be the first attempt to develop the technique, or rather art, of drama in a systematic manner. The Natya Shastra tells us not only what is to be portrayed in a drama, but how the portrayal is to be done. Drama, as Bharata Muni says, is the imitation of men and their doings ("loka-vritti"). As men and their doings have to be represented on the stage, so drama in Sanskrit is also known by the term "roopaka", which means portrayal.
The "Ramayana" and "Mahabharata" can be considered the first recognized plays that originated in India. These epics provided the inspiration to the earliest Indian dramatists, and they continue to do so today. Indian dramatists such as Bhāsa in the 2nd century BC wrote plays that were heavily inspired by the "Ramayana" and "Mahabharata".
Kālidāsa in the 1st century BC, is arguably considered to be ancient India's greatest dramatist. Three famous romantic plays written by Kālidāsa are the "Mālavikāgnimitram" ("Mālavikā and Agnimitra"), "Vikramōrvaśīyam" ("Pertaining to Vikrama and Urvashi"), and "Abhijñānaśākuntala" ("The Recognition of Shakuntala"). The last was inspired by a story in the "Mahabharata" and is the most famous. It was the first to be translated into English and German. In comparison to Bhāsa, who drew heavily from the epics, Kālidāsa can be considered an original playwright.
The next great Indian dramatist was Bhavabhuti (c. 7th century). He is said to have written the following three plays: "Malati-Madhava", "Mahaviracharita" and "Uttar Ramacharita". Among these three, the last two cover between them, the entire epic of "Ramayana". The powerful Indian emperor Harsha (606–648) is credited with having written three plays: the comedy "Ratnavali", "Priyadarsika", and the Buddhist drama "Nagananda". Many other dramatists followed during the Middle Ages.
There were many performing art forms in the southern part of India. Kerala, for instance, is a state with distinct art forms such as Koodiyattam, Nangyarkoothu, Kathakali, Chakyar koothu and Thirayattam, and has produced many prominent artists, such as Painkulam Raman Chakyar.
There are references to theatrical entertainments in China as early as 1500 BC during the Shang dynasty; they often involved music, clowning and acrobatic displays.
The Tang dynasty is sometimes known as "The Age of 1000 Entertainments". During this era, Emperor Xuanzong formed an acting school known as the Children of the Pear Garden to produce a form of drama that was primarily musical.
During the Han Dynasty, shadow puppetry first emerged as a recognized form of theatre in China. There were two distinct forms of shadow puppetry: Cantonese (southern) and Pekingese (northern). The two styles were differentiated by the method of making the puppets and the positioning of the rods on the puppets, as opposed to the type of play performed by the puppets. Both styles generally performed plays depicting great adventure and fantasy; rarely was this very stylized form of theatre used for political propaganda. Cantonese shadow puppets were the larger of the two. They were built using thick leather that created more substantial shadows. Symbolic color was also very prevalent; a black face represented honesty, a red one bravery. The rods used to control Cantonese puppets were attached perpendicular to the puppets' heads. Thus, they were not seen by the audience when the shadow was created. Pekingese puppets were more delicate and smaller. They were created out of thin, translucent leather usually taken from the belly of a donkey. They were painted with vibrant paints, and thus cast a very colorful shadow. The thin rods that controlled their movements were attached to a leather collar at the neck of the puppet. The rods ran parallel to the body of the puppet, then turned at a ninety-degree angle to connect to the neck. While these rods were visible when the shadow was cast, they lay outside the shadow of the puppet; thus they did not interfere with the appearance of the figure. The rods were attached at the neck to facilitate the use of multiple heads with one body. When the heads were not being used, they were stored in a muslin book or fabric-lined box. The heads were always removed at night. This was in keeping with the old superstition that if left intact, the puppets would come to life at night. Some puppeteers went so far as to store the heads in one book and the bodies in another, to further reduce the possibility of reanimating puppets.
Shadow puppetry is said to have reached its highest point of artistic development in the 11th century before becoming a tool of the government.
In the Song dynasty, there were many popular plays involving acrobatics and music. These developed in the Yuan dynasty into a more sophisticated form with a four- or five-act structure. Yuan drama spread across China and diversified into numerous regional forms, the best known of which is Beijing Opera, which is still popular today.
In Thailand, it has been a tradition from the Middle Ages to stage plays based on plots drawn from Indian epics. In particular, the theatrical version of Thailand's national epic "Ramakien", a version of the Indian "Ramayana", remains popular in Thailand even today.
In Cambodia, inscriptions dating back to the 6th century AD indicate evidence of dancers at a local temple and of puppetry used for religious plays. At the ancient capital Angkor Wat, stories from the Indian epics "Ramayana" and "Mahabharata" have been carved on the walls of temples and palaces. Similar reliefs are found at Borobudur in Indonesia.
During the 14th century, there were small companies of actors in Japan who performed short, sometimes vulgar comedies. A director of one of these companies, Kan'ami (1333–1384), had a son, Zeami Motokiyo (1363–1443) who was considered one of the finest child actors in Japan. When Kan'ami's company performed for Ashikaga Yoshimitsu (1358–1408), the Shōgun of Japan, he implored Zeami to have a court education for his arts. After Zeami succeeded his father, he continued to perform and adapt his style into what is today Noh. A mixture of pantomime and vocal acrobatics, this style has fascinated the Japanese for hundreds of years.
Japan, after a long period of civil wars and political disarray, was unified and at peace primarily due to shōgun Tokugawa Ieyasu (1543–1616). However, alarmed at increasing Christian growth, he cut off contact from Japan to Europe and China and outlawed Christianity. When peace did come, a flourishing of cultural influence and a growing merchant class demanded its own entertainment. The first form of theatre to flourish was Ningyō jōruri (commonly referred to as Bunraku). The founder of and main contributor to Ningyō jōruri, Chikamatsu Monzaemon (1653–1725), turned his form of theatre into a true art form. Ningyō jōruri is a highly stylized form of theatre using puppets, today about one-third the size of a human. The men who control the puppets train their entire lives to become master puppeteers; only then may they operate the puppet's head and right arm and choose to show their faces during the performance. The other puppeteers, controlling the less important limbs of the puppet, cover themselves and their faces in a black suit, to imply their invisibility. The dialogue is handled by a single person, who uses varied tones of voice and speaking manners to simulate different characters. Chikamatsu wrote thousands of plays during his lifetime, most of which are still used today.
Kabuki began shortly after Bunraku, legend has it by an actress named Okuni, who lived around the end of the 16th century. Most of Kabuki's material came from Nō and Bunraku, and its erratic dance-type movements are also an effect of Bunraku. However, Kabuki is less formal and more distant than Nō, yet very popular among the Japanese public. Actors are trained in many varied things including dancing, singing, pantomime, and even acrobatics. Kabuki was first performed by young girls, then by young boys, and by the end of the 16th century, Kabuki companies consisted of all men. The men who portrayed women on stage were specifically trained to elicit the essence of a woman in their subtle movements and gestures. | https://en.wikipedia.org/wiki?curid=22938 |
Physics
Physics (from Greek "phýsis" 'nature') is the natural science that studies matter, its motion and behavior through space and time, and the related entities of energy and force. Physics is one of the most fundamental scientific disciplines, and its main goal is to understand how the universe behaves.
Physics is one of the oldest academic disciplines and, through its inclusion of astronomy, perhaps "the" oldest. Over much of the past two millennia, physics, chemistry, biology, and certain branches of mathematics were a part of natural philosophy, but during the Scientific Revolution in the 17th century these natural sciences emerged as unique research endeavors in their own right. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms studied by other sciences and suggest new avenues of research in academic disciplines such as mathematics and philosophy.
Advances in physics often enable advances in new technologies. For example, advances in the understanding of electromagnetism, solid-state physics, and nuclear physics led directly to the development of new products that have dramatically transformed modern-day society, such as television, computers, domestic appliances, and nuclear weapons; advances in thermodynamics led to the development of industrialization; and advances in mechanics inspired the development of calculus.
Astronomy is one of the oldest natural sciences. Early civilizations dating back before 3000 BCE, such as the Sumerians, ancient Egyptians, and the Indus Valley Civilisation, had a predictive knowledge and a basic understanding of the motions of the Sun, Moon, and stars. The stars and planets, believed to represent gods, were often worshipped. While the explanations for the observed positions of the stars were often unscientific and lacking in evidence, these early observations laid the foundation for later astronomy, as the stars were found to traverse great circles across the sky, which however did not explain the positions of the planets.
According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. Egyptian astronomers left monuments showing knowledge of the constellations and the motions of the celestial bodies, while Greek poet Homer wrote of various celestial objects in his "Iliad" and "Odyssey"; later Greek astronomers provided names, which are still used today, for most constellations visible from the Northern Hemisphere.
Natural philosophy has its origins in Greece during the Archaic period (650 BCE – 480 BCE), when pre-Socratic philosophers like Thales rejected non-naturalistic explanations for natural phenomena and proclaimed that every event had a natural cause. They proposed ideas verified by reason and observation, and many of their hypotheses proved successful in experiment; for example, atomism was found to be correct approximately 2000 years after it was proposed by Leucippus and his pupil Democritus.
The Western Roman Empire fell in the fifth century, and this resulted in a decline in intellectual pursuits in the western part of Europe. By contrast, the Eastern Roman Empire (also known as the Byzantine Empire) resisted the attacks from the barbarians, and continued to advance various fields of learning, including physics.
In the sixth century Isidore of Miletus created an important compilation of Archimedes' works that are copied in the Archimedes Palimpsest.
In sixth-century Europe John Philoponus, a Byzantine scholar, questioned Aristotle's teaching of physics and noted its flaws. He introduced the theory of impetus. Aristotle's physics was not scrutinized until Philoponus appeared; unlike Aristotle, who based his physics on verbal argument, Philoponus relied on observation. On Aristotle's physics Philoponus wrote: "But this is completely erroneous, and our view may be corroborated by actual observation more effectively than by any sort of verbal argument. For if you let fall from the same height two weights of which one is many times as heavy as the other, you will see that the ratio of the times required for the motion does not depend on the ratio of the weights, but that the difference in time is a very small one. And so, if the difference in the weights is not considerable, that is, if one is, let us say, double the other, there will be no difference, or else an imperceptible difference, in time, though the difference in weight is by no means negligible, with one body weighing twice as much as the other." Philoponus' criticism of Aristotelian principles of physics served as an inspiration for Galileo Galilei ten centuries later. | https://en.wikipedia.org/wiki?curid=22939 |
Papua New Guinea
Papua New Guinea (PNG), officially the Independent State of Papua New Guinea, is a sovereign state in Oceania that occupies the eastern half of the island of New Guinea and its offshore islands in Melanesia, a region of the southwestern Pacific Ocean north of Australia. Its capital, located along its southeastern coast, is Port Moresby. The western half of New Guinea forms the Indonesian provinces of Papua and West Papua. It is the world's third-largest island country.
At the national level, after being ruled by three external powers since 1884, Papua New Guinea established its sovereignty in 1975. This followed nearly 60 years of Australian administration, which started during World War I. It became an independent Commonwealth realm in 1975 with Elizabeth II as its queen. It also became a member of the Commonwealth of Nations in its own right.
Papua New Guinea is one of the most culturally diverse countries in the world. It is also one of the most rural, as only 18% of its people live in urban centres. There are 851 known languages in the country, of which 11 now have no known speakers. Most of the population of more than 8,000,000 people lives in customary communities, which are as diverse as the languages. The country is one of the world's least explored, culturally and geographically. It is known to have numerous groups of uncontacted peoples, and researchers believe there are many undiscovered species of plants and animals in the interior.
Papua New Guinea is classified as a developing economy by the International Monetary Fund. Strong growth in Papua New Guinea's mining and resource sector led to the country becoming the sixth-fastest-growing economy in the world in 2011. Growth was expected to slow once major resource projects came on line in 2015. Mining remains a major economic factor, however. Local and national governments are discussing the potential of resuming mining operations at the Panguna mine in Bougainville Province, which has been closed since the civil war in the 1980s–1990s. Nearly 40% of the population lives a self-sustainable natural lifestyle with no access to global capital.
Most of the people still live in strong traditional social groups based on farming. Their social lives combine traditional religion with modern practices, including primary education. These societies and clans are explicitly acknowledged by the Papua New Guinea Constitution, which expresses the wish for "traditional villages and communities to remain as viable units of Papua New Guinean society" and protects their continuing importance to local and national community life. The nation has been an observer state in the Association of Southeast Asian Nations (ASEAN) since 1976, and has filed its application for full membership status. It is a full member of the Pacific Community (SPC), the Pacific Islands Forum (formerly South Pacific Forum) and the Commonwealth of Nations.
The word "papua" is derived from an old local term of uncertain origin. "New Guinea" ("Nueva Guinea") was the name coined by the Spanish explorer Yñigo Ortiz de Retez. In 1545, he noted the resemblance of the people to those he had earlier seen along the Guinea coast of Africa. Guinea, in its turn, is etymologically derived from the Portuguese word "Guiné". The name is one of several toponyms sharing similar etymologies, ultimately meaning "land of the blacks" or similar meanings, in reference to the dark skin of the inhabitants.
Archaeological evidence indicates that humans first arrived in Papua New Guinea around 42,000 to 45,000 years ago. They were descendants of migrants out of Africa, in one of the early waves of human migration.
Agriculture was independently developed in the New Guinea highlands around 7000 BC, making it one of the few areas in the world where people independently domesticated plants. A major migration of Austronesian-speaking peoples to coastal regions of New Guinea took place around 500 BC. This has been correlated with the introduction of pottery, pigs, and certain fishing techniques.
In the 18th century, traders brought the sweet potato to New Guinea, where it was adopted and became part of the staples. Portuguese traders had obtained it from South America and introduced it to the Moluccas. The far higher crop yields from sweet potato gardens radically transformed traditional agriculture and societies. Sweet potato largely supplanted the previous staple, taro, and resulted in a significant increase in population in the highlands.
Although by the late 20th century headhunting and cannibalism had been practically eradicated, in the past they were practised in many parts of the country as part of rituals related to warfare and taking in enemy spirits or powers. In 1901, on Goaribari Island in the Gulf of Papua, missionary Harry Dauncey found 10,000 skulls in the island's long houses, a demonstration of past practices. According to Marianna Torgovnick, writing in 1991, "The most fully documented instances of cannibalism as a social institution come from New Guinea, where head-hunting and ritual cannibalism survived, in certain isolated areas, into the Fifties, Sixties, and Seventies, and still leave traces within certain social groups."
Little was known in Europe about the island until the 19th century, although Portuguese and Spanish explorers, such as Dom Jorge de Menezes and Yñigo Ortiz de Retez, had encountered it as early as the 16th century. Traders from Southeast Asia had visited New Guinea beginning 5,000 years ago to collect bird-of-paradise plumes.
The country's dual name results from its complex administrative history before independence.
In the nineteenth century, Germany ruled the northern half of the country for some decades, beginning in 1884, as a colony named German New Guinea. In 1914 after the outbreak of World War I, Australian forces landed and captured German New Guinea in a small military campaign and occupied it throughout the war. After the war, in which Germany and the Central Powers were defeated, the League of Nations authorised Australia to administer this area as a League of Nations mandate territory that became the Territory of New Guinea.
The southern half of the country had been colonised in 1884 by the United Kingdom as British New Guinea. With the Papua Act 1905, the UK transferred this territory to the newly formed Commonwealth of Australia, which took on its administration. Additionally, from 1905, British New Guinea was renamed as the Territory of Papua. In contrast to establishing an Australian mandate in former German New Guinea, the League of Nations determined that Papua was an external territory of the Australian Commonwealth; as a matter of law it remained a British possession. The difference in legal status meant that until 1949, Papua and New Guinea had entirely separate administrations, both controlled by Australia. These conditions contributed to the complexity of organising the country's post-independence legal system.
During World War II, the New Guinea campaign (1942–1945) was one of the major military campaigns and conflicts between Japan and the Allies. Approximately 216,000 Japanese, Australian, and US servicemen died. After World War II and the victory of the Allies, the two territories were combined into the Territory of Papua and New Guinea. This was later referred to as "Papua New Guinea".
The natives of Papua appealed to the United Nations for oversight and independence. The nation established independence from Australia on 16 September 1975, becoming a Commonwealth realm, continuing to share Queen Elizabeth II as its head of state. It maintains close ties with Australia, which continues to be its largest aid donor. Papua New Guinea was admitted to membership in the United Nations on 10 October 1975.
A secessionist revolt in 1975–76 on Bougainville Island resulted in an eleventh-hour modification of the draft Constitution of Papua New Guinea to allow for Bougainville and the other eighteen districts to have quasi-federal status as provinces. A renewed uprising on Bougainville started in 1988 and claimed 20,000 lives until it was resolved in 1997. Bougainville had been the primary mining region of the country, generating 40% of the national budget. The native peoples felt they were bearing the adverse environmental effects of the mining, which poisoned the land, water and air, without gaining a fair share of the profits.
The government and rebels negotiated a peace agreement that established the Bougainville Autonomous District and Province. The autonomous Bougainville elected Joseph Kabui as president in 2005, who served until his death in 2008. He was succeeded by his deputy John Tabinaman as acting president while an election to fill the unexpired term was organised. James Tanis won that election in December 2008 and served until the inauguration of John Momis, the winner of the 2010 elections. As part of the current peace settlement, a non-binding independence referendum was held, between 23 November and 7 December 2019. The referendum question was a choice between greater autonomy within Papua New Guinea and full independence for Bougainville, and voters voted overwhelmingly (98.31%) for independence.
Numerous Chinese have worked and lived in Papua New Guinea, establishing Chinese-majority communities. Chinese merchants became established in the islands before European exploration. Anti-Chinese rioting involving tens of thousands of people broke out in May 2009. The initial spark was a fight between ethnic Chinese and indigenous workers at a nickel factory under construction by a Chinese company. Native resentment against Chinese ownership of numerous small businesses and their commercial monopoly in the islands led to the rioting. The Chinese have long been merchants in Papua New Guinea.
From March to April 2018, a chain of earthquakes hit Papua New Guinea, causing significant damage. Various nations from Oceania, as well as Australia, the Philippines and Timor-Leste, immediately sent aid to the country.
Papua New Guinea is a Commonwealth realm with Elizabeth II as Queen of Papua New Guinea. The constitutional convention, which prepared the draft constitution, and Australia, the outgoing metropolitan power, had thought that Papua New Guinea would not remain a monarchy. The founders, however, considered that imperial honours had a cachet. The monarch is represented by the Governor-General of Papua New Guinea, currently Bob Dadae. Papua New Guinea and the Solomon Islands are unusual among Commonwealth realms in that governors-general are elected by the legislature, rather than chosen by the executive branch.
The Prime Minister heads the cabinet, which consists of 31 MPs from the ruling coalition and makes up the government. The current prime minister is James Marape. The unicameral National Parliament has 111 seats, of which 22 are occupied by the governors of the 22 provinces and the National Capital District (NCD). Members of parliament are elected when the prime minister asks the governor-general to call a national election, at most five years after the previous one.
In the early years of independence, the instability of the party system led to frequent votes of no confidence in parliament, with resulting changes of government, while referral to the electorate through national elections occurred only every five years. In recent years, successive governments have passed legislation preventing such votes sooner than 18 months after a national election and within 12 months of the next election. In December 2012, the first two (of three) readings were passed to prevent votes of no confidence occurring within the first 30 months. This restriction on votes of no confidence has arguably resulted in greater stability, although perhaps at the cost of reducing the accountability of the executive branch of government.
Elections in PNG attract numerous candidates. After independence in 1975, members were elected by the first-past-the-post system, with winners frequently gaining less than 15% of the vote. Electoral reforms in 2001 introduced the Limited Preferential Vote system (LPV), a version of the Alternative Vote. The 2007 general election was the first to be conducted using LPV.
In 2011 there was a constitutional crisis between the parliament-elect prime minister, Peter O'Neill (voted into office by a large majority of MPs), and Sir Michael Somare, who was deemed by the supreme court (in a December opinion, 3:2) to retain office. The stand-off between parliament and the supreme court continued until the July 2012 national elections, with legislation passed effectively removing the chief justice and subjecting the supreme court members to greater control by the legislature, as well as a series of other laws, for example limiting the age for a prime minister. The confrontation reached a peak when the Deputy Prime Minister entered the supreme court during a hearing, escorted by police, ostensibly to arrest the Chief Justice. There was strong pressure among some MPs to defer the national elections for a further six months to a year, although their powers to do so were highly questionable.
The parliament-elect prime minister and other cooler-headed MPs carried the votes for the writs for the new election to be issued, slightly late, but for the election itself to occur on time, thereby avoiding a continuation of the constitutional crisis. The crisis was tense at times but largely restricted to the political and legal fraternity, plus some police factions; the public and public service (including most police and military) stood back. With increased telecommunications access and use of social media (notably Facebook and mobile phones), the public and students played some part in helping maintain restraint and demanding that the leadership adhere to constitutional processes. They insisted on having the elections so that the people could say who should be their legitimate representatives for the next five years.
Under a 2002 amendment, the leader of the party winning the largest number of seats in the election is invited by the governor-general to form the government, if he can muster the necessary majority in parliament. The process of forming such a coalition in PNG, where parties do not have much ideology, involves considerable horse-trading right up until the last moment. Peter O'Neill emerged as Papua New Guinea's prime minister after the July 2012 election, and formed a government with Leo Dion, the former Governor of East New Britain Province, as deputy prime minister.
The unicameral Parliament enacts legislation in the same manner as in other Commonwealth realms that use the Westminster system of government. The cabinet collectively agree government policy, then the relevant minister introduces bills to Parliament, depending on which government department is responsible for implementation of a particular law. Back bench members of parliament can also introduce bills. Parliament debates bills, and if approved the bill is forwarded to the Governor-General for Royal assent, following which it becomes law.
All ordinary statutes enacted by Parliament must be consistent with the Constitution. The courts have jurisdiction to rule on the constitutionality of statutes, both in disputes before them and on a reference where there is no dispute but only an abstract question of law. Unusually among developing countries, the judicial branch of government in Papua New Guinea has remained remarkably independent, and successive executive governments have continued to respect its authority.
The "underlying law" (Papua New Guinea's common law) consists of principles and rules of common law and equity in English common law as it stood on 16 September 1975 (the date of independence), and thereafter the decisions of PNG's own courts. The courts are directed by the Constitution and, latterly, the "Underlying Law Act", to take note of the "custom" of traditional communities. They are to determine which customs are common to the whole country and may be declared also to be part of the underlying law. In practice, this has proved extremely difficult and has been largely neglected. Statutes are largely adapted from overseas jurisdictions, primarily Australia and England. Advocacy in the courts follows the adversarial pattern of other common-law countries.
This national court system, used in towns and cities, is supported by a village court system in the more remote areas. The law underpinning the village courts is 'customary law'.
In foreign policy, Papua New Guinea is a member of the Commonwealth of Nations, Pacific Community (SPC), Pacific Islands Forum, and the Melanesian Spearhead Group (MSG) of countries. It was accorded Observer status within ASEAN in 1976, followed later by Special Observer status in 1981. It is also a member of APEC and an ACP country, associated with the European Union.
Papua New Guinea supported Indonesia's control of Western New Guinea, the focus of the Papua conflict, where numerous human rights violations have reportedly been committed by the Indonesian security forces. In September 2017, Papua New Guinea rejected the West Papuan Independence Petition in the UN General Assembly.
The Papua New Guinea Defence Force (PNGDF) is the military organisation responsible for the defence of Papua New Guinea. It consists of three wings. The Land Element, a land force consisting of the Royal Pacific Islands Regiment, a small special forces unit, a battalion of engineers, and three other small units primarily dealing with signals and health, as well as a military academy, is concerned with defence of the nation on land. The Air Element is a small, underfunded squadron consisting of two utility aircraft and two leased helicopters, with another three utility aircraft and two trainers on order; its present purpose is transportation for the other military wings. The Maritime Element is a small navy consisting of four Pacific-class patrol boats, three ex-Australian Balikpapan-class landing craft, and one Guardian-class patrol boat. One of the landing craft is used as a training ship. Three more Guardian-class patrol boats are under construction in Australia, to replace the old Pacific-class vessels. The main tasks of the Maritime Element are patrol of inshore waters and transport of the Land Element. Papua New Guinea has such a large Exclusive Economic Zone that patrols by the small Pacific-class patrol boats, which are often unserviceable due to underfunding, are ineffective, so the Maritime Element is heavily reliant on satellite imagery for surveillance of its waters. This problem will be partially corrected when all of the larger Guardian-class patrol boats enter service.
Papua New Guinea is often ranked as likely the worst place in the world for violence against women. A 2013 study in "The Lancet" found that 27% of men on Bougainville Island, Papua New Guinea, reported having raped a non-partner, while 14.1% reported having committed gang rape. According to UNICEF, nearly half of reported rape victims are under 15 years of age and 13% are under 7 years of age. A report by ChildFund Australia, citing former Parliamentarian Dame Carol Kidu, claimed 50% of those seeking medical help after rape are under 16, 25% are under 12, and 10% are under 8.
The 1971 Sorcery Act imposed a penalty of up to 2 years in prison for the practice of "black" magic, until the Act was repealed in 2013. An estimated 50–150 alleged witches are killed each year in Papua New Guinea. There are also no protections given to LGBT citizens in the country. Homosexual acts are prohibited by law in Papua New Guinea.
Papua New Guinea is divided into four regions, which are not the primary administrative divisions but are quite significant in many aspects of government, commercial, sporting and other activities.
The nation has 22 province-level divisions: twenty provinces, the Autonomous Region of Bougainville and the National Capital District. Each province is divided into one or more districts, which in turn are divided into one or more Local-Level Government areas.
Provinces are the primary administrative divisions of the country. Provincial governments are branches of the national government as Papua New Guinea is not a federation of provinces. The province-level divisions are as follows:
In 2009, Parliament approved the creation of two additional provinces: Hela Province, consisting of part of the existing Southern Highlands Province, and Jiwaka Province, formed by dividing Western Highlands Province. Jiwaka and Hela officially became separate provinces on 17 May 2012. The declaration of Hela and Jiwaka is a result of the largest liquefied natural gas (LNG) project in the country, which is situated in both provinces. The government set 23 November 2019 as the voting date for a non-binding independence referendum in the Bougainville autonomous region. In December 2019, the autonomous region voted overwhelmingly for independence, with 98.31% voting in favour of full independence and around 1.7% voting in favour of greater autonomy.
At 462,840 km2, Papua New Guinea is the world's 54th-largest country and the third-largest island country. Including all its islands, it lies between latitudes 0° and 12°S, and longitudes 140° and 160°E. It has an exclusive economic zone of 2,402,288 km2.
Located north of the Australian mainland, the country's geography is diverse and, in places, extremely rugged. A spine of mountains, the New Guinea Highlands, runs the length of the island of New Guinea, forming a populous highlands region mostly covered with tropical rainforest, and the long Papuan Peninsula, known as the 'Bird's Tail'. Dense rainforests can be found in the lowland and coastal areas, as well as very large wetland areas surrounding the Sepik and Fly rivers. This terrain has made it difficult for the country to develop transportation infrastructure. Some areas are accessible only on foot or by aeroplane. The highest peak is Mount Wilhelm at 4,509 metres (14,793 ft). Papua New Guinea is surrounded by coral reefs which are under close watch, in the interests of preservation.
The country is situated on the Pacific Ring of Fire, at the point of collision of several tectonic plates. There are a number of active volcanoes, and eruptions are frequent. Earthquakes are relatively common, sometimes accompanied by tsunamis.
The mainland of the country is the eastern half of New Guinea island, where the largest towns are also located, including Port Moresby (capital) and Lae; other major islands within Papua New Guinea include New Ireland, New Britain, Manus and Bougainville.
Papua New Guinea is one of the few regions close to the equator that experience snowfall, which occurs in the most elevated parts of the mainland.
The border between Papua New Guinea and Indonesia was confirmed by treaty with Australia before independence in 1974. The land border comprises a segment of the 141° E meridian from the north coast southwards to where it meets the Fly River flowing east, then a short curve of the river's thalweg to where it meets the 141°01'10" E meridian flowing west, then southwards to the south coast. The 141° E meridian formed the entire eastern boundary of Dutch New Guinea according to its 1828 annexation proclamation. In 1895 the Dutch and British agreed to a territorial exchange, bringing the entire left bank of the Fly River into British New Guinea and moving the southern border east to the Torasi Estuary.
The maritime boundary with Australia was confirmed by a treaty in 1978. In the Torres Strait it runs close to the mainland of New Guinea, keeping the adjacent North Western Torres Strait Islands (Dauan, Boigu and Saibai) under Australian sovereignty. Maritime boundaries with the Solomon Islands were confirmed by a 1989 treaty.
Papua New Guinea is part of the Australasian realm, which also includes Australia, New Zealand, eastern Indonesia, and several Pacific island groups, including the Solomon Islands and Vanuatu.
Geologically, the island of New Guinea is a northern extension of the Indo-Australian tectonic plate, forming part of a single land mass which is Australia-New Guinea (also called "Sahul" or "Meganesia"). It is connected to the Australian segment by a shallow continental shelf across the Torres Strait, which in former ages lay exposed as a land bridge, particularly during ice ages when sea levels were lower than at present.
Consequently, many species of birds and mammals found on New Guinea have close genetic links with corresponding species found in Australia. One notable feature in common for the two landmasses is the existence of several species of marsupial mammals, including some kangaroos and possums, which are not found elsewhere. Papua New Guinea is a megadiverse country.
Many of the other islands within PNG territory, including New Britain, New Ireland, Bougainville, the Admiralty Islands, the Trobriand Islands, and the Louisiade Archipelago, were never linked to New Guinea by land bridges. As a consequence, they have their own flora and fauna; in particular, they lack many of the land mammals and flightless birds that are common to New Guinea and Australia.
Australia and New Guinea are portions of the ancient supercontinent of Gondwana, which started to break into smaller continents in the Cretaceous period, 65–130 million years ago. Australia finally broke free from Antarctica about 45 million years ago. All the Australasian lands are home to the Antarctic flora, descended from the flora of southern Gondwana, including the coniferous podocarps and "Araucaria" pines, and the broad-leafed southern beech ("Nothofagus"). These plant families are still present in Papua New Guinea.
As the Indo-Australian Plate (which includes landmasses of India, Australia, and the Indian Ocean floor in between) drifts north, it collides with the Eurasian Plate. The collision of the two plates pushed up the Himalayas, the Indonesian islands, and New Guinea's Central Range. The Central Range is much younger and higher than the mountains of Australia, so high that it is home to rare equatorial glaciers. New Guinea is part of the humid tropics, and many Indomalayan rainforest plants spread across the narrow straits from Asia, mixing together with the old Australian and Antarctic floras.
PNG includes a number of terrestrial ecoregions.
An Australian-led expedition discovered three new species of mammals in the forests of Papua New Guinea: a small wallaby, a large-eared mouse and a shrew-like marsupial. The expedition was also successful in capturing photographs and video footage of some other rare animals, such as the Tenkile tree kangaroo and the Weimang tree kangaroo.
At current rates of deforestation, more than half of Papua New Guinea's forests could be lost or seriously degraded by 2021, according to a satellite study of the region. Nearly one quarter of Papua New Guinea's rainforests were damaged or destroyed between 1972 and 2002.
On 25 February 2018, an earthquake of magnitude 7.5 and depth of 35 kilometres struck the middle of Papua New Guinea. The worst of the damage was centred around the Southern Highlands region. As of 1 March there were 31 reported deaths, and that number was expected to rise.
Papua New Guinea is richly endowed with natural resources, including minerals and renewable resources such as forests and marine stocks (including a large portion of the world's major tuna stocks), and in some parts agriculture. The rugged terrain (including high mountain ranges and valleys, swamps and islands) and the high cost of developing infrastructure, combined with other factors (including serious law and order problems in some centres and the system of customary land title), make it difficult for outside developers. Local developers are handicapped by years of deficient investment in education, health, ICT and access to finance. Agriculture, for subsistence and cash crops, provides a livelihood for 85% of the population and continues to provide some 30% of GDP. Mineral deposits, including gold, oil, and copper, account for 72% of export earnings. Oil palm production has grown steadily over recent years (largely from estates and with extensive outgrower output), with palm oil now the main agricultural export. Coffee, produced largely in the Highlands provinces, remains the major export crop for participating households; it is followed by cocoa and coconut oil/copra from the coastal areas, each largely produced by smallholders; tea, produced on estates; and rubber. The Iagifu/Hedinia Field was discovered in 1986 in the Papuan fold and thrust belt.
Former Prime Minister Sir Mekere Morauta tried to restore integrity to state institutions, stabilise the kina, restore stability to the national budget, privatise public enterprises where appropriate, and ensure ongoing peace on Bougainville following the 1997 agreement which ended Bougainville's secessionist unrest. The Morauta government had considerable success in attracting international support, specifically gaining the backing of the International Monetary Fund (IMF) and the World Bank in securing development assistance loans. Significant challenges face Prime Minister Sir Michael Somare, including gaining further investor confidence, continuing efforts to privatise government assets, and maintaining the support of members of Parliament.
In March 2006, the United Nations Development Programme called for Papua New Guinea's designation as a developing country to be downgraded to least-developed country because of protracted economic and social stagnation. However, an evaluation by the IMF in late 2008 found that "a combination of prudent fiscal and monetary policies, and high global prices for mineral commodity exports, have underpinned Papua New Guinea's recent buoyant economic growth and macroeconomic stability". By 2012 PNG had enjoyed a decade of positive economic growth, at over 6% annually since 2007, even during the global financial crisis years of 2008–09. PNG's real GDP growth rate was 8.9% in 2011 and 9.2% in 2012, according to the Asian Development Bank. As of 2019, PNG's real GDP growth rate had dropped to 3.8%, with an inflation rate of 4.3%.
This economic growth has been primarily attributed to strong commodity prices, particularly for minerals but also for agricultural products, with high demand for mineral products largely sustained even during the crisis by buoyant Asian markets and a booming mining sector. Growth was further supported by a buoyant outlook and the construction phase of natural gas exploration, production, and export in liquefied form (liquefied natural gas, or "LNG") by LNG carriers, all of which require multibillion-dollar investments in exploration, production wells, pipelines, storage, liquefaction plants, port terminals and LNG tanker ships.
The first major gas project was the PNG LNG joint venture. ExxonMobil is operator of the joint venture, which also comprises the PNG company Oil Search, Santos, Kumul Petroleum Holdings (Papua New Guinea's national oil and gas company), JX Nippon Oil and Gas Exploration, the PNG government's Mineral Resources Development Company, and Petromin PNG Holdings. The project is an integrated development that includes gas production and processing facilities in the Hela, Southern Highlands and Western Provinces of Papua New Guinea, as well as liquefaction and storage facilities (located northwest of Port Moresby) with a capacity of 6.9 million tonnes per year. An extensive network of pipelines connects the facilities. It is the largest private-sector investment in the history of PNG.
A second major project is based on initial rights held by the French oil and gas major Total S.A. and the US company InterOil Corp. (IOC), which have partly combined their assets after Total agreed in December 2013 to purchase 61.3% of IOC's Antelope and Elk gas field rights, with the plan to develop them starting in 2016, including the construction of a liquefaction plant to allow export of LNG. Total S.A. has separately another joint operating agreement with Oil Search.
Further gas and mineral projects are proposed (including the large Wafi-Golpu copper-gold mine), with extensive exploration ongoing across the country.
The PNG government's long-term Vision 2050 and shorter-term policy documents, including the 2013 Budget and the 2014 Responsible Sustainable Development Strategy, emphasise the need for a more diverse economy based upon sustainable industries, one that avoids the effects of "Dutch disease", in which major resource extraction projects undermine other industries. This has occurred in many countries experiencing oil or other mineral booms, notably in West Africa, undermining much of their agriculture, manufacturing and tourism sectors, and with them broad-based employment prospects. Measures have been taken to mitigate these effects, including the establishment of a sovereign wealth fund, partly to stabilise revenue and expenditure flows. Much will depend, however, upon the readiness to make real reforms to the effective use of revenue, tackling rampant corruption and empowering households and businesses to access markets and services and to develop a more buoyant economy with lower costs, especially for small to medium-sized enterprises. One major project conducted through the PNG Department for Community Development suggested that other pathways to sustainable development should be considered.
The Institute of National Affairs, an independent PNG policy think tank, provides a report on the business and investment environment of Papua New Guinea every five years, based upon a survey of large and small, local and overseas companies. The reports highlight law and order problems and corruption as the worst impediments, followed by the poor state of transport, power and communications infrastructure.
The PNG legislature has enacted laws in which a type of tenure called "customary land title" is recognised, meaning that the traditional lands of the indigenous peoples have some legal basis to inalienable tenure. This customary land notionally covers most of the usable land in the country (some 97% of total land area); alienated land is either held privately under state lease or is government land. Freehold title (also known as fee simple) can only be held by Papua New Guinean citizens.
Only some 3% of the land of Papua New Guinea is in private hands; this is privately held under 99-year state lease, or it is held by the State. There is virtually no freehold title; the few existing freeholds are automatically converted to state lease when they are transferred between vendor and purchaser. Unalienated land is owned under customary title by traditional landowners. The precise nature of the seisin varies from one culture to another. Many writers portray land as being in the communal ownership of traditional clans; however, closer studies usually show that the smallest portions of land, whose ownership cannot be further divided, are held by the individual heads of extended families and their descendants, or by their descendants alone if the head has recently died.
This is a matter of vital importance because a key problem of economic development is identifying the membership of customary landowning groups, and thus the true owners of land. Disputes between mining and forestry companies and landowner groups often devolve on the issue of whether the companies entered into contractual relations for the use of land with the true owners. Customary property, usually land, cannot be devised by will; it can only be inherited according to the custom of the deceased's people. The Lands Act was amended in 2010, along with the Land Group Incorporation Act, with the intent of improving the management of state land and the mechanisms for dispute resolution over land, and of enabling customary landowners to better access finance and enter into partnerships over portions of their land, should they seek to develop it for urban or rural economic activities. The Land Group Incorporation Act requires more specific identification of the customary landowners than hitherto, and their more specific authorisation, before any land arrangements are determined. A major issue in recent years has been a land grab misusing the lease-leaseback provision of the Land Act, notably through "Special Agricultural and Business Leases" (SABLs), to acquire vast tracts of customary land, purportedly for agricultural projects but in almost all cases as a back-door mechanism for securing tropical forest resources for logging. This circumvents the more exacting requirements of the Forest Act for securing timber permits, which must comply with sustainability requirements, be competitively secured, and have the approval of the customary landowners. Following a national outcry, these SABLs were made the subject of a Commission of Inquiry, established in mid-2011, whose report is still awaited for initial presentation to the Prime Minister and Parliament.
Papua New Guinea is one of the most heterogeneous nations in the world. There are hundreds of ethnic groups indigenous to Papua New Guinea, the majority being from the group known as Papuans, whose ancestors arrived in the New Guinea region tens of thousands of years ago. The other indigenous peoples are Austronesians, their ancestors having arrived in the region less than four thousand years ago.
There are also numerous people from other parts of the world now resident, including Chinese, Europeans, Australians, Indonesians, Filipinos, Polynesians, and Micronesians (the last four belonging to the Austronesian family). Around 40,000 expatriates, mostly from Australia and China, were living in Papua New Guinea in 1975. About 20,000 Australians currently live in Papua New Guinea, representing 0.25% of its total population.
According to the CIA World Factbook (2018), Papua New Guinea has the second lowest urban population percentage in the world, with 13.2%, only behind Burundi. The geography and economy of Papua New Guinea are the main factors behind the low percentage. Papua New Guinea has an urbanisation rate of 2.51%, measured as the projected change in urban population from 2015 to 2020.
According to Statista (2017), the urban population percentage in Papua New Guinea from 2007 to 2017 was, year by year: 13.07, 13.06, 13.04, 13.02, 13.00, 12.98, 12.98, 12.99, 13.01, 13.05 and 13.10.
Papua New Guinea has more languages than any other country, with over 820 indigenous languages, representing 12% of the world's total, but most have fewer than 1,000 speakers. The most widely spoken indigenous language is Enga, with about 200,000 speakers, followed by Melpa and Huli. Indigenous languages are classified into two large groups, Austronesian languages and non-Austronesian, or Papuan, languages. There are four languages in Papua New Guinea with some statutory recognition: English, Tok Pisin, Hiri Motu, and, since 2015, sign language (which in practice means Papua New Guinean Sign Language).
English is the language of government and the education system, but it is not spoken widely.
The primary lingua franca of the country is Tok Pisin (commonly known in English as New Guinean Pidgin or Melanesian Pidgin), in which much of the debate in Parliament is conducted, many information campaigns and advertisements are presented, and until recently a national newspaper, "Wantok", was published. The only area where Tok Pisin is not prevalent is the southern region of Papua, where people often use the third official language, Hiri Motu.
Although it lies in the Papua region, Port Moresby has a highly diverse population which primarily uses Tok Pisin, and to a lesser extent English, with Motu spoken as the indigenous language in outlying villages. With an average of only 7,000 speakers per language, Papua New Guinea has a greater density of languages than any other nation on earth except Vanuatu.
Life expectancy in Papua New Guinea at birth was 64 years for men in 2016 and 68 for women.
Government expenditure on health in 2014 accounted for 9.5% of total government spending, with total health expenditure equating to 4.3% of GDP. There were five physicians per 100,000 people in the early 2000s.
The 2010 maternal mortality rate for Papua New Guinea was 250 per 100,000 births, compared with 311.9 in 2008 and 476.3 in 1990. The under-five mortality rate is 69 per 1,000 births, and neonatal mortality accounts for 37% of under-five deaths. The number of midwives per 1,000 live births is 1, and the lifetime risk of death for pregnant women is 1 in 94.
The government and judiciary uphold the constitutional right to freedom of speech, thought, and belief, and no legislation to curb those rights has been adopted. The 2011 census found that 95.6% of citizens identified themselves as Christian, 1.4% were not Christian, and 3.1% gave no answer. Virtually no respondent identified as being nonreligious. Religious syncretism is high, with many citizens combining their Christian faith with some traditional indigenous religious practices.
Most Christians in Papua New Guinea are Protestants, constituting roughly 70% of the total population. They are mostly represented by the Evangelical Lutheran Church of Papua New Guinea, the Seventh-day Adventist Church, diverse Pentecostal denominations, the United Church in Papua New Guinea and the Solomon Islands, the Evangelical Alliance Papua New Guinea, and the Anglican Church of Papua New Guinea. Apart from Protestants, there is a notable Roman Catholic minority with approximately 25% of the population.
There are approximately 2,000 Muslims in the country. The majority belong to the Sunni group, while a small number are Ahmadi. Non-traditional Christian churches and non-Christian religious groups are active throughout the country. The Papua New Guinea Council of Churches has stated that both Muslim and Confucian missionaries are highly active.
Traditional religions are often animist. Some also tend to have elements of veneration of the dead, though generalisation is suspect given the extreme heterogeneity of Melanesian societies. Prevalent among traditional tribes is the belief in "masalai", or evil spirits, which are blamed for "poisoning" people, causing calamity and death, and the practice of "puripuri" (sorcery).
It is estimated that more than a thousand cultural groups exist in Papua New Guinea. Because of this diversity, many styles of cultural expression have emerged. Each group has created its own expressive forms in art, dance, weaponry, costumes, singing, music, architecture and much more.
Most of these cultural groups have their own language. People typically live in villages that rely on subsistence farming. In some areas people hunt and collect wild plants (such as yam roots and karuka) to supplement their diets. Those who become skilled at hunting, farming and fishing earn a great deal of respect.
On the Sepik river, there is a tradition of wood carving, often in the form of plants or animals, representing ancestor spirits.
Seashells are no longer the currency of Papua New Guinea, as they once were in some regions; shell currency was abolished in 1933. The tradition nonetheless survives in local customs. In some cultures, to get a bride, a groom must bring a certain number of golden-edged clam shells as a bride price. In other regions, the bride price is paid in lengths of shell money, pigs, cassowaries or cash. Elsewhere, it is brides who traditionally pay a dowry.
People of the highlands engage in colourful local rituals that are called "sing sings". They paint themselves and dress up with feathers, pearls and animal skins to represent birds, trees or mountain spirits. Sometimes an important event, such as a legendary battle, is enacted at such a musical festival.
The country possesses a single UNESCO World Heritage Site, the Kuk Early Agricultural Site, inscribed in 2008. The country, however, has no elements yet inscribed on the UNESCO Intangible Cultural Heritage Lists, despite having one of the widest arrays of intangible cultural heritage elements in the world.
Sport is an important part of Papua New Guinean culture, and rugby league is by far the most popular sport. In a nation where communities are far apart and many people live at a minimal subsistence level, rugby league has been described as a substitute for tribal warfare, an explanation often offered for the local enthusiasm for the game (a matter of life or death). Many Papua New Guineans have become instant celebrities by representing their country or playing in an overseas professional league. Even Australian rugby league players who have played in the annual State of Origin series, which is celebrated feverishly every year in PNG, are among the most well-known people throughout the nation.
State of Origin is a highlight of the year for most Papua New Guineans, although the support is so passionate that many people have died over the years in violent clashes supporting their team. The Papua New Guinea national rugby league team usually plays against the Australian Prime Minister's XIII (a selection of NRL players) each year, normally in Port Moresby.
Although not as popular, Australian rules football is more significant in another way, as the national team is ranked second, only after Australia. Other major sports which have a part in the Papua New Guinea sporting landscape are association football, rugby union, basketball and, in eastern Papua, cricket.
The capital city, Port Moresby, hosted the Pacific Games in 2015.
A large proportion of the population is illiterate, and illiteracy is especially prevalent among women. Much of the education in PNG is provided by church institutions, including 500 schools of the Evangelical Lutheran Church of Papua New Guinea. Papua New Guinea has six universities, apart from other major tertiary institutions. The two founding universities are the University of Papua New Guinea, based in the National Capital District, and the Papua New Guinea University of Technology, based outside Lae, in Morobe Province.
The four other universities which were once colleges were established recently after gaining government recognition. These are the University of Goroka in the Eastern Highlands province, Divine Word University (run by the Catholic Church's Divine Word Missionaries) in Madang Province, Vudal University in East New Britain Province and Pacific Adventist University (run by the Seventh-day Adventist Church) in the National Capital District.
Papua New Guinea's "National Vision 2050" was adopted in 2009. This has led to the establishment of the Research, Science and Technology Council. At its gathering in November 2014, the Council re-emphasised the need to focus on sustainable development through science and technology.
"Vision 2050"'s medium-term priorities are:
According to Thomson Reuters' Web of Science, Papua New Guinea had the largest number of publications (110) among Pacific Island states in 2014, followed by Fiji (106). Nine out of ten scientific publications from Papua New Guinea focused on immunology, genetics, biotechnology and microbiology, and nine out of ten were co-authored by scientists from other countries, mainly Australia, the United States, the United Kingdom, Spain and Switzerland.
Forestry is an important economic resource for Papua New Guinea, but the industry uses low and semi-intensive technological inputs. As a result, product ranges are limited to sawn timber, veneer, plywood, block board, moulding, poles and posts, and wood chips. Few finished products are exported. A lack of automated machinery, coupled with inadequately trained local technical personnel, is among the obstacles to introducing automated processing and design. Policy-makers need to turn their attention to eliminating these barriers if forestry is to make a more efficient and sustainable contribution to national economic development.
In Papua New Guinea, renewable energy sources represent two-thirds of the total electricity supply. In 2015, the Secretariat of the Pacific Community observed that, 'while Fiji, Papua New Guinea, and Samoa are leading the way with large-scale hydropower projects, there is enormous potential to expand the deployment of other renewable energy options such as solar, wind, geothermal and ocean-based energy sources'. The European Union has funded the Renewable Energy in Pacific Island Countries Developing Skills and Capacity programme (EPIC). Since its inception in 2013, the programme has developed a master's programme in renewable energy management at the University of Papua New Guinea and helped to establish a Centre of Renewable Energy at the same university.
Papua New Guinea is one of the 15 beneficiaries of a programme on Adapting to Climate Change and Sustainable Energy worth €37.26 million. The programme resulted from the signing of an agreement in February 2014 between the European Union and the Pacific Islands Forum Secretariat. The other beneficiaries are the Cook Islands, Fiji, Kiribati, Marshall Islands, Federated States of Micronesia, Nauru, Niue, Palau, Samoa, Solomon Islands, Timor-Leste, Tonga, Tuvalu and Vanuatu.
Transport in Papua New Guinea is heavily limited by the country's mountainous terrain. As a result, air travel is the single most important form of transport for human and high density/value freight. Airplanes made it possible to open up the country during its early colonial period. Even today the two largest cities, Port Moresby and Lae, are only directly connected by planes. Port Moresby is not linked by road to any of the other major towns, and many remote villages can only be reached by light aircraft or on foot.
Jacksons International Airport, near Port Moresby, is the country's major international airport. In addition to two international airfields, Papua New Guinea has 578 airstrips, most of which are unpaved.
Poseidon
Poseidon was one of the Twelve Olympians in ancient Greek religion and myth, god of the sea, storms, earthquakes and horses. In pre-Olympian Bronze Age Greece, he was venerated as a chief deity at Pylos and Thebes. His Roman equivalent is Neptune.
Poseidon was protector of seafarers, and of many Hellenic cities and colonies. In Homer's "Iliad", Poseidon supports the Greeks against the Trojans during the Trojan War and in the "Odyssey", during the sea-voyage from Troy back home to Ithaca, the Greek hero Odysseus provokes Poseidon's fury by blinding his son, the Cyclops Polyphemus, resulting in Poseidon punishing him with storms, the complete loss of his ship and companions, and a ten-year delay. Poseidon is also the subject of a Homeric hymn. In Plato's "Timaeus" and "Critias", the island of Atlantis was Poseidon's domain.
The earliest attested occurrence of the name, written in Linear B, is "Po-se-da-o" or "Po-se-da-wo-ne", corresponding to "Poseidaōn" and "Poseidawonos" in Mycenaean Greek; in Homeric Greek the name appears as "Poseidaōn"; in Aeolic as "Poteidaōn"; and in Doric as "Poteidan", "Poteidaōn", and "Poteidas". The form "Poteidawon" appears in Corinth.
The origins of the name "Poseidon" are unclear. One theory breaks it down into an element meaning "husband" or "lord" (Greek "posis", from PIE "*pótis") and another element meaning "earth" ("da", Doric for "gē"), producing something like lord or spouse of "Da", i.e. of the earth; this would link him with Demeter, "Earth-mother". Walter Burkert finds that "the second element "δᾶ-" remains hopelessly ambiguous" and finds a "husband of Earth" reading "quite impossible to prove". According to Robert Beekes' "Etymological Dictionary of Greek", "there is no indication that "δᾶ" means 'earth'".
Another, more plausible, theory interprets the second element as related to the (presumed) Doric word *δᾶϝον "dâwon", "water", Proto-Indo-European "*dah₂-" "water" or "*dʰenh₂-" "to run, flow", Sanskrit दन् "dā́-nu-" "fluid, drop, dew" and names of rivers such as Danube (< "*Danuvius") or Don. This would make *"Posei-dawōn" into the master of waters. It seems that Poseidon was originally a god of the waters. There is also the possibility that the word has Pre-Greek origin. Plato in his dialogue Cratylus gives two traditional etymologies: either the sea restrained Poseidon when walking as a "foot-bond" (ποσίδεσμον), or he "knew many things" (πολλά εἰδότος or πολλά εἰδῶν).
At least a few sources deem "Poseidon" a "prehellenic" (i.e. Pelasgian) word, considering an Indo-European etymology "quite pointless".
The name of the Frisian and Scandinavian god Fosite or Forseti, who was venerated on the island of Heligoland, may have been derived from Poseidon. According to the German philologist, Hans Kuhn, the Germanic form "*Fosite" is linguistically identical to Greek Poseidon. Roman altars dedicated to Poseidon have been found in the Middle Rhine area.
Common epithets (or adjectives) applied to Poseidon are "Enosichthon" ("Earth Shaker" or "earth-shaking") and "Ennosigaios", used by Homer in the "Iliad" and by Nonnus in the "Dionysiaca". Other epithets for Poseidon are "Hippeios" ("belonging to a horse"), "Nauklarios" ("belonging to the ship-owners"), "Pelagikos" ("belonging to the sea"), "Petraios" ("rocky, stony"), "Ptortheion" ("promoter of vegetation"), and "Thukios" ("full of seaweed"), as well as several others.
Of the two, "Enosichthon" is attested earlier, appearing in Linear B as "E-ne-si-da-o-ne".
The epithets "Ennosigaios" (and "Ennosidas") indicate the chthonic nature of Poseidon, that is to say, Poseidon was regarded as holding sway over land as well as the sea.
Another epithet of Poseidon was "Dark-Haired".
If surviving Linear B clay tablets can be trusted, the name "po-se-da-wo-ne" ("Poseidon") occurs with greater frequency than does "di-u-ja" ("Zeus"). A feminine variant, "po-se-de-ia", is also found, indicating a lost consort goddess, in effect the precursor of Amphitrite. Poseidon frequently carries the title "wa-na-ka" (wanax) in Linear B inscriptions, as king of the underworld. The chthonic nature of Poseidon-Wanax is also indicated by his title "E-ne-si-da-o-ne" in Mycenaean Knossos and Pylos, a powerful attribute (earthquakes had accompanied the collapse of the Minoan palace-culture). In the cave of Amnisos (Crete), "Enesidaon" is related with the cult of Eileithyia, the goddess of childbirth, who was related with the annual birth of the divine child. During the Bronze Age a goddess of nature dominated both Minoan and Mycenaean cult, and "Wanax" ("wa-na-ka") was her male companion (paredros) in Mycenaean cult. It is possible that Demeter appears as "Da-ma-te" in a Linear B inscription (PN EN 609); however, the interpretation is still under dispute.
In Linear B inscriptions found at Pylos, "E-ne-si-da-o-ne" is related with Poseidon, and "Si-to Po-tini-ja" is probably related with Demeter. Tablets from Pylos record sacrificial goods destined for "the Two Queens and Poseidon" ("to the Two Queens and the King": "wa-na-soi", "wa-na-ka-te"). The "Two Queens" may be related with Demeter and Persephone, or their precursors, goddesses who were not associated with Poseidon in later periods.
The illuminating exception is the archaic and localised myth of the stallion Poseidon and mare Demeter at Phigalia in isolated and conservative Arcadia, noted by Pausanias (2nd century AD) as having fallen into desuetude: the stallion Poseidon pursues the mare-Demeter, and from the union she bears the horse Arion and a daughter (Despoina), who evidently had the shape of a mare too. The violated Demeter was "Demeter Erinys" (furious). In Arcadia, Demeter's mare-form was worshiped into historical times. Her "xoanon" at Phigaleia shows how the local cult interpreted her as a goddess of nature: a Medusa type with a horse's head and snaky hair, holding a dove and a dolphin, probably representing her power over air and water.
It seems that the Arcadian myth is related to the first Greek-speaking people who entered the region during the Bronze Age (Linear B represents an archaic Greek dialect), whose religious beliefs mixed with those of the indigenous population. It is possible that the Greeks did not bring with them other gods except Zeus, Eos, and the Dioskouroi. The horse was related to the liquid element and to the underworld, and Poseidon appears as a beast (horse) that is the river spirit of the underworld, as often happens in northern-European folklore, and not unusually in Greece. Poseidon "Wanax" is the male companion (paredros) of the goddess of nature. In the related Minoan myth, Pasiphaë mates with the white bull and bears the hybrid creature Minotaur; the bull was the old pre-Olympian Poseidon. The goddess of nature and her paredros survived in the Eleusinian cult, where the following words were uttered: "Mighty Potnia bore a strong son".
In the heavily sea-dependent Mycenaean culture, there is not sufficient evidence that Poseidon was connected with the sea. We do not know if "Posedeia" was a sea-goddess. Homer and Hesiod suggest that Poseidon became lord of the sea following the defeat of his father Cronus, when the world was divided by lot among his three sons; Zeus was given the sky, Hades the underworld, and Poseidon the sea, with the Earth and Mount Olympus belonging to all three. Walter Burkert suggests that the Hellene cult worship of Poseidon as a horse god may be connected to the introduction of the horse and war-chariot from Anatolia to Greece around 1600 BC.
It is almost certain that Poseidon was once worshiped as a horse, as is evident from his cult in the Peloponnese. However, he was originally a god of the waters, and he became the "earth-shaker" because the Greeks believed that earthquakes were caused by the erosion of rocks by water, by the rivers they saw disappear into the earth and then burst out again. This is what the natural philosophers Thales, Anaximenes and Aristotle believed, a view that probably did not differ from folk belief. Later, when the Mycenaeans travelled by sea, he was assigned a role as god of the sea.
In any case, the early importance of Poseidon can still be glimpsed in Homer's "Odyssey", where Poseidon rather than Zeus is the major mover of events. In Homer, Poseidon is the master of the sea.
Poseidon was a major civic god of several cities: in Athens, he was second only to Athena in importance, while in Corinth and many cities of Magna Graecia he was the chief god of the polis.
In his benign aspect, Poseidon was seen as creating new islands and offering calm seas. When offended or ignored, he supposedly struck the ground with his trident and caused chaotic springs, earthquakes, drownings and shipwrecks. Sailors prayed to Poseidon for a safe voyage, sometimes drowning horses as a sacrifice; in this way, according to a fragmentary papyrus, Alexander the Great paused at the Syrian seashore before the climactic battle of Issus, and resorted to prayers, "invoking Poseidon the sea-god, for whom he ordered a four-horse chariot to be cast into the waves."
According to Pausanias, Poseidon was one of the caretakers of the oracle at Delphi before Olympian Apollo took it over. Apollo and Poseidon worked closely in many realms: in colonization, for example, Delphic Apollo provided the authorization to go out and settle, while Poseidon watched over the colonists on their way, and provided the lustral water for the foundation-sacrifice. Xenophon's "Anabasis" describes a group of Spartan soldiers in 400–399 BC singing to Poseidon a paean—a kind of hymn normally sung for Apollo.
Like Dionysus, who inflamed the maenads, Poseidon also caused certain forms of mental disturbance. A Hippocratic text of ca 400 BC, "On the Sacred Disease" says that he was blamed for certain types of epilepsy.
Poseidon was known in various guises, denoted by epithets. In the town of Aegae in Euboea, he was known as "Poseidon Aegaeus" and had a magnificent temple upon a hill. Poseidon also had a close association with horses, known under the epithet "Poseidon Hippios", usually in Arcadia. He is more often regarded as the tamer of horses, but in some myths he is their father, either by spilling his seed upon a rock or by mating with a creature who then gave birth to the first horse. He was closely associated with springs, which he created with a strike of his trident; the names of many springs, like Hippocrene and Aganippe on Helikon, contain the word for horse, "hippos" (compare also Glukippe, Hyperippe). In the historical period, Poseidon was often referred to by the epithets "Enosichthon", "Seisichthon", "Ennosigaios", and "Gaiēochos", all meaning "earth-shaker" and referring to his role in causing earthquakes.
Some other epithets of Poseidon are:
Poseidon was the second son of the Titans Cronus and Rhea. In most accounts he is swallowed by Cronus at birth and is later saved, along with his other brothers and sisters, by Zeus.
However, in some versions of the story, he, like his brother Zeus, did not share the fate of his other brother and sisters who were eaten by Cronus. He was saved by his mother Rhea, who concealed him among a flock of lambs and pretended to have given birth to a colt, which she gave to Cronus to devour.
According to John Tzetzes, the "kourotrophos", or nurse, of Poseidon was Arne, who denied knowing where he was when Cronus came searching; according to Diodorus Siculus, Poseidon was raised by the Telchines on Rhodes, just as Zeus was raised by the Korybantes on Crete.
According to a single reference in the "Iliad", when the world was divided by lot in three, Zeus received the sky, Hades the underworld and Poseidon the sea.
In Homer's "Odyssey" ("Book V", ln. 398), Poseidon has a home in "Aegae".
Athena became the patron goddess of the city of Athens after a competition with Poseidon. Yet Poseidon remained a numinous presence on the Acropolis in the form of his surrogate, Erechtheus. At the dissolution festival at the end of the year in the Athenian calendar, the Skira, the priests of Athena and the priest of Poseidon would process under canopies to Eleusis. They agreed that each would give the Athenians one gift and the Athenians would choose whichever gift they preferred. Poseidon struck the ground with his trident and a spring sprang up; the water was salty and not very useful, whereas Athena offered them an olive tree.
The Athenians or their king, Cecrops, accepted the olive tree and along with it Athena as their patron, for the olive tree brought wood, oil and food. After the fight, infuriated at his loss, Poseidon sent a monstrous flood to the Attic Plain, to punish the Athenians for not choosing him. The depression made by Poseidon's trident and filled with salt water was surrounded by the northern hall of the Erechtheum, remaining open to the air. "In cult, Poseidon was identified with Erechtheus," Walter Burkert noted; "the myth turns this into a temporal-causal sequence: in his anger at losing, Poseidon led his son Eumolpus against Athens and killed Erectheus."
The contest of Athena and Poseidon was the subject of the reliefs on the western pediment of the Parthenon, the first sight that greeted the arriving visitor.
This myth is construed by Robert Graves and others as reflecting a clash between the inhabitants during Mycenaean times and newer immigrants. Athens at its height was a significant sea power, at one point defeating the Persian fleet at Salamis Island in a sea battle.
Poseidon and Apollo, having offended Zeus by their rebellion in Hera's scheme, were temporarily stripped of their divine authority and sent to serve King Laomedon of Troy. He had them build huge walls around the city and promised to reward them well, a promise he then refused to fulfill. In vengeance, before the Trojan War, Poseidon sent a sea monster to attack Troy. The monster was later killed by Heracles.
Poseidon was said to have had many lovers of both sexes (see expandable list below). His consort was Amphitrite, a nymph and ancient sea-goddess, daughter of Nereus and Doris. Together they had a son named Triton, a merman.
Poseidon was the father of many heroes. He is thought to have fathered the famed Theseus.
A mortal woman named Tyro was married to Cretheus (with whom she had one son, Aeson) but loved Enipeus, a river god. She pursued Enipeus, who refused her advances. One day Poseidon, filled with lust for Tyro, disguised himself as Enipeus, and from their union were born the heroes Pelias and Neleus, twin boys. Poseidon also had an affair with Alope, his granddaughter through Cercyon, his son and King of Eleusis, begetting the Attic hero Hippothoon. Cercyon had his daughter buried alive, but Poseidon turned her into the spring Alope, near Eleusis. Poseidon rescued Amymone from a lecherous satyr and then fathered a child, Nauplius, by her.
After having raped Caeneus, Poseidon fulfilled her request and changed her into a male warrior.
A mortal woman named Cleito once lived on an isolated island; Poseidon fell in love with the human mortal and created a dwelling sanctuary at the top of a hill near the middle of the island and surrounded the dwelling with rings of water and land to protect her. She gave birth to five sets of twin boys; the firstborn, Atlas, became the first ruler of Atlantis.
Not all of Poseidon's children were human. In an archaic myth, Poseidon once pursued Demeter. She spurned his advances, turning herself into a mare so that she could hide in a herd of horses; he saw through the deception and became a stallion and captured her. Their child was a horse, Arion, which was capable of human speech. Poseidon also raped Medusa on the floor of a temple to Athena. Medusa was then changed into a monster by Athena. When she was later beheaded by the hero Perseus, Chrysaor and Pegasus emerged from her neck.
His other children include Polyphemus (the Cyclops) and, finally, Alebion and Bergion and Otos and Ephialtae (the giants).
Male lovers
In Greek art, Poseidon rides a chariot that was pulled by a hippocampus or by horses that could ride on the sea. He was associated with dolphins and three-pronged fish spears (tridents). He lived in a palace on the ocean floor, made of coral and gems.
In the "Iliad" Poseidon favors the Greeks, and on several occasions takes an active part in the battle against the Trojan forces. However, in Book XX he rescues Aeneas after the Trojan prince is laid low by Achilles.
In the "Odyssey", Poseidon is notable for his hatred of Odysseus who blinded the god's son, the Cyclops Polyphemus. The enmity of Poseidon prevents Odysseus's return home to Ithaca for many years. Odysseus is even told, notwithstanding his ultimate safe return, that to placate the wrath of Poseidon will require one more voyage on his part.
In the "Aeneid", Neptune is still resentful of the wandering Trojans, but is not as vindictive as Juno, and in Book I he rescues the Trojan fleet from the goddess's attempts to wreck it, although his primary motivation for doing this is his annoyance at Juno's having intruded into his domain.
A hymn to Poseidon included among the Homeric Hymns is a brief invocation, a seven-line introduction that addresses the god as both "mover of the earth and barren sea, god of the deep who is also lord of Helicon and wide Aegae", and specifies his twofold nature as an Olympian: "a tamer of horses and a saviour of ships".
Poseidon appears in "Percy Jackson and the Olympians" as the father of Percy Jackson and Tyson the Cyclops. He also appears in the ABC television series "Once Upon a Time" as the guest star of the second half of season four played by Ernie Hudson. In this version, Poseidon is portrayed as the father of the Sea Witch Ursula.
Population
In biology, a population is all the organisms of the same group or species who live in a particular geographical area and are capable of interbreeding. The area of a sexual population is the area where inter-breeding is possible between any pair within the area and more probable than cross-breeding with individuals from other areas.
In sociology, population refers to a collection of humans. Demography is a social science which entails the statistical study of populations.
Population, in simpler terms, is the number of people in a city or town, region, country or world; population is usually determined by a process called census (a process of collecting, analyzing, compiling and publishing data).
In population genetics a sexual population is a set of organisms in which any pair of members can breed together. This means that they can regularly exchange gametes to produce normally-fertile offspring, and such a breeding group is also known therefore as a "gamodeme". This also implies that all members belong to the same species.
If the gamodeme is very large (theoretically, approaching infinity), and all gene alleles are uniformly distributed by the gametes within it, the gamodeme is said to be panmictic. Under this state, allele (gamete) frequencies can be converted to genotype (zygote) frequencies by expanding an appropriate quadratic equation, as shown by Sir Ronald Fisher in his establishment of quantitative genetics.
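The quadratic expansion mentioned above can be illustrated numerically. The sketch below (a minimal illustration, with an arbitrarily chosen allele frequency) expands (p + q)² = p² + 2pq + q² to convert allele (gamete) frequencies into expected genotype (zygote) frequencies under panmixia:

```python
# Hardy-Weinberg expansion: two alleles A and a at frequencies p and q
# (p + q = 1); random union of gametes gives zygote frequencies
#   AA: p^2,  Aa: 2pq,  aa: q^2
def genotype_frequencies(p):
    """Expected genotype frequencies in a panmictic gamodeme."""
    q = 1.0 - p
    return {"AA": p * p, "Aa": 2 * p * q, "aa": q * q}

freqs = genotype_frequencies(0.7)  # illustrative allele frequency
assert abs(sum(freqs.values()) - 1.0) < 1e-12  # frequencies sum to 1
```

The three terms of the expansion are the three genotype classes; any allele frequency between 0 and 1 yields frequencies that sum to unity.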
This seldom occurs in nature: localization of gamete exchange – through dispersal limitations, preferential mating, cataclysm, or other cause – may lead to small actual gamodemes which exchange gametes reasonably uniformly within themselves but are virtually separated from their neighboring gamodemes. However, there may be low frequencies of exchange with these neighbors. This may be viewed as the breaking up of a large sexual population (panmictic) into smaller overlapping sexual populations. This failure of panmixia leads to two important changes in overall population structure: (1) the component gamodemes vary (through gamete sampling) in their allele frequencies when compared with each other and with the theoretical panmictic original (this is known as dispersion, and its details can be estimated using expansion of an appropriate binomial equation); and (2) the level of homozygosity rises in the entire collection of gamodemes.

The overall rise in homozygosity is quantified by the inbreeding coefficient ("f" or "φ"). Note that all homozygotes are increased in frequency – both the deleterious and the desirable. The mean phenotype of the gamodemes collection is lower than that of the panmictic original – which is known as inbreeding depression. It is most important to note, however, that some dispersion lines will be superior to the panmictic original, while some will be about the same, and some will be inferior. The probabilities of each can be estimated from those binomial equations.

In plant and animal breeding, procedures have been developed which deliberately utilize the effects of dispersion (such as line breeding, pure-line breeding, backcrossing). It can be shown that dispersion-assisted selection leads to the greatest genetic advance ("ΔG" = change in the phenotypic mean), and is much more powerful than selection acting without attendant dispersion. This is so for both allogamous (random fertilization) and autogamous (self-fertilization) gamodemes.
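The rise in homozygosity quantified by the inbreeding coefficient "f" is conventionally written as each homozygote class gaining fpq while the heterozygote class loses 2fpq. A minimal sketch (allele frequency and f chosen arbitrarily for illustration):

```python
def inbred_genotype_frequencies(p, f):
    """Genotype frequencies with inbreeding coefficient f (f = 0 is panmixia)."""
    q = 1.0 - p
    return {
        "AA": p * p + f * p * q,    # homozygotes rise by f*p*q each
        "Aa": 2 * p * q * (1 - f),  # heterozygotes fall by 2*f*p*q
        "aa": q * q + f * p * q,
    }

panmictic = inbred_genotype_frequencies(0.7, 0.0)
inbred = inbred_genotype_frequencies(0.7, 0.25)
# all homozygotes, deleterious and desirable alike, increase in frequency
assert inbred["AA"] > panmictic["AA"] and inbred["aa"] > panmictic["aa"]
assert abs(sum(inbred.values()) - 1.0) < 1e-12
```

Note that allele frequencies are unchanged by inbreeding; only their packaging into genotypes shifts toward homozygotes.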
In ecology, the population of a certain species in a certain area can be estimated using the Lincoln Index.
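The Lincoln Index is the standard mark-and-recapture estimator: mark and release n₁ individuals, later catch a second sample of n₂, of which m carry marks; the population is estimated as N ≈ n₁n₂/m. A short sketch with made-up sample sizes:

```python
def lincoln_index(marked_first, caught_second, recaptured):
    """Mark-recapture population estimate N = (n1 * n2) / m."""
    if recaptured == 0:
        raise ValueError("no recaptures: estimate undefined")
    return marked_first * caught_second / recaptured

# e.g. 50 fish marked and released; 60 caught later, of which 12 bear marks
assert lincoln_index(50, 60, 12) == 250.0
```

The estimate assumes marked and unmarked individuals mix freely and are equally catchable between the two samples.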
According to the United States Census Bureau, the world's population was about 7.55 billion in 2019, the 7 billion mark having been surpassed on 12 March 2012. According to a separate estimate by the United Nations, Earth's population exceeded seven billion in October 2011, a milestone that, according to UNFPA, offers unprecedented challenges and opportunities to all of humanity.
According to papers published by the United States Census Bureau, the world population hit 6.5 billion on 24 February 2006. The United Nations Population Fund designated 12 October 1999 as the approximate day on which world population reached 6 billion. This was about 12 years after the world population reached 5 billion in 1987, and six years after the world population reached 5.5 billion in 1993. The population of countries such as Nigeria is not even known to the nearest million, so there is a considerable margin of error in such estimates.
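The milestone dates above imply average annual growth rates that can be back-calculated; the sketch below assumes, for simplicity, exponential growth between milestones:

```python
def implied_annual_growth(pop_start, pop_end, years):
    """Average annual growth rate, assuming exponential growth."""
    return (pop_end / pop_start) ** (1.0 / years) - 1.0

# 5 billion (1987) to 6 billion (1999): roughly 1.5% per year
rate = implied_annual_growth(5e9, 6e9, 12)
assert 0.014 < rate < 0.016
```

Given the stated margins of error in the underlying counts, such back-calculated rates are indicative rather than precise.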
Researcher Carl Haub calculated that a total of over 100 billion people have probably been born in the last 2000 years.
Population growth increased significantly as the Industrial Revolution gathered pace from 1700 onwards. The last 50 years have seen a yet more rapid increase in the rate of population growth due to medical advances and substantial increases in agricultural productivity, particularly beginning in the 1960s, made by the Green Revolution. In 2017 the United Nations Population Division projected that the world's population will reach about 9.8 billion in 2050 and 11.2 billion in 2100.
In the future, the world's population is expected to peak, after which it will decline due to economic reasons, health concerns, land exhaustion and environmental hazards. According to one report, it is very likely that the world's population will stop growing before the end of the 21st century. Further, there is some likelihood that population will actually decline before 2100. Population has already declined in the last decade or two in Eastern Europe, the Baltics and in the Commonwealth of Independent States.
The population pattern of less-developed regions of the world in recent years has been marked by gradually declining birth rates. These followed an earlier sharp reduction in death rates. This transition from high birth and death rates to low birth and death rates is often referred to as the demographic transition.
Human population control is the practice of altering the rate of growth of a human population. Historically, human population control has been implemented with the goal of limiting the rate of population growth. In the period from the 1950s to the 1980s, concerns about global population growth and its effects on poverty, environmental degradation, and political stability led to efforts to reduce population growth rates. While population control can involve measures that improve people's lives by giving them greater control of their reproduction, a few programs, most notably the Chinese government's one-child per family policy, have resorted to coercive measures.
In the 1970s, tension grew between population control advocates and women's health activists who advanced women's reproductive rights as part of a human rights-based approach. Growing opposition to the narrow population control focus led to a significant change in population control policies in the early 1980s.
Psychological egoism
Psychological egoism is the view that humans are always motivated by self-interest and selfishness, even in what seem to be acts of altruism. It claims that, when people choose to help others, they do so ultimately because of the personal benefits that they themselves expect to obtain, directly or indirectly, from so doing. This is a descriptive rather than normative view, since it only makes claims about how things are, not how they ought to be. It is, however, related to several other normative forms of egoism, such as ethical egoism and rational egoism.
A specific form of psychological egoism is psychological hedonism, the view that the ultimate motive for all voluntary human action is the desire to experience pleasure or to avoid pain. Many discussions of psychological egoism focus on this type, but the two are not the same: theorists have explained behavior motivated by self-interest without using pleasure and pain as the final causes of behavior. Psychological hedonism argues that actions are driven by the need for pleasure, both immediate and future. However, immediate gratification can be sacrificed for the chance of greater future pleasure. Further, humans are not motivated strictly to avoid pain and pursue only pleasure; instead, they will endure pain to achieve the greatest net pleasure. Accordingly, all actions are tools for increasing pleasure or decreasing pain, even those defined as altruistic and those that cause no immediate change in satisfaction levels.
In ancient philosophy, Epicureanism claimed that humans live to maximize pleasure. Epicurus argued that the theory of human behavior being motivated by pleasure alone is evidenced from infancy to adulthood. Humanity performs altruistic, honorable, and virtuous acts not for the sake of another or because of a moral code, but rather to increase the well-being of the self.
In modern philosophy, Jeremy Bentham asserted, like Epicurus, that human behavior is governed by a need to increase pleasure and decrease pain. He explicitly described the types and qualities of pain and pleasure that exist, and how human motives can be singularly explained by psychological hedonism. Bentham also attempted to quantify it: through his hedonic calculus, the measurement of relative gains and losses in pain and pleasure, he endeavored to determine the most pleasurable action a human could choose in a given situation.
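Bentham's hedonic calculus weighed factors such as the intensity, duration, certainty, propinquity, fecundity, purity, and extent of a pleasure. A minimal sketch of such a scoring procedure might look as follows; the equal weighting, the 0–10 scale, and the example actions are assumptions made purely for illustration, not part of Bentham's own formulation.

```python
# Toy version of the hedonic (felicific) calculus: rate each of Bentham's
# seven factors for an action and sum them. Higher totals mean more
# expected net pleasure. Weights and ratings here are invented.

FACTORS = ("intensity", "duration", "certainty", "propinquity",
           "fecundity", "purity", "extent")

def hedonic_score(action: dict) -> int:
    """Sum the seven factor ratings for one candidate action."""
    return sum(action[f] for f in FACTORS)

# Two hypothetical actions with made-up ratings on a 0-10 scale.
donate = dict(intensity=4, duration=6, certainty=7, propinquity=3,
              fecundity=8, purity=6, extent=9)
feast  = dict(intensity=8, duration=2, certainty=9, propinquity=9,
              fecundity=2, purity=4, extent=1)

best = max([("donate", donate), ("feast", feast)],
           key=lambda pair: hedonic_score(pair[1]))
print(best[0], hedonic_score(best[1]))  # donate 43
```

The point of the sketch is only structural: the calculus treats choosing an action as maximizing a single numeric quantity, which is exactly the quantitative character of Bentham's hedonism discussed above.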
From an evolutionary perspective, Herbert Spencer, a psychological egoist, argued that all animals primarily seek to survive and protect their lineage. Essentially, the need for the individual and for the individual's immediate family to live supersedes others' need to live. All species attempt to maximize their own chances of survival and, therefore, well-being. Spencer asserted that the best-adapted creatures will have their pleasure levels outweigh their pain levels in their environments. Thus, pleasure meant an animal was fulfilling its egoist goal of self-survival, and pleasure would always be pursued because species constantly strive for survival.
Whether or not Sigmund Freud was a psychological egoist, his concept of the pleasure principle borrowed much from psychological egoism, and from psychological hedonism in particular. The pleasure principle rules the behavior of the id, the unconscious force driving humans to release tension from unfulfilled desires. When Freud introduced Thanatos and its opposing force, Eros, the pleasure principle of psychological hedonism became aligned with Eros, which drives a person to satiate sexual and reproductive desires. Thanatos, by contrast, seeks the cessation of pain through death and the end of the pursuit of pleasure: a hedonism thus rules Thanatos as well, but one centered on the complete avoidance of pain rather than on the psychological hedonist pursuit of pleasure alongside the avoidance of pain. Freud therefore believed in qualitatively different hedonisms, in which the total avoidance of pain and the achievement of the greatest net pleasure are separate and associated with distinct functions and drives of the human psyche. Although Eros and Thanatos are ruled by qualitatively different types of hedonism, Eros remains under the rule of Jeremy Bentham's quantitative psychological hedonism because it seeks the greatest net pleasure.
Traditional behaviorism dictates that all human behavior is explained by classical and operant conditioning. Operant conditioning works through reinforcement and punishment, which add or remove pleasure and pain to manipulate behavior. Using pleasure and pain to control behavior means behaviorists assumed the principles of psychological hedonism could be applied to predicting human behavior. For example, Thorndike's law of effect states that behaviors associated with pleasantness will be learned and those associated with pain will be extinguished. Often, behaviorist experiments using humans and animals are built around the assumption that subjects will pursue pleasure and avoid pain. Although psychological hedonism is incorporated into the fundamental principles and experimental designs of behaviorism, behaviorism itself explains and interprets only observable behavior and therefore does not theorize about the ultimate cause of human behavior. Thus, behaviorism uses, but does not strictly endorse, psychological hedonism over other understandings of the ultimate drive of human behavior.
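The law of effect described above can be caricatured as a simple reward-weighted update rule. The following toy model is an illustration only, with invented reward values and update constants; it is not a claim about actual behaviorist methodology.

```python
import random

# Toy law of effect: responses followed by a "pleasant" outcome (positive
# reward) grow in strength and so become more probable; responses followed
# by a "painful" outcome (negative reward) weaken. All numbers are invented.

random.seed(0)
strength = {"lever_press": 1.0, "ignore": 1.0}   # response strengths
reward   = {"lever_press": 1.0, "ignore": -0.5}  # pleasure (+) / pain (-)

def choose() -> str:
    """Pick a response with probability proportional to its strength."""
    total = sum(strength.values())
    r = random.uniform(0, total)
    for action, s in strength.items():
        if r <= s:
            return action
        r -= s
    return action  # guard against floating-point edge cases

for _ in range(200):  # repeated conditioning trials
    a = choose()
    strength[a] = max(0.05, strength[a] + 0.1 * reward[a])

print(strength["lever_press"] > strength["ignore"])  # True: reinforced
```

After repeated trials the rewarded response dominates, mirroring the claim that behaviors paired with pleasantness are learned while those paired with pain are extinguished.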
Psychological egoism is controversial. Proponents cite evidence from introspection: reflection on one's own actions may reveal their motives and intended results to be based on self-interest. Psychological egoists and hedonists have found, through numerous observations of natural human behavior, that behavior can be manipulated through reward and punishment, both of which have direct effects of pain and pleasure. Also, the work of some social scientists has empirically supported this theory. Further, they claim psychological egoism posits a theory that is a more parsimonious explanation than competing theories.
Opponents have argued that psychological egoism is not more parsimonious than other theories. For example, a theory that claims altruism occurs for the sake of altruism explains altruism with less complexity than the egoistic approach. The psychological egoist asserts that humans act altruistically for selfish reasons even when the cost of the altruistic action far outweighs the reward of acting selfishly, because altruism is performed to fulfill the person's desire to act altruistically. Other critics argue that it is false either because it is an over-simplified interpretation of behavior or because there exists empirical evidence of altruistic behaviour. Recently, some have argued that evolutionary theory provides evidence against it.
Critics have stated that proponents of psychological egoism often confuse the satisfaction of their own desires with the satisfaction of their own "self-regarding" desires. Even though it is true that every human being seeks his own satisfaction, this sometimes may only be achieved via the well-being of his neighbor. An example of this situation could be phoning for an ambulance when a car accident has happened. In this case, the caller desires the well-being of the victim, even though the desire itself is the caller's own.
To counter this critique, psychological egoism asserts that all such desires for the well-being of others are ultimately derived from self-interest. For example, German philosopher Friedrich Nietzsche was a psychological egoist for some of his career, though he is said to have repudiated that later in his campaign against morality. He argues in §133 of "The Dawn" that in such cases compassionate impulses arise out of the projection of our identity onto the object of our feeling. He gives some hypothetical examples as illustrations of his thesis: that of a person feeling horrified after witnessing a personal feud, or coughing blood, or the impulse felt to save a person who is drowning. In such cases, according to Nietzsche, unconscious fears regarding our own safety come into play. The suffering of another person is felt as a threat to our own happiness and sense of safety, because it reveals our own vulnerability to misfortunes; thus, by relieving it, one could also ameliorate those personal sentiments. Essentially, proponents argue that altruism is rooted in self-interest, whereas opponents claim altruism occurs for altruism's sake or is caused by a non-selfish reason.
David Hume once wrote, "What interest can a fond mother have in view, who loses her health by assiduous attendance on her sick child, and afterwards languishes and dies of grief, when freed, by its death [the child's], from the slavery of that attendance?". It seems incorrect to describe such a mother's goal as self-interested.
Psychological egoists, however, respond that helping others in such ways is ultimately motivated by some form of self-interest, such as non-sensory satisfaction, the expectation of reciprocation, the desire to gain respect or reputation, or by the expectation of a reward in a putative afterlife. The helpful action is merely instrumental to these ultimately selfish goals.
In the ninth century, Mohammed Ibn Al-Jahm Al-Barmaki (محمد بن الجـَهْم البَرمَكي) was quoted as saying:
"No one deserves thanks from another for something he has done for him or goodness he has done: either he wishes to get a reward from God, and therefore wanted to serve himself; or he wanted to get a reward from people, and therefore did it to gain profit for himself; or to be mentioned and praised by people, which is also for himself; or he acted out of his mercy and tenderheartedness, and so has simply done that goodness to pacify these feelings and treat himself."
This sort of explanation appears to be close to the view of La Rochefoucauld (and perhaps Hobbes).
According to psychological hedonism, the ultimate egoistic motive is to gain good feelings of pleasure and avoid bad feelings of pain. Other, less restricted forms of psychological egoism may allow the ultimate goal of a person to include such things as avoiding punishments from oneself or others (such as guilt or shame) and attaining rewards (such as pride, self-worth, power or reciprocal beneficial action).
Some psychologists explain empathy in terms of psychological hedonism. According to the "merge with others hypothesis", empathy increases the more an individual feels like they are one with another person, and decreases as the oneness decreases. Therefore, altruistic actions emanating from empathy, and empathy itself, are caused by making others' interests our own: the satisfaction of their desires becomes our own, not just theirs. Both cognitive studies and neuropsychological experiments have provided evidence for this theory: as humans' oneness with others increases, empathy increases, and as empathy increases, the inclination to act altruistically increases. Neuropsychological studies have linked mirror neurons to humans experiencing empathy. Mirror neurons are activated both when a human (or animal) performs an action and when it observes another human (or animal) performing the same action. Researchers have found that the more these mirror neurons fire, the more human subjects report empathy. From a neurological perspective, scientists argue that when a human empathizes with another, the brain operates as if the human were actually participating in the actions of the other person. Thus, when performing altruistic actions motivated by empathy, humans experience someone else's pleasure at being helped. Therefore, in performing acts of altruism, people act in their own self-interest even at a neurological level.
Even accepting the theory of universal positivity, it is difficult to explain, for example, the actions of a soldier who sacrifices his life by jumping on a grenade in order to save his comrades. In this case, there is simply no time to experience positivity toward one's actions, although a psychological egoist may argue that the soldier experiences moral positivity in knowing that he is sacrificing his life to ensure the survival of his comrades, or that he is avoiding the negativity associated with the thought of all his comrades dying. Psychological egoists argue that although some actions may not clearly cause physical or social positivity, or avoid negativity, one's current contemplation or reactionary mental expectation of these is the main factor in the decision. When a dog is first taught to sit, it is given a biscuit. This is repeated until, finally, the dog sits without requiring a biscuit. Psychological egoists could claim that actions which do not 'directly' result in positivity or reward are not dissimilar from the actions of the dog. In this case, the action (sitting on command) will have become a force of habit, and breaking such a habit would result in mental discomfort. This basic theory of conditioning behaviour, applied to other seemingly ineffective positive actions, can be used to explain moral responses that are instantaneous and instinctive, such as the soldier jumping on the grenade.
Psychological egoism has been accused of being circular: "If a person willingly performs an act, that means he derives personal enjoyment from it; therefore, people only perform acts that give them personal enjoyment." In particular, seemingly altruistic acts must be performed because people derive enjoyment from them and are therefore, in reality, egoistic. This statement is circular because its conclusion is identical to its hypothesis: it assumes that people only perform acts that give them personal enjoyment, and concludes that people only perform acts that give them personal enjoyment. This objection was tendered by William Hazlitt and Thomas Macaulay in the 19th century, and has been restated many times since. An earlier version of the same objection was made by Joseph Butler in 1726.
Joel Feinberg, in his 1958 paper "Psychological Egoism", embraces a similar critique by drawing attention to the infinite regress of psychological egoism. He expounds it in the following cross-examination:
In their 1998 book, "Unto Others", Sober and Wilson detailed an evolutionary argument based on the likelihood for egoism to evolve under the pressures of natural selection. Specifically, they focus on the human behavior of parental care. To set up their argument, they propose two potential psychological mechanisms for this. The hedonistic mechanism is based on a parent's ultimate desire for pleasure or the avoidance of pain and a belief that caring for its offspring will be instrumental to that. The altruistic mechanism is based on an altruistic ultimate desire to care for its offspring.
Sober and Wilson argue that when evaluating the likelihood of a given trait to evolve, three factors must be considered: availability, reliability and energetic efficiency. The genes for a given trait must first be "available" in the gene pool for selection. The trait must then "reliably" produce an increase in fitness for the organism. The trait must also operate with "energetic efficiency" to not limit the fitness of the organism. Sober and Wilson argue that there is neither reason to suppose that an altruistic mechanism should be any less available than a hedonistic one nor reason to suppose that the content of thoughts and desires (hedonistic vs. altruistic) should impact energetic efficiency. As availability and energetic efficiency are taken to be equivalent for both mechanisms it follows that the more reliable mechanism will then be the more likely mechanism.
For the hedonistic mechanism to produce the behavior of caring for offspring, the parent must believe that the caring behavior will produce pleasure or avoidance of pain for the parent. Sober and Wilson argue that the belief also must be true and constantly reinforced, or it would not be likely enough to persist. If the belief fails then the behavior is not produced. The altruistic mechanism does not rely on belief; therefore, they argue that it would be less likely to fail than the alternative, i.e. more reliable.
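The reliability comparison can be made concrete with a toy probability model: the hedonistic mechanism produces the caring behavior only when both the ultimate desire and the instrumental belief are in place, while the altruistic mechanism depends on the desire alone. The probabilities below are invented purely for illustration.

```python
# Toy model of Sober and Wilson's reliability argument. A mechanism with
# more links in its causal chain has more ways to fail: the hedonistic
# route needs desire AND belief, the altruistic route needs desire only.
# Both probabilities are assumptions for this sketch.

p_desire = 0.99   # probability the ultimate desire is in place
p_belief = 0.90   # probability the instrumental belief holds

p_hedonistic = p_desire * p_belief   # desire AND belief must hold
p_altruistic = p_desire              # desire alone suffices

print(p_altruistic > p_hedonistic)   # True: fewer links, fewer failures
```

Whatever the actual numbers, as long as the belief can fail at all (p_belief < 1), the product is strictly smaller, which is the structural point of the argument.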
In philosopher Derek Parfit's 2011 book "On What Matters", Volume 1, Parfit presents an argument against psychological egoism that centers on an apparent equivocation between different senses of the word "want".
Plato
Plato ("Plátōn", in Classical Attic; 428/427 or 424/423 – 348/347 BC) was an Athenian philosopher during the Classical period in Ancient Greece, founder of the Platonist school of thought and of the Academy, the first institution of higher learning in the Western world.
He is widely considered the pivotal figure in the history of Ancient Greek and Western philosophy, along with his teacher, Socrates, and his most famous student, Aristotle. Plato has also often been cited as one of the founders of Western religion and spirituality. The so-called Neoplatonism of philosophers like Plotinus and Porphyry influenced Saint Augustine and thus Christianity. Alfred North Whitehead once noted: "the safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato."
Plato was the innovator of the written dialogue and dialectic forms in philosophy. Plato is also considered the founder of Western political philosophy. His most famous contribution is the theory of Forms known by pure reason, in which Plato presents a solution to the problem of universals known as Platonism (also ambiguously called either Platonic realism or Platonic idealism). He is also the namesake of Platonic love and the Platonic solids.
His own most decisive philosophical influences are usually thought to have been along with Socrates, the pre-Socratics Pythagoras, Heraclitus and Parmenides, although few of his predecessors' works remain extant and much of what we know about these figures today derives from Plato himself. Unlike the work of nearly all of his contemporaries, Plato's entire body of work is believed to have survived intact for over 2,400 years. Although their popularity has fluctuated over the years, the works of Plato have never been without readers since the time they were written.
Due to a lack of surviving accounts, little is known about Plato's early life and education. Plato belonged to an aristocratic and influential family. According to a disputed tradition, reported by doxographer Diogenes Laërtius, Plato's father Ariston traced his descent from the king of Athens, Codrus, and the king of Messenia, Melanthus.
Plato's mother was Perictione, whose family boasted of a relationship with the famous Athenian lawmaker and lyric poet Solon, one of the seven sages, who repealed the laws of Draco (except for the death penalty for homicide). Perictione was sister of Charmides and niece of Critias, both prominent figures of the Thirty Tyrants, known as the Thirty, the brief oligarchic regime (404–403 BC), which followed on the collapse of Athens at the end of the Peloponnesian War (431–404 BC). According to some accounts, Ariston tried to force his attentions on Perictione, but failed in his purpose; then the god Apollo appeared to him in a vision, and as a result, Ariston left Perictione unmolested.
The exact time and place of Plato's birth are unknown. Based on ancient sources, most modern scholars believe that he was born in Athens or Aegina between 429 and 423 BC, not long after the start of the Peloponnesian War. The traditional date of Plato's birth during the 87th or 88th Olympiad, 428 or 427 BC, is based on a dubious interpretation of Diogenes Laërtius, who says, "When [Socrates] was gone, [Plato] joined Cratylus the Heracleitean and Hermogenes, who philosophized in the manner of Parmenides. Then, at twenty-eight, Hermodorus says, [Plato] went to Euclides in Megara." However, as Debra Nails argues, the text does not state that Plato left for Megara immediately after joining Cratylus and Hermogenes. In his "Seventh Letter", Plato notes that his coming of age coincided with the taking of power by the Thirty, remarking, "But a youth under the age of twenty made himself a laughingstock if he attempted to enter the political arena." Thus, Nails dates Plato's birth to 424/423.
According to Neanthes, Plato was six years younger than Isocrates, and therefore was born the same year the prominent Athenian statesman Pericles died (429 BC). Jonathan Barnes regards 428 BC as the year of Plato's birth. The grammarian Apollodorus of Athens in his "Chronicles" argues that Plato was born in the 88th Olympiad. Both the "Suda" and Sir Thomas Browne also claimed he was born during the 88th Olympiad. Another legend related that, when Plato was an infant, bees settled on his lips while he was sleeping: an augury of the sweetness of style in which he would discourse about philosophy.
Besides Plato himself, Ariston and Perictione had three other children; two sons, Adeimantus and Glaucon, and a daughter Potone, the mother of Speusippus (the nephew and successor of Plato as head of the Academy). The brothers Adeimantus and Glaucon are mentioned in the "Republic" as sons of Ariston, and presumably brothers of Plato, though some have argued they were uncles. In a scenario in the "Memorabilia", Xenophon confused the issue by presenting a Glaucon much younger than Plato.
Ariston appears to have died in Plato's childhood, although the precise dating of his death is difficult. Perictione then married Pyrilampes, her mother's brother, who had served many times as an ambassador to the Persian court and was a friend of Pericles, the leader of the democratic faction in Athens. Pyrilampes had a son from a previous marriage, Demus, who was famous for his beauty. Perictione gave birth to Pyrilampes' second son, Antiphon, the half-brother of Plato, who appears in "Parmenides".
In contrast to his reticence about himself, Plato often introduced his distinguished relatives into his dialogues, or referred to them with some precision. In addition to Adeimantus and Glaucon in the "Republic", Charmides has a dialogue named after him; and Critias speaks in both "Charmides" and "Protagoras". These and other references suggest a considerable amount of family pride and enable us to reconstruct Plato's family tree. According to Burnet, "the opening scene of the "Charmides" is a glorification of the whole [family] connection ... Plato's dialogues are not only a memorial to Socrates, but also the happier days of his own family."
The fact that the philosopher in his maturity called himself "Platon" is indisputable, but the origin of this name remains mysterious. "Platon" is a nickname from the adjective "platýs", meaning 'broad'. Although "Platon" was a fairly common name (31 instances are known from Athens alone), the name does not occur in Plato's known family line. The sources of Diogenes Laërtius account for this by claiming that his wrestling coach, Ariston of Argos, dubbed him "broad" on account of his chest and shoulders, or that Plato derived his name from the breadth of his eloquence, or his wide forehead. While recalling a moral lesson about frugal living, Seneca mentions the meaning of Plato's name: "His very name was given him because of his broad chest."
His true name was supposedly Aristocles (), meaning 'best reputation'. According to Diogenes Laërtius, he was named after his grandfather, as was common in Athenian society. But there is only one inscription of an Aristocles, an early archon of Athens in 605/4 BC. There is no record of a line from Aristocles to Plato's father, Ariston. Recently a scholar has argued that even the name Aristocles for Plato was a much later invention. However, another scholar claims that "there is good reason for not dismissing [the idea that Aristocles was Plato's given name] as a mere invention of his biographers", noting how prevalent that account is in our sources.
Ancient sources describe him as a bright though modest boy who excelled in his studies. Apuleius informs us that Speusippus praised Plato's quickness of mind and modesty as a boy, and the "first fruits of his youth infused with hard work and love of study". His father contributed all that was necessary to give his son a good education, and, therefore, Plato must have been instructed in grammar, music, and gymnastics by the most distinguished teachers of his time. Plato invokes Damon many times in the "Republic". Plato was a wrestler, and Dicaearchus went so far as to say that Plato wrestled at the Isthmian games. Plato had also attended courses of philosophy; before meeting Socrates, he first became acquainted with Cratylus and the Heraclitean doctrines.
Ambrose believed that Plato met Jeremiah in Egypt and was influenced by his ideas. Augustine initially accepted this claim, but later rejected it, arguing in "The City of God" that "Plato was born a hundred years after Jeremiah prophesied."
Plato may have travelled in Italy, Sicily, Egypt and Cyrene. Said to have returned to Athens at the age of forty, Plato founded one of the earliest known organized schools in Western Civilization on a plot of land in the Grove of Hecademus or Academus. The Academy was a large enclosure of ground about six stadia outside of Athens proper. One story is that the name of the Academy comes from the ancient hero, Academus; still another story is that the name came from a supposed former owner of the plot of land, an Athenian citizen whose name was (also) Academus; while yet another account is that it was named after a member of the army of Castor and Pollux, an Arcadian named Echedemus. The Academy operated until it was destroyed by Lucius Cornelius Sulla in 84 BC. Many intellectuals were schooled in the Academy, the most prominent one being Aristotle.
Throughout his later life, Plato became entangled with the politics of the city of Syracuse. According to Diogenes Laërtius, Plato initially visited Syracuse while it was under the rule of Dionysius. During this first trip Dionysius's brother-in-law, Dion of Syracuse, became one of Plato's disciples, but the tyrant himself turned against Plato. Plato almost faced death, but was instead sold into slavery. Anniceris then bought Plato's freedom for twenty minas and sent him home. After Dionysius's death, according to Plato's "Seventh Letter", Dion requested that Plato return to Syracuse to tutor Dionysius II and guide him to become a philosopher king. Dionysius II seemed to accept Plato's teachings, but he became suspicious of Dion, his uncle. Dionysius expelled Dion and kept Plato against his will. Eventually Plato left Syracuse. Dion returned to overthrow Dionysius and ruled Syracuse for a short time before being usurped by Calippus, a fellow disciple of Plato.
According to Seneca, Plato died at the age of 81 on the same day he was born. The Suda indicates that he lived to 82 years, while Neanthes claims an age of 84. A variety of sources have given accounts of his death. One story, based on a mutilated manuscript, suggests Plato died in his bed, whilst a young Thracian girl played the flute to him. Another tradition suggests Plato died at a wedding feast. The account is based on Diogenes Laërtius's reference to an account by Hermippus, a third-century Alexandrian. According to Tertullian, Plato simply died in his sleep.
Plato owned an estate at Iphistiadae, which by will he left to a certain youth named Adeimantus, presumably a younger relative, as Plato had an elder brother or uncle by this name.
Although Socrates influenced Plato directly as related in the dialogues, the influence of Pythagoras upon Plato, or in a broader sense, the Pythagoreans, such as Archytas also appears to have been significant. Aristotle claimed that the philosophy of Plato closely followed the teachings of the Pythagoreans, and Cicero repeats this claim: "They say Plato learned all things Pythagorean." It is probable that both were influenced by Orphism, and both believed in metempsychosis, transmigration of the soul.
Pythagoras held that all things are number, and the cosmos comes from numerical principles. He introduced the concept of form as distinct from matter, and that the physical world is an imitation of an eternal mathematical world. These ideas were very influential on Heraclitus, Parmenides and Plato.
George Karamanolis notes that Numenius accepted both Pythagoras and Plato as the two authorities one should follow in philosophy, but he regarded Plato's authority as subordinate to that of Pythagoras, whom he considered to be the source of all true philosophy—including Plato's own. For Numenius it is just that Plato wrote so many philosophical works, whereas Pythagoras' views were originally passed on only orally.
According to R. M. Hare, this influence consists of three points:
Plato may have studied under the mathematician Theodorus of Cyrene, and has a dialogue named for the mathematician Theaetetus, who is its central character. While not a mathematician himself, Plato was considered an accomplished teacher of mathematics. Eudoxus of Cnidus, the greatest mathematician in Classical Greece, who contributed much of what is found in Euclid's "Elements", was taught by Archytas and Plato. Plato helped to distinguish between pure and applied mathematics by widening the gap between "arithmetic", now called number theory, and "logistic", now called arithmetic.
In the dialogue "Timaeus" Plato associated each of the four classical elements (earth, air, water, and fire) with a regular solid (cube, octahedron, icosahedron, and tetrahedron respectively) due to their shape, the so-called Platonic solids. The fifth regular solid, the dodecahedron, was supposed to be the element which made up the heavens.
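The five solids named above can be checked against Euler's polyhedron formula, V − E + F = 2, which holds for every convex polyhedron; the element pairing in the comments follows the "Timaeus" as described above.

```python
# (vertices, edges, faces) counts for the five Platonic solids, with the
# element each is paired with in the Timaeus. Euler's polyhedron formula
# V - E + F = 2 is verified for each.

solids = {
    "tetrahedron":  (4, 6, 4),     # fire
    "cube":         (8, 12, 6),    # earth
    "octahedron":   (6, 12, 8),    # air
    "icosahedron":  (12, 30, 20),  # water
    "dodecahedron": (20, 30, 12),  # the heavens
}

for name, (v, e, f) in solids.items():
    assert v - e + f == 2, name
print("all five satisfy V - E + F = 2")
```

That exactly five such solids exist is a classical result; the regularity constraint (identical regular faces, identical vertices) is what limits the list to these five.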
The two philosophers Heraclitus and Parmenides, following the way initiated by pre-Socratic Greek philosophers like Pythagoras, depart from mythology and begin the metaphysical tradition that strongly influenced Plato and continues today.
The surviving fragments written by Heraclitus suggest the view that all things are continuously changing, or becoming. His image of the river, with ever-changing waters, is well known. According to some ancient traditions like that of Diogenes Laërtius, Plato received these ideas through Heraclitus' disciple Cratylus, who held the more radical view that continuous change warrants scepticism because we cannot define a thing that does not have a permanent nature.
Parmenides adopted an altogether contrary vision, arguing for the idea of changeless Being and the view that change is an illusion. John Palmer notes "Parmenides' distinction among the principal modes of being and his derivation of the attributes that must belong to what must be, simply as such, qualify him to be seen as the founder of metaphysics or ontology as a domain of inquiry distinct from theology."
These ideas about change and permanence, or becoming and Being, influenced Plato in formulating his theory of Forms.
Plato's most self-critical dialogue is called "Parmenides", featuring Parmenides and his student Zeno, who following Parmenides' denial of change argued forcefully with his paradoxes to deny the existence of motion.
Plato's "Sophist" dialogue includes an Eleatic stranger, a follower of Parmenides, as a foil for his arguments against Parmenides. In the dialogue Plato distinguishes nouns and verbs, providing some of the earliest treatment of subject and predicate. He also argues that motion and rest both "are", against followers of Parmenides who say rest is but motion is not.
Plato was one of the devoted young followers of Socrates. The precise relationship between Plato and Socrates remains an area of contention among scholars.
Plato never speaks in his own voice in his dialogues, and speaks as Socrates in all but the "Laws". The "Second Letter" says, "no writing of Plato exists or ever will exist, but those now said to be his are those of a Socrates become beautiful and new"; if the Letter is Plato's, the final qualification seems to call into question the dialogues' historical fidelity. In any case, Xenophon's "Memorabilia" and Aristophanes's "The Clouds" seem to present a somewhat different portrait of Socrates from the one Plato paints. Some have called attention to the problem of taking Plato's Socrates to be his mouthpiece, given Socrates' reputation for irony and the dramatic nature of the dialogue form.
Aristotle attributes a different doctrine of the Forms to Plato than to Socrates. Aristotle suggests that Socrates' idea of forms can be discovered through investigation of the natural world, unlike Plato's Forms, which exist beyond and outside the ordinary range of human understanding. In Plato's dialogues, though, Socrates sometimes seems to support a mystical side, discussing reincarnation and the mystery religions; this is generally attributed to Plato. Regardless, this view of Socrates cannot be dismissed out of hand, as we cannot be sure of the differences between the views of Plato and Socrates. In the "Meno" Plato refers to the Eleusinian Mysteries, telling Meno he would understand Socrates's answers better if he could stay for the initiations next week. It is possible that Plato and Socrates took part in the Eleusinian Mysteries.
In Plato's dialogues, Socrates and his company of disputants had something to say on many subjects, including several aspects of metaphysics. These include religion and science, human nature, love, and sexuality. More than one dialogue contrasts perception and reality, nature and custom, and body and soul.
"Platonism" and its theory of Forms (or theory of Ideas) denies the reality of the material world, considering it only an image or copy of the real world. The theory of Forms is first introduced in the "Phaedo" dialogue (also known as "On the Soul"), wherein Socrates refutes the pluralism of the likes of Anaxagoras, then the most popular response to Heraclitus and Parmenides, while giving the "Opposites Argument" in support of the Forms.
According to this theory of Forms, there are at least two worlds: the apparent world of concrete objects, grasped by the senses, which constantly changes; and an unchanging and unseen world of Forms or abstract objects, grasped by pure reason, which ground what is apparent.
It can also be said there are three worlds, with the apparent world consisting of both the world of material objects and of mental images, with the "third realm" consisting of the Forms. Thus, though there is the term "Platonic idealism", this refers to Platonic Ideas or the Forms, and not to some platonic kind of idealism, an 18th-century view which sees matter as unreal in favour of mind. For Plato, though grasped by the mind, only the Forms are truly real.
Plato's Forms thus represent types of things, as well as properties, patterns, and relations, which we refer to as objects. Just as individual tables, chairs, and cars refer to objects in this world, 'tableness', 'chairness', and 'carness', as well as, e.g., justice, truth, and beauty, refer to objects in another world. One of Plato's most cited examples for the Forms was the truths of geometry, such as the Pythagorean theorem.
In other words, the Forms are universals given as a solution to the problem of universals, or the problem of "the One and the Many", e.g. how one predicate "red" can apply to many red objects. For Plato this is because there is one abstract object or Form of red, redness itself, in which the several red things "participate". As Plato's solution is that universals are Forms and that Forms are real if anything is, Plato's philosophy is unambiguously called Platonic realism. According to Aristotle, Plato's best known argument in support of the Forms was the "one over many" argument.
Aside from being immutable, timeless, changeless, and one over many, the Forms also provide definitions and the standard against which all instances are measured. In the dialogues Socrates regularly asks for the meaning – in the sense of intensional definitions – of a general term (e.g. justice, truth, beauty), and criticizes those who instead give him particular, extensional examples, rather than the quality shared by all examples.
There is thus a world of perfect, eternal, and changeless meanings of predicates, the Forms, existing in the realm of Being outside of space and time; and the imperfect sensible world of becoming, somehow in a state between being and nothing, which partakes of the qualities of the Forms and is their instantiation.
Plato advocates a belief in the immortality of the soul, and several dialogues end with long speeches imagining the afterlife. In the "Timaeus", Socrates locates the parts of the soul within the human body: Reason is located in the head, spirit in the top third of the torso, and the appetite in the middle third of the torso, down to the navel.
Several aspects of epistemology are also discussed by Socrates, such as wisdom. More than one dialogue contrasts knowledge and opinion. Plato's epistemology involves Socrates arguing that knowledge is not empirical, and that it comes from divine insight. The Forms are also responsible for both knowledge and certainty, and are grasped by pure reason.
In several dialogues, Socrates inverts the common man's intuition about what is knowable and what is real. Reality is unavailable to those who use their senses. Socrates says that he who sees with his eyes is blind. While most people take the objects of their senses to be real if anything is, Socrates is contemptuous of people who think that something has to be graspable in the hands to be real. In the "Theaetetus", he says such people are "eu amousoi" (εὖ ἄμουσοι), an expression that means literally, "happily without the muses". In other words, such people are willingly ignorant, living without divine inspiration and access to higher insights about reality.
In Plato's dialogues, Socrates always insists on his ignorance and humility, that he knows nothing, so-called Socratic irony. Several dialogues refute a series of viewpoints, but offer no positive position of their own, ending in aporia.
In several of Plato's dialogues, Socrates promulgates the idea that knowledge is a matter of recollection of the state before one is born, and not of observation or study. Keeping with the theme of admitting his own ignorance, Socrates regularly complains of his forgetfulness. In the "Meno", Socrates uses a geometrical example to expound Plato's view that knowledge in this latter sense is acquired by recollection. Socrates elicits a fact concerning a geometrical construction from a slave boy, who could not have otherwise known the fact (due to the slave boy's lack of education). The knowledge must be present, Socrates concludes, in an eternal, non-experiential form.
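The geometrical fact Socrates elicits from the slave boy is the doubling of the square ("Meno" 82b–85b): the boy first guesses that doubling the side doubles the area, and is led instead to the square built on the diagonal. In modern notation (not, of course, Plato's own), the reasoning runs:

```latex
% A square of side $s$ has area
A = s^2 .
% The boy's first guess: doubling the side. But
(2s)^2 = 4s^2 ,
% which is four times the area, not double.
% The square on the diagonal $d$ succeeds: by the Pythagorean theorem,
d^2 = s^2 + s^2 = 2s^2 ,
% so the square whose side is the diagonal has exactly twice the area.
```

This is the "fact concerning a geometrical construction" that, on Socrates' account, the uneducated boy could only have recollected rather than learned.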
In other dialogues, the "Sophist", "Statesman", "Republic", and the "Parmenides", Plato himself associates knowledge with the apprehension of unchanging Forms and their relationships to one another (which he calls "expertise" in Dialectic), including through the processes of "collection" and "division". More explicitly, Plato himself argues in the "Timaeus" that knowledge is always proportionate to the realm from which it is gained. In other words, if one derives one's account of something experientially, because the world of sense is in flux, the views therein attained will be mere opinions. And opinions are characterized by a lack of necessity and stability. On the other hand, if one derives one's account of something by way of the non-sensible forms, because these forms are unchanging, so too is the account derived from them. That apprehension of forms is required for knowledge may be taken to cohere with Plato's theory in the "Theaetetus" and "Meno". Indeed, the apprehension of Forms may be at the base of the "account" required for justification, in that it offers foundational knowledge which itself needs no account, thereby avoiding an infinite regression.
Many have interpreted Plato as stating—even having been the first to write—that knowledge is justified true belief, an influential view that informed future developments in epistemology. This interpretation is partly based on a reading of the "Theaetetus" wherein Plato argues that knowledge is distinguished from mere true belief by the knower having an "account" of the object of her or his true belief. And this theory may again be seen in the "Meno", where it is suggested that true belief can be raised to the level of knowledge if it is bound with an account as to the question of "why" the object of the true belief is so.
Many years later, Edmund Gettier famously demonstrated the problems of the justified true belief account of knowledge. That the modern theory of justified true belief as knowledge which Gettier addresses is equivalent to Plato's is accepted by some scholars but rejected by others. Plato himself also identified problems with the "justified true belief" definition in the "Theaetetus", concluding that justification (or an "account") would require knowledge of "difference", meaning that the definition of knowledge is circular.
Several dialogues discuss ethics including virtue and vice, pleasure and pain, crime and punishment, and justice and medicine. Plato views "The Good" as the supreme Form, somehow existing even "beyond being".
Socrates propounded a moral intellectualism which claimed that nobody does wrong on purpose, and that to know what is good results in doing what is good; that knowledge is virtue. In the "Protagoras" dialogue it is argued that virtue is innate and cannot be learned.
Socrates presents the famous Euthyphro dilemma in the dialogue of the same name.
The dialogues also discuss politics. Some of Plato's most famous doctrines are contained in the "Republic" as well as in the "Laws" and the "Statesman". Because these doctrines are not spoken directly by Plato and vary between dialogues, they cannot be straightforwardly assumed as representing Plato's own views.
Socrates asserts that societies have a tripartite class structure corresponding to the appetite/spirit/reason structure of the individual soul: the castes of society are analogous to these three parts.
According to this model, the principles of Athenian democracy (as it existed in his day) are rejected, as only a few are fit to rule. Instead of rhetoric and persuasion, Socrates says reason and wisdom should govern.
Socrates describes these "philosopher kings" as "those who love the sight of truth" and supports the idea with the analogy of a captain and his ship or a doctor and his medicine. According to him, sailing and health are not things that everyone is qualified to practice by nature. A large part of the "Republic" then addresses how the educational system should be set up to produce these philosopher kings.
In addition, the ideal city is used as an image to illuminate the state of one's soul, or the will, reason, and desires combined in the human body. Socrates is attempting to make an image of a rightly ordered human, and then later goes on to describe the different kinds of humans that can be observed, from tyrants to lovers of money in various kinds of cities. The ideal city is not promoted, but only used to magnify the different kinds of individual humans and the state of their soul. However, the philosopher king image was used by many after Plato to justify their personal political beliefs. The philosophic soul according to Socrates has reason, will, and desires united in virtuous harmony. A philosopher has the moderate love for wisdom and the courage to act according to wisdom. Wisdom is knowledge about the Good or the right relations between all that exists.
Where it concerns states and rulers, Socrates asks which is better: a bad democracy or a country ruled by a tyrant. He argues that it is better to be ruled by a bad tyrant than by a bad democracy (since here all the people are responsible for such actions, rather than one individual committing many bad deeds). This is emphasised within the "Republic" as Socrates describes the event of mutiny on board a ship. Socrates likens the ship's crew to the democratic rule of many, and the captain, although inhibited through ailments, to the tyrant. Socrates' description of this event parallels that of democracy within the state and the inherent problems that arise.
According to Socrates, a state made up of different kinds of souls will, overall, decline from an aristocracy (rule by the best) to a timocracy (rule by the honourable), then to an oligarchy (rule by the few), then to a democracy (rule by the people), and finally to tyranny (rule by one person, rule by a tyrant). Aristocracy in the sense of government (politeia) is advocated in Plato's Republic. This regime is ruled by a philosopher king, and thus is grounded on wisdom and reason.
The aristocratic state, and the man whose nature corresponds to it, are the objects of Plato's analyses throughout much of the "Republic", as opposed to the other four types of states/men, who are discussed later in his work. In Book VIII, Socrates states in order the other four imperfect societies with a description of the state's structure and individual character. In timocracy the ruling class is made up primarily of those with a warrior-like character. Oligarchy is made up of a society in which wealth is the criterion of merit and the wealthy are in control. In democracy, the state bears resemblance to ancient Athens with traits such as equality of political opportunity and freedom for the individual to do as he likes. Democracy then degenerates into tyranny from the conflict of rich and poor. It is characterized by an undisciplined society existing in chaos, where the tyrant rises as popular champion leading to the formation of his private army and the growth of oppression.
Several dialogues tackle questions about art, including rhetoric and rhapsody. Socrates says that poetry is inspired by the muses, and is not rational. He speaks approvingly of this, and other forms of divine madness (drunkenness, eroticism, and dreaming) in the "Phaedrus", and yet in the "Republic" wants to outlaw Homer's great poetry, and laughter as well. In "Ion", Socrates gives no hint of the disapproval of Homer that he expresses in the "Republic". The dialogue "Ion" suggests that Homer's "Iliad" functioned in the ancient Greek world as the Bible does today in the modern Christian world: as divinely inspired literature that can provide moral guidance, if only it can be properly interpreted.
For a long time, Plato's unwritten doctrines have been controversial. Many modern books on Plato seem to diminish their importance; nevertheless, the first important witness who mentions their existence is Aristotle, who in his "Physics" writes: "It is true, indeed, that the account he gives there [i.e. in "Timaeus"] of the participant is different from what he says in his so-called "unwritten teachings"." This term, which literally means "unwritten doctrines", stands for the most fundamental metaphysical teaching of Plato, which he disclosed only orally, and some say only to his most trusted fellows, and which he may have kept secret from the public. The importance of the unwritten doctrines does not seem to have been seriously questioned before the 19th century.
A reason for not revealing them to everyone is partially discussed in the "Phaedrus", where Plato criticizes the written transmission of knowledge as faulty, favouring instead the spoken logos: "he who has knowledge of the just and the good and beautiful ... will not, when in earnest, write them in ink, sowing them through a pen with words, which cannot defend themselves by argument and cannot teach the truth effectually." The same argument is repeated in Plato's "Seventh Letter": "every serious man in dealing with really serious subjects carefully avoids writing." In the same letter he writes: "I can certainly declare concerning all these writers who claim to know the subjects that I seriously study ... there does not exist, nor will there ever exist, any treatise of mine dealing therewith." Such secrecy is necessary in order not "to expose them to unseemly and degrading treatment".
It is, however, said that Plato once disclosed this knowledge to the public in his lecture "On the Good", in which the Good is identified with the One (the Unity), the fundamental ontological principle. The content of this lecture has been transmitted by several witnesses. Aristoxenus describes the event in the following words: "Each came expecting to learn something about the things that are generally considered good for men, such as wealth, good health, physical strength, and altogether a kind of wonderful happiness. But when the mathematical demonstrations came, including numbers, geometrical figures and astronomy, and finally the statement that the Good is One, it seemed to them, I imagine, utterly unexpected and strange; hence some belittled the matter, while others rejected it." Simplicius quotes Alexander of Aphrodisias, who states that "according to Plato, the first principles of everything, including the Forms themselves, are the One and the Indefinite Duality, which he called the Large and the Small", and Simplicius reports as well that "one might also learn this from Speusippus and Xenocrates and the others who were present at Plato's lecture on the Good".
Their account is in full agreement with Aristotle's description of Plato's metaphysical doctrine. In "Metaphysics" he writes: "Now since the Forms are the causes of everything else, he [i.e. Plato] supposed that their elements are the elements of all things. Accordingly the material principle is the Great and Small [i.e. the Dyad], and the essence is the One, since the numbers are derived from the Great and Small by participation in the One". "From this account it is clear that he only employed two causes: that of the essence, and the material cause; for the Forms are the cause of the essence in everything else, and the One is the cause of it in the Forms. He also tells us what the material substrate is of which the Forms are predicated in the case of sensible things, and the One in that of the Forms—that it is this duality (the Dyad), the Great and Small. Further, he assigned to these two elements respectively the causation of good and of evil".
The most important aspect of this interpretation of Plato's metaphysics is the continuity between his teaching and the Neoplatonic interpretation of Plotinus or Ficino, which many have considered erroneous but which may in fact have been directly influenced by oral transmission of Plato's doctrine. A modern scholar who recognized the importance of the unwritten doctrine of Plato was Heinrich Gomperz, who described it in his speech during the 7th International Congress of Philosophy in 1930. All the sources related to the unwritten doctrines have been collected by Konrad Gaiser and published as "Testimonia Platonica". These sources have subsequently been interpreted by scholars from the German "Tübingen School of interpretation" such as Hans Joachim Krämer or Thomas A. Szlezák.
The trial of Socrates and his death sentence is the central, unifying event of Plato's dialogues. It is relayed in the dialogues "Apology", "Crito", and "Phaedo". "Apology" is Socrates' defence speech, and "Crito" and "Phaedo" take place in prison after the conviction.
"Apology" is among the most frequently read of Plato's works. In the "Apology", Socrates tries to dismiss rumours that he is a sophist and defends himself against charges of disbelief in the gods and corruption of the young. Socrates insists that long-standing slander will be the real cause of his demise, and says the legal charges are essentially false. Socrates famously denies being wise, and explains how his life as a philosopher was launched by the Oracle at Delphi. He says that his quest to resolve the riddle of the oracle put him at odds with his fellow man, and that this is the reason he has been mistaken for a menace to the city-state of Athens.
In "Apology", Socrates is presented as mentioning Plato by name as one of those youths close enough to him to have been corrupted, if he were in fact guilty of corrupting the youth, and questioning why their fathers and brothers did not step forward to testify against him if he was indeed guilty of such a crime. Later, Plato is mentioned along with Crito, Critobolus, and Apollodorus as offering to pay a fine of 30 minas on Socrates' behalf, in lieu of the death penalty proposed by Meletus. In the "Phaedo", the title character lists those who were in attendance at the prison on Socrates' last day, explaining Plato's absence by saying, "Plato was ill".
Even when Plato's important dialogues do not refer to Socrates' execution explicitly, they allude to it, or use characters or themes that play a part in it. Five dialogues foreshadow the trial: In the "Theaetetus" and the "Euthyphro" Socrates tells people that he is about to face corruption charges. In the "Meno", one of the men who brings legal charges against Socrates, Anytus, warns him about the trouble he may get into if he does not stop criticizing important people. In the "Gorgias", Socrates says that his trial will be like a doctor prosecuted by a cook who asks a jury of children to choose between the doctor's bitter medicine and the cook's tasty treats. In the "Republic", Socrates explains why an enlightened man (presumably himself) will stumble in a courtroom situation. Plato's support of aristocracy and distrust of democracy is also taken to be partly rooted in a democracy having killed Socrates. In the "Protagoras", Socrates is a guest at the home of Callias, son of Hipponicus, a man whom Socrates disparages in the "Apology" as having wasted a great amount of money on sophists' fees.
Two other important dialogues, the "Symposium" and the "Phaedrus", are linked to the main storyline by characters. In the "Apology", Socrates says Aristophanes slandered him in a comic play, and blames him for causing his bad reputation, and ultimately, his death. In the "Symposium", the two of them are drinking together with other friends. The character Phaedrus is linked to the main story line by character (Phaedrus is also a participant in the "Symposium" and the "Protagoras") and by theme (the philosopher as divine emissary, etc.) The "Protagoras" is also strongly linked to the "Symposium" by characters: all of the formal speakers at the "Symposium" (with the exception of Aristophanes) are present at the home of Callias in that dialogue. Charmides and his guardian Critias are present for the discussion in the "Protagoras". Examples of characters crossing between dialogues can be further multiplied. The "Protagoras" contains the largest gathering of Socratic associates.
In the dialogues Plato is most celebrated and admired for, Socrates is concerned with human and political virtue, has a distinctive personality, and friends and enemies who "travel" with him from dialogue to dialogue. This is not to say that Socrates is consistent: a man who is his friend in one dialogue may be an adversary or subject of his mockery in another. For example, Socrates praises the wisdom of Euthyphro many times in the "Cratylus", but makes him look like a fool in the "Euthyphro". He disparages sophists generally, and Prodicus specifically in the "Apology", whom he also slyly jabs in the "Cratylus" for charging the hefty fee of fifty drachmas for a course on language and grammar. However, Socrates tells Theaetetus in his namesake dialogue that he admires Prodicus and has directed many pupils to him. Socrates' ideas are also not consistent within or across dialogues.
"Mythos" and "logos" are terms that evolved along classical Greek history. In the times of Homer and Hesiod (8th century BC) they were essentially synonyms, and carried the meaning of 'tale' or 'history'. Later came historians like Herodotus and Thucydides, as well as philosophers like Heraclitus and Parmenides and other Presocratics, who introduced a distinction between the two terms; mythos became more of a "nonverifiable account", and logos a "rational account". It may seem that Plato, being a disciple of Socrates and a strong partisan of philosophy based on "logos", should have avoided the use of myth-telling. Instead he made abundant use of it. This fact has prompted analytical and interpretative work seeking to clarify the reasons and purposes for that use.
Plato, in general, distinguished between three types of myth. First there were the false myths, like those based on stories of gods subject to passions and sufferings, because reason teaches that God is perfect. Then came the myths based on true reasoning, and therefore also true. Finally there were those that are non-verifiable because they lie beyond human reason, but contain some truth. Regarding their subjects, Plato's myths are of two types: those dealing with the origin of the universe, and those about morals and the origin and fate of the soul.
It is generally agreed that Plato's main purpose in using myths was didactic. He considered that only a few people were capable of, or interested in, following a reasoned philosophical discourse, but that men in general are attracted by stories and tales. Consequently, he used myth to convey the conclusions of philosophical reasoning. Some of Plato's myths were based on traditional ones, others were modifications of them, and finally he also invented altogether new myths. Notable examples include the story of Atlantis, the Myth of Er, and the Allegory of the Cave.
The theory of Forms is most famously captured in his Allegory of the Cave, and more explicitly in his analogy of the sun and the divided line. The Allegory of the Cave is a paradoxical analogy wherein Socrates argues that the invisible world is the most intelligible ('noeton') and that the visible world ("(h)oraton") is the least knowable, and the most obscure.
Socrates says in the "Republic" that people who take the sun-lit world of the senses to be good and real are living pitifully in a den of evil and ignorance. Socrates admits that few climb out of the den, or cave of ignorance, and those who do, not only have a terrible struggle to attain the heights, but when they go back down for a visit or to help other people up, they find themselves objects of scorn and ridicule.
According to Socrates, physical objects and physical events are "shadows" of their ideal or perfect forms, and exist only to the extent that they instantiate the perfect versions of themselves. Just as shadows are temporary, inconsequential epiphenomena produced by physical objects, physical objects are themselves fleeting phenomena caused by more substantial causes, the ideals of which they are mere instances. For example, Socrates thinks that perfect justice exists (although it is not clear where) and his own trial would be a cheap copy of it.
The Allegory of the Cave is intimately connected to his political ideology, that only people who have climbed out of the cave and cast their eyes on a vision of goodness are fit to rule. Socrates claims that the enlightened men of society must be forced from their divine contemplation and be compelled to run the city according to their lofty insights. Thus is born the idea of the "philosopher-king", the wise person who accepts the power thrust upon him by the people who are wise enough to choose a good master. This is the main thesis of Socrates in the "Republic", that the most wisdom the masses can muster is the wise choice of a ruler.
A ring which could make one invisible, the Ring of Gyges is considered in the "Republic" for its ethical consequences.
He also compares the soul (psyche) to a chariot. In this allegory he introduces a tripartite soul composed of a charioteer and two horses. The charioteer is a symbol of the intellectual and logical part of the soul (logistikon), while the two horses represent the moral virtues (thymoeides) and the passionate instincts (epithymetikon), respectively.
Socrates employs a dialectic method which proceeds by questioning. The role of dialectic in Plato's thought is contested but there are two main interpretations: a type of reasoning and a method of intuition. Simon Blackburn adopts the first, saying that Plato's dialectic is "the process of eliciting the truth by means of questions aimed at opening out what is already implicitly known, or at exposing the contradictions and muddles of an opponent's position." A similar interpretation has been put forth by Louis Hartz, who suggests that elements of the dialectic are borrowed from Hegel. According to this view, opposing arguments improve upon each other, and prevailing opinion is shaped by the synthesis of many conflicting ideas over time. Each new idea exposes a flaw in the accepted model, and the epistemological substance of the debate continually approaches the truth. Hartz's is a teleological interpretation at the core, in which philosophers will ultimately exhaust the available body of knowledge and thus reach "the end of history." Karl Popper, on the other hand, claims that dialectic is the art of intuition for "visualising the divine originals, the Forms or Ideas, of unveiling the Great Mystery behind the common man's everyday world of appearances."
Plato often discusses the father-son relationship and the question of whether a father's interest in his sons has much to do with how well his sons turn out. In ancient Athens, a boy was socially located by his family identity, and Plato often refers to his characters in terms of their paternal and fraternal relationships. Socrates was not a family man, and saw himself as the son of his mother, who was apparently a midwife. A divine fatalist, Socrates mocks men who spent exorbitant fees on tutors and trainers for their sons, and repeatedly ventures the idea that good character is a gift from the gods. Plato's dialogue "Crito" reminds Socrates that orphans are at the mercy of chance, but Socrates is unconcerned. In the "Theaetetus", he is found recruiting as a disciple a young man whose inheritance has been squandered. Socrates twice compares the relationship of the older man and his boy lover to the father-son relationship, and in the "Phaedo", Socrates' disciples, towards whom he displays more concern than his biological sons, say they will feel "fatherless" when he is gone.
Though Plato agreed with Aristotle that women were inferior to men, he thought that because of this women needed an education. Plato thought that weak men who lived poor lives would be reincarnated as women: "Humans have a twofold nature, the superior kind should be such as would from then on be called 'man'."
Plato never presents himself as a participant in any of the dialogues, and with the exception of the "Apology", there is no suggestion that he heard any of the dialogues firsthand. Some dialogues have no narrator but have a pure "dramatic" form (examples: "Meno", "Gorgias", "Phaedrus", "Crito", "Euthyphro"), some dialogues are narrated by Socrates, wherein he speaks in first person (examples: "Lysis", "Charmides", "Republic"). One dialogue, "Protagoras", begins in dramatic form but quickly proceeds to Socrates' narration of a conversation he had previously with the sophist for whom the dialogue is named; this narration continues uninterrupted till the dialogue's end.
Two dialogues "Phaedo" and "Symposium" also begin in dramatic form but then proceed to virtually uninterrupted narration by followers of Socrates. "Phaedo", an account of Socrates' final conversation and hemlock drinking, is narrated by Phaedo to Echecrates in a foreign city not long after the execution took place. The "Symposium" is narrated by Apollodorus, a Socratic disciple, apparently to Glaucon. Apollodorus assures his listener that he is recounting the story, which took place when he himself was an infant, not from his own memory, but as remembered by Aristodemus, who told him the story years ago.
The "Theaetetus" is a peculiar case: a dialogue in dramatic form embedded within another dialogue in dramatic form. In the beginning of the "Theaetetus", Euclides says that he compiled the conversation from notes he took based on what Socrates told him of his conversation with the title character. The rest of the "Theaetetus" is presented as a "book" written in dramatic form and read by one of Euclides' slaves. Some scholars take this as an indication that Plato had by this date wearied of the narrated form. With the exception of the "Theaetetus", Plato gives no explicit indication as to how these orally transmitted conversations came to be written down.
Thirty-five dialogues and thirteen letters (the "Epistles") have traditionally been ascribed to Plato, though modern scholarship doubts the authenticity of at least some of these. Plato's writings have been published in several fashions; this has led to several conventions regarding the naming and referencing of Plato's texts.
The usual system for making unique references to sections of the text by Plato derives from a 16th-century edition of Plato's works by Henricus Stephanus known as Stephanus pagination.
One tradition regarding the arrangement of Plato's texts is according to tetralogies. This scheme is ascribed by Diogenes Laërtius to an ancient scholar and court astrologer to Tiberius named Thrasyllus.
No one knows the exact order in which Plato's dialogues were written, nor the extent to which some might have been later revised and rewritten. The works are usually grouped into "Early" (sometimes with a "Transitional" subgroup), "Middle", and "Late" periods. Some (Cooper "et al") criticize this chronological grouping, since there is no absolute agreement as to the true chronology and the temporal order of writing cannot be confidently ascertained. Chronology was not a consideration in ancient times: groupings of this nature are "virtually absent" (Tarrant) in the extant writings of ancient Platonists.
Whereas those classified as "early dialogues" often conclude in aporia, the so-called "middle dialogues" provide more clearly stated positive teachings that are often ascribed to Plato such as the theory of Forms. The remaining dialogues are classified as "late" and are generally agreed to be difficult and challenging pieces of philosophy. This grouping is the only one proven by stylometric analysis. Among those who classify the dialogues into periods of composition, Socrates figures in all of the "early dialogues" and they are considered the most faithful representations of the historical Socrates.
The following represents one relatively common division. It should, however, be kept in mind that many of the positions in the ordering are still highly disputed, and also that the very notion that Plato's dialogues can or should be "ordered" is by no means universally accepted. Increasingly in the most recent Plato scholarship, writers are sceptical of the notion that the order of Plato's writings can be established with any precision, though Plato's works are still often characterized as falling at least roughly into three groups.
Early: "Apology", "Charmides", "Crito", "Euthyphro", "Gorgias", "(Lesser) Hippias (minor)", "(Greater) Hippias (major)", "Ion", "Laches", "Lysis", "Protagoras"
Middle: "Cratylus", "Euthydemus", "Meno", "Parmenides", "Phaedo", "Phaedrus", "Republic", "Symposium", "Theaetetus"
Late: "Critias", "Sophist", "Statesman / Politicus", "Timaeus", "Philebus", "Laws."
A significant distinction of the early Plato and the later Plato has been offered by scholars such as E.R. Dodds and has been summarized by Harold Bloom in his book titled "Agon": "E.R. Dodds is the classical scholar whose writings most illuminated the Hellenic descent (in) "The Greeks and the Irrational" ... In his chapter on Plato and the Irrational Soul ... Dodds traces Plato's spiritual evolution from the pure rationalist of the "Protagoras" to the transcendental psychologist, influenced by the Pythagoreans and Orphics, of the later works culminating in the "Laws"."
Lewis Campbell was the first to make exhaustive use of stylometry to prove objectively that the "Critias", "Timaeus", "Laws", "Philebus", "Sophist", and "Statesman" were all clustered together as a group, while the "Parmenides", "Phaedrus", "Republic", and "Theaetetus" belong to a separate group, which must be earlier (given Aristotle's statement in his "Politics" that the "Laws" was written after the "Republic"; cf. Diogenes Laërtius "Lives" 3.37). What is remarkable about Campbell's conclusions is that, in spite of all the stylometric studies that have been conducted since his time, perhaps the only chronological fact about Plato's works that can now be said to be "proven" by stylometry is the fact that "Critias", "Timaeus", "Laws", "Philebus", "Sophist", and "Statesman" are the latest of Plato's dialogues, the others earlier.
"Protagoras" is often considered one of the last of the "early dialogues". Three dialogues are often considered "transitional" or "pre-middle": "Euthydemus", "Gorgias", and "Meno". Proponents of dividing the dialogues into periods often consider the "Parmenides" and "Theaetetus" to come late in the middle period and be transitional to the next, as they seem to treat the theory of Forms critically ("Parmenides") or only indirectly ("Theaetetus"). Ritter's stylometric analysis places "Phaedrus" as probably after "Theaetetus" and "Parmenides", although it does not relate to the theory of Forms in the same way. The first book of the "Republic" is often thought to have been written significantly earlier than the rest of the work, although possibly having undergone revisions when the later books were attached to it.
Though the late dialogues are looked to for Plato's "mature" answers to the questions posed by his earlier works, those answers are difficult to discern. Some scholars indicate that the theory of Forms is absent from the late dialogues, having been refuted in the "Parmenides", but there is no total consensus that the "Parmenides" actually refutes the theory of Forms.
Jowett mentions in his Appendix to "Menexenus" that works which bore the character of a writer were attributed to that writer even when the actual author was unknown.
In the list below, (*) marks works for which there is no consensus among scholars as to whether Plato is the author, and (‡) marks works which most scholars agree Plato did "not" write.
"First Alcibiades" (*), "Second Alcibiades" (‡), "Clitophon" (*), "Epinomis" (‡), "Epistles" (*), "Hipparchus" (‡), "Menexenus" (*), "Minos" (‡), "(Rival) Lovers" (‡), "Theages" (‡)
The following works were transmitted under Plato's name, most of them already considered spurious in antiquity, and so were not included by Thrasyllus in his tetralogical arrangement. These works are labelled as "Notheuomenoi" ("spurious") or "Apocrypha".
Some 250 known manuscripts of Plato survive. The texts of Plato as received today apparently represent the complete written philosophical work of Plato and are generally good by the standards of textual criticism. No modern edition of Plato in the original Greek represents a single source, but rather it is reconstructed from multiple sources which are compared with each other. These sources are medieval manuscripts written on vellum (mainly from 9th to 13th century AD Byzantium), papyri (mainly from late antiquity in Egypt), and the independent "testimonia" of other authors who quote various segments of the works (which come from a variety of sources). The text as presented is usually not much different from what appears in the Byzantine manuscripts, and the papyri and testimonia mostly confirm the manuscript tradition. In some editions, however, the readings in the papyri or testimonia are favoured in some places by the editor of the text. Reviewing editions of papyri for the "Republic" in 1987, Slings suggests that the use of papyri is hampered by some poor editing practices.
In the first century AD, Thrasyllus of Mendes had compiled and published the works of Plato in the original Greek, both genuine and spurious. While it has not survived to the present day, all the extant medieval Greek manuscripts are based on his edition.
The oldest surviving complete manuscript for many of the dialogues is the Clarke Plato (Codex Oxoniensis Clarkianus 39, or Codex Bodleianus MS E.D. Clarke 39), which was written in Constantinople in 895 and acquired by Oxford University in 1809. The Clarke is given the siglum "B" in modern editions. "B" contains the first six tetralogies and is described internally as being written by "John the Calligrapher" on behalf of Arethas of Caesarea. It appears to have undergone corrections by Arethas himself. For the last two tetralogies and the apocrypha, the oldest surviving complete manuscript is Codex Parisinus graecus 1807, designated "A", which was written nearly contemporaneously with "B", circa 900 AD. "A" must be a copy of the edition edited by the patriarch Photios, teacher of Arethas. "A" probably had an initial volume containing the first 7 tetralogies which is now lost, but of which a copy was made, Codex Venetus append. class. 4, 1, which has the siglum "T". The oldest manuscript for the seventh tetralogy is Codex Vindobonensis 54. suppl. phil. Gr. 7, with siglum "W", with a supposed date in the twelfth century. In total there are fifty-one such Byzantine manuscripts known, while others may yet be found.
To help establish the text, the older evidence of papyri and the independent evidence of the testimony of commentators and other authors (i.e., those who quote and refer to an old text of Plato which is no longer extant) are also used. Many papyri which contain fragments of Plato's texts are among the Oxyrhynchus Papyri. The 2003 Oxford Classical Texts edition by Slings even cites the Coptic translation of a fragment of the "Republic" in the Nag Hammadi library as evidence. Important authors for testimony include Olympiodorus the Younger, Plutarch, Proclus, Iamblichus, Eusebius, and Stobaeus.
During the early Renaissance, the Greek language and, along with it, Plato's texts were reintroduced to Western Europe by Byzantine scholars. In September or October 1484 Filippo Valori and Francesco Berlinghieri printed 1025 copies of Ficino's translation, using the printing press at the Dominican convent S. Jacopo di Ripoli. Cosimo had been influenced toward studying Plato by the many Byzantine Platonists in Florence during his day, including George Gemistus Plethon.
The 1578 edition of Plato's complete works published by Henricus Stephanus (Henri Estienne) in Geneva also included parallel Latin translation and running commentary by Joannes Serranus (Jean de Serres). It was this edition which established standard Stephanus pagination, still in use today.
The Oxford Classical Texts offers the current standard complete Greek text of Plato's complete works. In five volumes edited by John Burnet, its first edition was published 1900–1907, and it is still available from the publisher, having last been printed in 1993. The second edition is still in progress with only the first volume, printed in 1995, and the "Republic", printed in 2003, available. The "Cambridge Greek and Latin Texts" and "Cambridge Classical Texts and Commentaries" series includes Greek editions of the "Protagoras", "Symposium", "Phaedrus", "Alcibiades", and "Clitophon", with English philological, literary, and, to an extent, philosophical commentary. One distinguished edition of the Greek text is E. R. Dodds' of the "Gorgias", which includes extensive English commentary.
The modern standard complete English edition is the 1997 Hackett "Plato, Complete Works", edited by John M. Cooper. For many of these translations Hackett offers separate volumes which include more by way of commentary, notes, and introductory material. There is also the "Clarendon Plato Series" by Oxford University Press which offers English translations and thorough philosophical commentary by leading scholars on a few of Plato's works, including John McDowell's version of the "Theaetetus". Cornell University Press has also begun the "Agora" series of English translations of classical and medieval philosophical texts, including a few of Plato's.
Despite Plato's prominence as a philosopher, he is not without criticism. The most famous criticism of Platonism is the Third Man Argument. Plato himself raised this objection, using "largeness" rather than "man", in the "Parmenides" dialogue.
Many recent philosophers have diverged from what some would describe as the ontological models and moral ideals characteristic of traditional Platonism. A number of these postmodern philosophers have thus appeared to disparage Platonism from more or less informed perspectives. Friedrich Nietzsche notoriously attacked Plato's "idea of the good itself" along with many fundamentals of Christian morality, which he interpreted as "Platonism for the masses" in one of his most important works, "Beyond Good and Evil" (1886). Martin Heidegger argued against Plato's alleged obfuscation of "Being" in his incomplete tome, "Being and Time" (1927), and the philosopher of science Karl Popper argued in "The Open Society and Its Enemies" (1945) that Plato's alleged proposal for a utopian political regime in the "Republic" was prototypically totalitarian.
The Dutch historian of science Eduard Jan Dijksterhuis criticizes Plato, stating that he was guilty of "constructing an imaginary nature by reasoning from preconceived principles and forcing reality more or less to adapt itself to this construction." Dijksterhuis adds that one of the errors into which Plato had "fallen in an almost grotesque manner, consisted in an over-estimation of what unaided thought, i.e. without recourse to experience, could achieve in the field of natural science."
Plato's Academy mosaic was created in the villa of T. Siminius Stephanus in Pompeii, around 100 BC to 100 AD. "The School of Athens" fresco by Raphael also features Plato as a central figure. The Nuremberg Chronicle depicts Plato and others as anachronistic schoolmen.
Plato's thought is often compared with that of his most famous student, Aristotle, whose reputation during the Western Middle Ages so completely eclipsed that of Plato that the Scholastic philosophers referred to Aristotle as "the Philosopher". However, in the Byzantine Empire, the study of Plato continued.
The only Platonic work known to western scholarship was the "Timaeus", until translations were made after the fall of Constantinople in 1453. George Gemistos Plethon brought Plato's original writings from Constantinople in the century of its fall. It is believed that Plethon passed a copy of the "Dialogues" to Cosimo de' Medici when in 1438 the Council of Ferrara, called to unify the Greek and Latin Churches, was adjourned to Florence, where Plethon then lectured on the relation and differences of Plato and Aristotle, and fired Cosimo with his enthusiasm; Cosimo would supply Marsilio Ficino with Plato's text for translation to Latin. During the early Islamic era, Persian and Arab scholars translated much of Plato into Arabic and wrote commentaries and interpretations on Plato's, Aristotle's and other Platonist philosophers' works (see Al-Farabi, Avicenna, Averroes, Hunayn ibn Ishaq). Many of these comments on Plato were translated from Arabic into Latin and as such influenced Medieval scholastic philosophers.
During the Renaissance, with the general resurgence of interest in classical civilization, knowledge of Plato's philosophy would become widespread again in the West. Many of the greatest early modern scientists and artists who broke with Scholasticism and fostered the flowering of the Renaissance, with the support of the Plato-inspired Lorenzo (grandson of Cosimo), saw Plato's philosophy as the basis for progress in the arts and sciences. His political views, too, were well-received: the vision of wise philosopher-kings of the "Republic" matched the views set out in works such as Machiavelli's "The Prince". More problematic was Plato's belief in metempsychosis as well as his ethical views (on polyamory and euthanasia in particular), which did not match those of Christianity. It was Plethon's student Bessarion who reconciled Plato with Christian theology, arguing that Plato's views were only ideals, unattainable due to the fall of man. The Cambridge Platonists carried this tradition forward in the 17th century.
By the 19th century, Plato's reputation was restored, and it was at least on par with Aristotle's. Notable Western philosophers have continued to draw upon Plato's work since that time. Plato's influence has been especially strong in mathematics and the sciences. Plato's resurgence further inspired some of the greatest advances in logic since Aristotle, primarily through Gottlob Frege and his followers Kurt Gödel, Alonzo Church, and Alfred Tarski. Albert Einstein suggested that the scientist who takes philosophy seriously would have to avoid systematization and take on many different roles, and possibly appear as a Platonist or Pythagorean, in that such a one would have "the viewpoint of logical simplicity as an indispensable and effective tool of his research."
The political philosopher and professor Leo Strauss is considered by some as the prime thinker involved in the recovery of Platonic thought in its more political, and less metaphysical, form. Strauss' political approach was in part inspired by the appropriation of Plato and Aristotle by medieval Jewish and Islamic political philosophers, especially Maimonides and Al-Farabi, as opposed to the Christian metaphysical tradition that developed from Neoplatonism. Deeply influenced by Nietzsche and Heidegger, Strauss nonetheless rejects their condemnation of Plato and looks to the dialogues for a solution to what all three latter day thinkers acknowledge as 'the crisis of the West.'
W. V. O. Quine dubbed the problem of negative existentials "Plato's beard". Noam Chomsky dubbed the problem of knowledge Plato's problem. One author calls the definist fallacy the Socratic fallacy.
More broadly, platonism (sometimes distinguished from Plato's particular view by the lowercase) refers to the view that there are many abstract objects. To this day, platonists take numbers and the truths of mathematics as the best support in favour of this view. Most mathematicians think, like platonists, that numbers and the truths of mathematics are perceived by reason rather than the senses yet exist independently of minds and people, that is to say, they are discovered rather than invented.
Contemporary platonism is also more open to the idea of there being infinitely many abstract objects, as numbers or propositions might qualify as abstract objects, while ancient Platonism seemed to resist this view, possibly because of the need to overcome the problem of "the One and the Many". Thus, e.g., in the "Parmenides" dialogue, Plato denies there are Forms for more mundane things like hair and mud. However, he repeatedly does support the idea that there are Forms of artifacts, e.g. the Form of Bed. Contemporary platonism also tends to view abstract objects as unable to cause anything, but it's unclear whether the ancient Platonists felt this way.
| https://en.wikipedia.org/wiki?curid=22954 |
Sample space
In probability theory, the sample space (also called sample description space or possibility space) of an experiment or random trial is the set of all possible outcomes or results of that experiment. A sample space is usually denoted using set notation, and the possible ordered outcomes are listed as elements in the set. It is common to refer to a sample space by the labels "S", Ω, or "U" (for "universal set"). The elements of a sample space may be numbers, words, letters, or symbols. A sample space can be finite, countably infinite, or uncountably infinite.
For example, if the experiment is tossing a coin, the sample space is typically the set {head, tail}, commonly written {H, T}. For tossing two coins, the corresponding sample space would be {(head,head), (head,tail), (tail,head), (tail,tail)}, commonly written {HH, HT, TH, TT}. If the sample space is unordered, it becomes {{head,head}, {head,tail}, {tail,tail}}.
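The distinction between the ordered and unordered two-coin sample spaces can be made concrete by enumeration. The following is an illustrative Python sketch, not part of the original article:

```python
from itertools import product

# Ordered sample space for tossing two coins: each outcome is an ordered pair.
ordered = set(product("HT", repeat=2))
print(sorted(ordered))  # [('H', 'H'), ('H', 'T'), ('T', 'H'), ('T', 'T')]

# Unordered sample space: (H, T) and (T, H) collapse into a single outcome.
# Representing each outcome as a sorted tuple merges the two orderings.
unordered = {tuple(sorted(o)) for o in ordered}
print(sorted(unordered))  # [('H', 'H'), ('H', 'T'), ('T', 'T')]
```

Note that the three unordered outcomes are not equally likely: {H, T} occurs twice as often as {H, H} or {T, T}.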
For tossing a single six-sided die, the typical sample space is {1, 2, 3, 4, 5, 6} (in which the result of interest is the number of pips facing up).
A subset of the sample space is an event, denoted by E. Referring to the experiment of tossing the coin, the possible events include E={H} and E={T}.
A well-defined sample space is one of three basic elements in a probabilistic model (a probability space); the other two are a well-defined set of possible events (a sigma-algebra) and a probability assigned to each event (a probability measure function).
Another way to look at a sample space is visually. The sample space is typically represented by a rectangle, and the outcomes of the sample space denoted by points within the rectangle. The events are represented by ovals, and the points enclosed within the oval make up the event.
A set Ω with outcomes s1, s2, ..., sn (i.e. Ω = {s1, s2, ..., sn}) must meet some conditions in order to be a sample space: the outcomes must be mutually exclusive and collectively exhaustive, and the space should have the right granularity for the experimenter's interests.
For instance, in the trial of tossing a coin, we could have as a sample space Ω1 = {H, T}, where H stands for "heads" and T for "tails". Another possible sample space could be Ω2 = {(H, R), (H, NR), (T, R), (T, NR)}. Here, R stands for "rains" and NR for "not rains". Obviously, Ω1 is a better choice than Ω2, as we do not care about how the weather affects the tossing of a coin.
For many experiments, there may be more than one plausible sample space available, depending on what result is of interest to the experimenter. For example, when drawing a card from a standard deck of fifty-two playing cards, one possibility for the sample space could be the various ranks (Ace through King), while another could be the suits (clubs, diamonds, hearts, or spades). A more complete description of outcomes, however, could specify both the denomination and the suit, and a sample space describing each individual card can be constructed as the Cartesian product of the two sample spaces noted above (this space would contain fifty-two equally likely outcomes). Still other sample spaces are possible, such as {right-side up, up-side down} if some cards have been flipped when shuffling.
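The Cartesian-product construction described above can be sketched as follows. This is an illustrative Python snippet; the particular rank and suit labels are just one possible encoding:

```python
from itertools import product

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["clubs", "diamonds", "hearts", "spades"]

# The fine-grained sample space is the Cartesian product of the two
# coarser sample spaces (rank and suit): 13 x 4 = 52 outcomes.
deck = list(product(ranks, suits))
print(len(deck))   # 52
print(deck[0])     # ('A', 'clubs')
```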
Some treatments of probability assume that the various outcomes of an experiment are always defined so as to be equally likely. For any sample space with N equally likely outcomes, each outcome is assigned the probability 1/N. However, there are experiments that are not easily described by a sample space of equally likely outcomes—for example, if one were to toss a thumb tack many times and observe whether it landed with its point upward or downward, there is no symmetry to suggest that the two outcomes should be equally likely.
Though most random phenomena do not have equally likely outcomes, it can be helpful to define a sample space in such a way that outcomes are at least approximately equally likely, since this condition significantly simplifies the computation of probabilities for events within the sample space. If each individual outcome occurs with the same probability, then the probability of any event A becomes simply P(A) = |A| / N, i.e. the number of outcomes in A divided by the total number of outcomes in the sample space.
For example, if two dice are thrown to generate two uniformly distributed integers, D1 and D2, each in the range [1...6], the 36 ordered pairs (D1, D2) constitute a sample space of equally likely events. In this case, the above formula applies, such that the probability of a certain sum, say D1 + D2 = 5, is easily shown to be 4/36, since 4 of the 36 outcomes produce 5 as a sum. On the other hand, the sample space of the 11 possible sums, {2, ..., 12}, is not a set of equally likely outcomes, so the formula would give an incorrect result (1/11).
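The 4/36 figure can be checked by brute-force enumeration of the ordered pairs (an illustrative Python sketch):

```python
from itertools import product
from fractions import Fraction

# Sample space of 36 equally likely ordered pairs (D1, D2).
space = list(product(range(1, 7), repeat=2))

# Outcomes summing to 5: (1,4), (2,3), (3,2), (4,1).
favourable = [pair for pair in space if sum(pair) == 5]

prob = Fraction(len(favourable), len(space))
print(prob)  # 1/9, i.e. 4/36 in lowest terms
```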
Another example is having four pens in a bag. One pen is red, one is green, one is blue, and one is purple. Each pen has the same chance of being taken out of the bag. The sample space S={red, green, blue, purple}, consists of equally likely events. Here, P(red)=P(blue)=P(green)=P(purple)=1/4.
In statistics, inferences are made about characteristics of a population by studying a sample of that population's individuals. In order to arrive at a sample that presents an unbiased estimate of the true characteristics of the population, statisticians often seek to study a simple random sample—that is, a sample in which every individual in the population is equally likely to be included. The result of this is that every possible combination of individuals who could be chosen for the sample has an equal chance to be the sample that is selected (that is, the space of simple random samples of a given size from a given population is composed of equally likely outcomes).
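The claim that every possible same-size subset is equally likely can be illustrated by counting the subsets. The population names below are hypothetical, used only for the sketch:

```python
from math import comb
from itertools import combinations

population = ["Ann", "Bob", "Cam", "Dee", "Eve"]
k = 2  # sample size

# Every k-element subset of the population is a possible simple random sample.
samples = list(combinations(population, k))
print(len(samples))        # 10
print(comb(5, 2))          # 10 = C(5, 2), so each subset has probability 1/10
```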
In an elementary approach to probability, any subset of the sample space is usually called an event. However, this gives rise to problems when the sample space is continuous, so that a more precise definition of an event is necessary. Under this definition only measurable subsets of the sample space, constituting a σ-algebra over the sample space itself, are considered events.
An example of an infinitely large sample space is measuring the lifetime of a light bulb. The corresponding sample space would be [0, infinity). | https://en.wikipedia.org/wiki?curid=22958 |
Elementary event
In probability theory, an elementary event (also called an atomic event or sample point) is an event which contains only a single outcome in the sample space. Using set theory terminology, an elementary event is a singleton. Elementary events and their corresponding outcomes are often written interchangeably for simplicity, as such an event corresponds to precisely one outcome.
The following are examples of elementary events:
Elementary events may occur with probabilities that are between zero and one (inclusively). In a discrete probability distribution whose sample space is finite, each elementary event is assigned a particular probability. In contrast, in a continuous distribution, individual elementary events must all have a probability of zero because there are infinitely many of them; non-zero probabilities can then only be assigned to non-elementary events.
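The discrete case can be sketched as follows (illustrative Python, using a fair die as the sample space):

```python
from fractions import Fraction

# A discrete distribution on a finite sample space assigns a probability
# to each elementary event (singleton); the probabilities must sum to 1.
pmf = {outcome: Fraction(1, 6) for outcome in range(1, 7)}  # fair die
print(sum(pmf.values()))  # 1

# The probability of a non-elementary event is the sum over its outcomes.
p_even = sum(pmf[o] for o in pmf if o % 2 == 0)
print(p_even)  # 1/2
```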
Some "mixed" distributions contain both stretches of continuous elementary events and some discrete elementary events; the discrete elementary events in such distributions can be called atoms or atomic events and can have non-zero probabilities.
Under the measure-theoretic definition of a probability space, the probability of an elementary event need not even be defined. In particular, the set of events on which probability is defined may be some σ-algebra on "S" and not necessarily the full power set. | https://en.wikipedia.org/wiki?curid=22960 |
Event (probability theory)
In probability theory, an event is a set of outcomes of an experiment (a subset of the sample space) to which a probability is assigned. A single outcome may be an element of many different events, and different events in an experiment are usually not equally likely, since they may include very different groups of outcomes. An event defines a complementary event, namely the complementary set (the event "not" occurring), and together these define a Bernoulli trial: did the event occur or not?
Typically, when the sample space is finite, any subset of the sample space is an event ("i"."e". all elements of the power set of the sample space are defined as events). However, this approach does not work well in cases where the sample space is uncountably infinite. So, when defining a probability space it is possible, and often necessary, to exclude certain subsets of the sample space from being events (see "Events in probability spaces", below).
If we assemble a deck of 52 playing cards with no jokers, and draw a single card from the deck, then the sample space is a 52-element set, as each card is a possible outcome. An event, however, is any subset of the sample space, including any singleton set (an elementary event), the empty set (an impossible event, with probability zero) and the sample space itself (a certain event, with probability one). Other events are proper subsets of the sample space that contain multiple elements. So, for example, potential events include:
Since all events are sets, they are usually written as sets (e.g. {1, 2, 3}), and represented graphically using Venn diagrams. In the situation where each outcome in the sample space Ω is equally likely, the probability P("A") of an event "A" is P("A") = |"A"| / |Ω|, i.e. the number of outcomes in "A" divided by the total number of outcomes in Ω.
This rule can readily be applied to each of the example events above.
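For instance, applying the rule to the 52-card sample space gives the following (an illustrative Python sketch; the rank and suit labels are assumptions, not from the text):

```python
from itertools import product
from fractions import Fraction

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["clubs", "diamonds", "hearts", "spades"]
omega = set(product(ranks, suits))  # 52 equally likely outcomes

def prob(event):
    # P(A) = |A| / |Omega| when all outcomes are equally likely.
    return Fraction(len(event), len(omega))

kings = {card for card in omega if card[0] == "K"}
red = {card for card in omega if card[1] in ("diamonds", "hearts")}
print(prob(kings))  # 1/13 (i.e. 4/52)
print(prob(red))    # 1/2  (i.e. 26/52)
```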
Defining all subsets of the sample space as events works well when there are only finitely many outcomes, but gives rise to problems when the sample space is infinite. For many standard probability distributions, such as the normal distribution, the sample space is the set of real numbers or some subset of the real numbers. Attempts to define probabilities for all subsets of the real numbers run into difficulties when one considers 'badly behaved' sets, such as those that are nonmeasurable. Hence, it is necessary to restrict attention to a more limited family of subsets. For the standard tools of probability theory, such as joint and conditional probabilities, to work, it is necessary to use a σ-algebra, that is, a family closed under complementation and countable unions of its members. The most natural choice of σ-algebra is the family of Borel measurable sets derived from unions and intersections of intervals. However, the larger class of Lebesgue measurable sets proves more useful in practice.
In the general measure-theoretic description of probability spaces, an event may be defined as an element of a selected σ-algebra of subsets of the sample space. Under this definition, any subset of the sample space that is not an element of the σ-algebra is not an event, and does not have a probability. With a reasonable specification of the probability space, however, all "events of interest" are elements of the σ-algebra.
Even though events are subsets of some sample space Ω, they are often written as predicates or indicators involving random variables. For example, if "X" is a real-valued random variable defined on the sample space Ω, the event {ω ∈ Ω : "u" < "X"(ω) ≤ "v"} can be written more conveniently as, simply, "u" < "X" ≤ "v". This is especially common in formulas for a probability, such as P("u" < "X" ≤ "v").
The set "u" < "X" ≤ "v" is an example of an inverse image under the mapping "X", because ω ∈ "X"⁻¹(("u", "v"]) if and only if "u" < "X"(ω) ≤ "v". | https://en.wikipedia.org/wiki?curid=22961 |
Pig Latin
Pig Latin is a language game or argot in which words in English are altered, usually by adding a fabricated suffix or by moving the onset or initial consonant or consonant cluster of a word to the end of the word and adding a vocalic syllable to create such a suffix. For example, "Wikipedia" would become "Ikipediaway" (the "W" is moved from the beginning and has "ay" appended to create a suffix). The objective is to conceal the words from others not familiar with the rules. The reference to Latin is a deliberate misnomer; Pig Latin is simply a form of argot or jargon unrelated to Latin, and the name is used for its English connotations as a strange and foreign-sounding language. It is most often used by young children as a fun way to confuse people unfamiliar with Pig Latin.
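The transformation described above, moving the initial consonant cluster to the end and appending "ay", can be sketched as a short function. This is one common variant; treatment of vowel-initial words (and of "y") differs between conventions:

```python
VOWELS = "aeiou"

def pig_latin(word):
    """Move the initial consonant cluster to the end and append 'ay'.

    Vowel-initial words simply get 'ay' appended in this variant."""
    w = word.lower()
    for i, ch in enumerate(w):
        if ch in VOWELS:
            return w[i:] + w[:i] + "ay"
    return w + "ay"  # word with no vowels at all

print(pig_latin("Wikipedia"))  # ikipediaway
print(pig_latin("string"))     # ingstray
```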
Early mentions of pig Latin or hog Latin describe what we would today call dog Latin, a type of parody Latin. Examples of this predate even Shakespeare, whose 1598 play, "Love's Labour's Lost", includes a reference to dog Latin:
An 1866 article describes a "hog latin" that has some similarities to current Pig Latin. The article says, "He adds as many new letters as the boys in their 'hog latin,' which is made use of to mystify eavesdroppers. A boy asking a friend to go with him says, 'Wig-ge you-ge go-ge wig-ge me-ge?' The other, replying in the negative, says, 'No-ge, I-ge wo-ge.' ". This is similar to Língua do Pê.
Another early mention of the name was in "Putnam's Magazine" in May 1869 "I had plenty of ammunition in reserve, to say nothing, Tom, of our pig Latin. 'Hoggibus, Piggibus et shotam damnabile grunto,' and all that sort of thing," although the jargon is dog Latin.
"The Atlantic" January 1895 also included a mention of the subject: "They all spoke a queer jargon which they themselves had invented. It was something like the well-known 'pig Latin' that all sorts of children like to play with."
The modern version of Pig Latin appears in a 1919 Columbia Records album containing what sounds like the modern variation, by a singer named Arthur Fields. The song, called "Pig Latin Love", is followed by the subtitle "I-Yay Ove-Lay oo-yay earie-day". The Three Stooges used it on multiple occasions, most notably "Tassels in the Air", a 1938 short where Moe Howard attempts to teach Curly Howard how to use it, thereby conveying the rules to the audience. In an earlier (1934) episode, "Three Little Pigskins", Larry Fine attempts to impress a woman with his skill in Pig Latin, but it turns out that she knows it, too; no explanation of the rules is given. A few months prior in 1934, in the "Our Gang" short film "Washee Ironee", Spanky tries to speak to an Asian boy by using Pig Latin. Ginger Rogers sang a verse of "We're in the Money" in Pig Latin in an elaborate Busby Berkeley production number in the film "Gold Diggers of 1933". The film, the third highest-grossing of that year, was inducted into the National Film Registry, and that song was included in the all-time top 100 movie songs by the American Film Institute. Merle Travis ends his song "When My Baby Double Talks To Me" with the phrase "What a aybybay", where the last word is Pig Latin for "baby".
A 1947 newspaper question-and-answer column describes Pig Latin as we understand it today: the first letter is moved to the end of a word and "ay" is then added.
Two Pig Latin words that have entered into mainstream American English are "ixnay" or "icksnay", the Pig Latin version of "nix" (itself a borrowing of German "nichts"), which is used as a general negative; and "amscray", Pig Latin for "scram", meaning "go away" or "get out of here".
For words that begin with consonant sounds, all letters before the initial vowel are placed at the end of the word sequence. Then, "ay" is added, as in the following examples:
When words begin with consonant clusters (multiple consonants that form one sound), the whole cluster is moved to the end when speaking or writing.
For words that begin with vowel sounds, the vowel is left alone, and most commonly 'yay' is added to the end. But in different parts of the world, there are different 'dialects' of sorts. Some people may add 'way' or just 'ay' or other endings. Examples are:
In an alternative convention for words beginning with vowel sounds, one removes the initial vowel(s) along with the first consonant or consonant cluster. This usually works only for words with more than one syllable, and it offers a variant of the words in keeping with the mysterious, unrecognizable sounds of the converted words. Examples are:
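The consonant-onset and vowel-onset rules described above can be sketched as a short function. This is a hedged illustration, not a canonical implementation: it uses the "yay" convention for vowel-initial words and assumes lowercase input with no punctuation.

```python
VOWELS = set("aeiou")

def pig_latin(word):
    """Convert one lowercase word to Pig Latin: move the initial
    consonant cluster (everything before the first vowel) to the end
    and add "ay"; if the word starts with a vowel, just add "yay"."""
    onset = 0
    while onset < len(word) and word[onset] not in VOWELS:
        onset += 1
    if onset == 0:
        return word + "yay"          # vowel-initial: "apple" -> "appleyay"
    return word[onset:] + word[:onset] + "ay"
```

With this sketch, "wikipedia" becomes "ikipediaway", matching the example at the top of the article, and a cluster-initial word like "string" becomes "ingstray".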
Sentence structure remains the same as it would in English. Pronunciation of some words may be a little difficult for beginners, but people can easily understand Pig Latin with practice.
In the German-speaking area, varieties of Pig Latin include Kedelkloppersprook, which originated around Hamburg harbour, and Mattenenglisch that was used in the "Matte", the traditional working-class neighborhood of Bern. Though Mattenenglisch has fallen out of use since the mid-20th century, it is still cultivated by voluntary associations. A characteristic of the Mattenenglisch Pig Latin is the complete substitution of the first vowel by "i", in addition to the usual moving of the initial consonant cluster and the adding of "ee".
The Greek equivalent of Pig Latin is Korakistika (which can be translated as the "language of blackbirds") and involves the insertion of the syllable /ka/, and less frequently of other syllables, between word syllables. In Cyprus, there was a similar language game that involved the pattern "verevereve", "varavarava", "vuruvuruvu", etc.; the vowel depends on the vowel that precedes the pattern.
The Swedish equivalent of Pig Latin is Fikonspråket ("Fig language" – see Language game § List of common language games).
The Finnish Pig Latin is called Kontinkieli ("container language"). After each word, one adds the word "kontti" ("container"), then switches the first syllables of the two words, so every sentence is converted into twice as many pseudo-words. For example, "wikipedia" → "wikipedia kontti" → "kokipedia wintti". Converting the sentence "I love you" ("Minä rakastan sinua") would thus result in "konä mintti kokastan rantti konua sintti".
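Real kontinkieli swaps whole initial syllables, which would require Finnish syllabification; as a rough sketch, swapping just the first two letters of each word with the first two letters of "kontti" happens to reproduce the examples in the text. The function below is that simplified approximation, not a full implementation.

```python
def kontinkieli(sentence):
    """Naive kontinkieli: after each word insert "kontti", then swap the
    first two letters of the word with the first two of "kontti"."""
    out = []
    for word in sentence.lower().split():
        out.append("ko" + word[2:])      # the word takes kontti's onset "ko"
        out.append(word[:2] + "ntti")    # "kontti" takes the word's onset
    return " ".join(out)
```

So `kontinkieli("minä rakastan sinua")` yields "konä mintti kokastan rantti konua sintti", matching the sentence given above.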
Another equivalent of Pig Latin is used throughout the Serbo-Croatian-speaking parts of the Balkans. It is called "Šatra" () or "Šatrovački" () and was used in crime-related and street language. For instance, the slang name for marijuana ("trava", meaning "grass" — accusative case "travu") becomes "vutra"; the slang name for cocaine ("belo" — meaning "white") turns to "lobe", a pistol ("pištolj") turns to "štoljpi", bro (vocative case "brate") becomes "tebra". In the past few years it has become widely used among teenage immigrants in former Yugoslavian countries.
In Italian, the "alfabeto farfallino" uses a similar encoding.
In Spanish language, Jeringonza (or Jeringoso) is a language game used in Spain and all over Hispanic America. It consists of adding the letter "p" after each vowel of a word, and repeating the vowel. For example, "Carlos" turns into "Cápar-lopos". Variants of this language game add other syllables instead of p+vowel, such as adding "ti", "cuti" or "chi" before each syllable (thus giving "ticar-tilos" for the previous example).
French has the "loucherbem" (or "louchébem", or "largonji") coded language, which supposedly was originally used by butchers ("boucher" in French). In "loucherbem", the leading consonant cluster is moved to the end of the word (as in Pig Latin) and replaced by an "L", and then a suffix is added at the end of the word (-"oche", -"em", -"oque", etc., depending on the word). Example: "combien" (how much) = "lombienquès". A few louchébem words have become usual French words: "fou" (crazy) = "loufoque", "portefeuille" (wallet) = "larfeuille", "en douce" (on the quiet) = "en loucedé". Similar coded languages are "langue de feu" and the widely used French argot "verlan", which is quite similar to English Pig Latin: a word is separated into syllables and the syllables are reversed.
Verlan was first documented as being used as far back as the 19th century. Back in the 19th century it was spoken as code by criminals in effort to conceal illicit activities within conversations around other people, even the police. Currently, Verlan has been increasingly used in areas just outside major cities mainly populated by migrant workers. This language has served as a language bridge between many of these migrant workers from multiple countries and origins and has been so widely and readily used that it has spread into advertising, film scripts, French rap and hip-hop music, media, in some French dictionaries and in some cases, words that have been Verlanned have actually replaced their original words. The new uses of Verlan and how it has become incorporated into the French culture has all happened within just a few decades.
Here is an example of some French words that have been Verlanned and their English meaning:
Polish language
Polish ("język polski", "polszczyzna", or simply "polski") is a West Slavic language of the Lechitic group. It is spoken primarily in Poland and serves as the native language of the Poles. In addition to being an official language of Poland, it is also used by Polish minorities in other countries. There are over 50 million Polish-language speakers around the world and it is one of the official languages of the European Union.
Polish is written with the standardized Polish alphabet, which has nine additions to the letters of the basic Latin script ("ą", "ć", "ę", "ł", "ń", "ó", "ś", "ź", "ż"). Among the major languages, it is most closely related to Slovak and Czech, but differs from other Slavic varieties in terms of pronunciation and general grammar. In addition, Polish was profoundly influenced by Latin and other Italic languages like Italian and French as well as Germanic languages (most notably German), which contributed to a large number of loanwords and similar grammatical structures. Polish currently has the largest number of speakers of the West Slavic group and is also the second most widely spoken Slavic language.
Historically, Polish was a "lingua franca", important both diplomatically and academically in Central and Eastern Europe. Today, Polish is spoken by over 38.5 million people as their first language in Poland. It is also spoken as a second language in the northern Czech Republic and Slovakia, western parts of Belarus and Ukraine, as well as in central-eastern Lithuania and Latvia. Because of emigration from Poland during different time periods, most notably after World War II, millions of Polish speakers can be found in countries such as Canada, Argentina, Brazil, Israel, Australia, the United Kingdom and the United States.
Polish began to emerge as a distinct language around the 10th century, the process largely triggered by the establishment and development of the Polish state. Mieszko I, ruler of the Polans tribe from the Greater Poland region, united a few culturally and linguistically related tribes from the basins of the Vistula and Oder before eventually accepting baptism in 966. With Christianity, Poland also adopted the Latin alphabet, which made it possible to write down Polish, which until then had existed only as a spoken language.
The precursor to modern Polish is the Old Polish language. Ultimately, Polish is thought to descend from the unattested Proto-Slavic language. Polish was a "lingua franca" from 1500–1700 in Central and parts of Eastern Europe, because of the political, cultural, scientific and military influence of the former Polish–Lithuanian Commonwealth. Although not closely related to it, Polish shares many linguistic affinities with Ukrainian, an East Slavic language with which it has been in prolonged historical contact and in a state of mutual influence. The Polish influence on Ukrainian is particularly marked in western Ukraine, which was under Polish cultural domination.
The Book of Henryków (Polish: "Księga henrykowska") contains the earliest known sentence written in the Polish language: "Day, ut ia pobrusa, a ti poziwai" (in modern orthography: "Daj, ać ja pobruszę, a ty poczywaj"; the corresponding sentence in modern Polish: "Daj, niech ja pomielę, a ty odpoczywaj" or "Pozwól, że ja będę mełł, a ty odpocznij"; English: "Come, let me grind, and you take a rest"), written around 1270.
The medieval recorder of this phrase, the Cistercian monk Peter of the Henryków monastery, noted that "Hoc est in polonico" ("This is in Polish").
Polish, along with Czech and Slovak, forms the West Slavic dialect continuum. The three languages constitute Ausbau languages, i.e. lects that are considered distinct not on purely linguistic grounds, but rather due to sociopolitical and cultural factors. Since the idioms have separately standardized norms and longstanding literary traditions, being the official languages of independent states, they are generally treated as autonomous languages, with the distinction between Polish and Czech-Slovak dialects being drawn along national lines.
Poland is the most linguistically homogeneous European country; nearly 97% of Poland's citizens declare Polish as their first language. Elsewhere, Poles constitute large minorities in Lithuania, Belarus, and Ukraine. Polish is the most widely used minority language in Lithuania's Vilnius County (26% of the population, according to the 2001 census results, with Vilnius having been part of Poland from 1922 until 1939) and is found elsewhere in southeastern Lithuania. In Ukraine, it is most common in western Lviv and Volyn Oblasts, while in West Belarus it is used by the significant Polish minority, especially in the Brest and Grodno regions and in areas along the Lithuanian border. There are significant numbers of Polish speakers among Polish emigrants and their descendants in many other countries.
In the United States, Polish Americans number more than 11 million but most of them cannot speak Polish fluently. According to the 2000 United States Census, 667,414 Americans of age five years and over reported Polish as the language spoken at home, which is about 1.4% of people who speak languages other than English, 0.25% of the US population, and 6% of the Polish-American population. The largest concentrations of Polish speakers reported in the census (over 50%) were found in three states: Illinois (185,749), New York (111,740), and New Jersey (74,663). Enough people in these areas speak Polish that PNC Financial Services (which has a large number of branches in all of these areas) offers services in Polish at all of its cash machines in addition to English and Spanish.
According to the 2011 census there are now over 500,000 people in England and Wales who consider Polish to be their "main" language. In Canada, there is a significant Polish Canadian population: There are 242,885 speakers of Polish according to the 2006 census, with a particular concentration in Toronto (91,810 speakers) and Montreal.
The geographical distribution of the Polish language was greatly affected by the territorial changes of Poland immediately after World War II and Polish population transfers (1944–46). Poles settled in the "Recovered Territories" in the west and north, which had previously been mostly German-speaking. Some Poles remained in the previously Polish-ruled territories in the east that were annexed by the USSR, resulting in the present-day Polish-speaking minorities in Lithuania, Belarus, and Ukraine, although many Poles were expelled or emigrated from those areas to areas within Poland's new borders. To the east of Poland, the most significant Polish minority lives in a long, narrow strip along either side of the Lithuania-Belarus border. Meanwhile, the flight and expulsion of Germans (1944–50), as well as the expulsion of Ukrainians and Operation Vistula, the 1947 forced resettlement of Ukrainian minorities to the Recovered Territories in the west of the country, contributed to the country's linguistic homogeneity.
The Polish language became far more homogeneous in the second half of the 20th century, in part due to the mass migration of several million Polish citizens from the eastern to the western part of the country after the Soviet annexation of the Kresy (Eastern Borderlands) in 1939, and the annexation of former German territory after World War II. This tendency toward homogeneity also stems from the vertically integrated nature of the Polish People's Republic. In addition, Polish linguistics has been characterized by a strong drive toward promoting prescriptive ideas of language intervention and usage uniformity, along with normatively oriented notions of language "correctness" (unusual by Western standards).
The inhabitants of different regions of Poland speak Polish somewhat differently, although the differences between modern-day vernacular varieties and standard Polish () appear relatively slight. Most middle-aged and young people speak vernaculars close to standard Polish, while the traditional dialects are preserved among older people in rural areas. First-language speakers of Polish have no trouble understanding each other, and non-native speakers may have difficulty recognizing the regional and social differences. The modern standard dialect, often termed "correct Polish", is spoken or at least understood throughout the entire country.
Polish has traditionally been described as consisting of four or five main regional dialects:
Kashubian, spoken in Pomerania west of Gdańsk on the Baltic Sea, is thought of either as a fifth Polish dialect or a distinct language, depending on the criteria used. It contains a number of features not found elsewhere in Poland, e.g. nine distinct oral vowels (vs. the five of standard Polish) and (in the northern dialects) phonemic word stress, an archaic feature preserved from Common Slavic times and not found anywhere else among the West Slavic languages. However, it "lacks most of the linguistic and social determinants of language-hood".
Many linguistic sources about the Slavic languages describe Silesian as a dialect of Polish. However, many Silesians consider themselves a separate ethnicity and have been advocating for the recognition of a Silesian language. According to the last official census in Poland in 2011, over half a million people declared Silesian as their native language. Many sociolinguists (e.g. Tomasz Kamusella, Agnieszka Pianka, Alfred F. Majewicz, Tomasz Wicherkiewicz) assume that extralinguistic criteria, namely the views of the speakers of the variety and/or political decisions, decide whether a lect is an independent language or a dialect, and that this status is dynamic (i.e. it changes over time). Research organizations such as SIL International, linguistic resources such as Ethnologue and Linguist List, and bodies such as the Ministry of Administration and Digitization have recognized the Silesian language. In July 2007, the Silesian language was recognized by ISO, and was attributed an ISO code of szl.
Some additional characteristic but less widespread regional dialects include:
Polish has six oral vowels (all monophthongs) and two nasal vowels. The oral vowels are /i/ (spelled "i"), /ɨ/ (spelled "y"), /ɛ/ (spelled "e"), /a/ (spelled "a"), /ɔ/ (spelled "o") and /u/ (spelled "u" or "ó"). The nasal vowels are /ɛ̃/ (spelled "ę") and /ɔ̃/ (spelled "ą").
The Polish consonant system shows more complexity: its characteristic features include the series of affricate and palatal consonants that resulted from four Proto-Slavic palatalizations and two further palatalizations that took place in Polish and Belarusian. The full set of consonants, together with their most common spellings, can be presented as follows (although other phonological analyses exist):
Neutralization occurs between voiced–voiceless consonant pairs in certain environments: at the end of words (where devoicing occurs), and in certain consonant clusters (where assimilation occurs). For details, see "Voicing and devoicing" in the article on Polish phonology.
Most Polish words are paroxytones (that is, the stress falls on the second-to-last syllable of a polysyllabic word), although there are exceptions.
Polish permits complex consonant clusters, which historically often arose from the disappearance of yers. Polish can have word-initial and word-medial clusters of up to four consonants, whereas word-final clusters can have up to five consonants. Examples of such clusters can be found in words such as "bezwzględny" ('absolute' or 'heartless', 'ruthless'), "źdźbło" ('blade of grass'), "wstrząs" ('shock'), and "krnąbrność" ('disobedience'). A popular Polish tongue-twister (from a verse by Jan Brzechwa) is "W Szczebrzeszynie chrząszcz brzmi w trzcinie" ('In Szczebrzeszyn a beetle buzzes in the reed').
Unlike languages such as Czech, Polish does not have syllabic consonants – the nucleus of a syllable is always a vowel.
The consonant /j/ is restricted to positions adjacent to a vowel. It also cannot precede "i" or "y".
The predominant stress pattern in Polish is penultimate stress – in a word of more than one syllable, the next-to-last syllable is stressed. Alternating preceding syllables carry secondary stress, e.g. in a four-syllable word, where the primary stress is on the third syllable, there will be secondary stress on the first.
Each vowel represents one syllable, although the letter "i" normally does not represent a vowel when it precedes another vowel (it represents , palatalization of the preceding consonant, or both depending on analysis). Also the letters "u" and "i" sometimes represent only semivowels when they follow another vowel, as in "autor" ('author'), mostly in loanwords (so not in native "nauka" 'science, the act of learning', for example, nor in nativized "Mateusz" 'Matthew').
Some loanwords, particularly from the classical languages, have the stress on the antepenultimate (third-from-last) syllable. For example, "fizyka" ('physics') is stressed on the first syllable. This may lead to a rare phenomenon of minimal pairs differing only in stress placement, for example "muzyka" (stressed on the first syllable) 'music' vs. "muzyka" (stressed on the second syllable), the genitive singular of "muzyk" 'musician'. When additional syllables are added to such words through inflection or suffixation, the stress normally becomes regular. For example, "uniwersytet" ('university') has irregular stress on the third (or antepenultimate) syllable, but the genitive "uniwersytetu" and the derived adjective "uniwersytecki" have regular stress on the penultimate syllables. Over time, loanwords become nativized to have penultimate stress.
Another class of exceptions is verbs with the conditional endings "-by, -bym, -byśmy", etc. These endings are not counted in determining the position of the stress; for example, "zrobiłbym" ('I would do') is stressed on the first syllable, and "zrobilibyśmy" ('we would do') on the second. According to prescriptive authorities, the same applies to the first and second person plural past tense endings "-śmy, -ście", although this rule is often ignored in colloquial speech (so "zrobiliśmy" 'we did' should be prescriptively stressed on the second syllable, although in practice it is commonly stressed on the third). These irregular stress patterns are explained by the fact that these endings are detachable clitics rather than true verbal inflections: for example, instead of "kogo zobaczyliście?" ('whom did you see?') it is possible to say "kogoście zobaczyli?", where "kogo" retains its usual stress (first syllable) in spite of the attachment of the clitic. Reanalysis of the endings as inflections when attached to verbs causes the different colloquial stress patterns. These stress patterns are, however, nowadays sanctioned as part of the colloquial norm of standard Polish.
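The regular penultimate rule, together with the detachable clitic endings just described, can be sketched as a small helper. This is a hypothetical illustration: the caller supplies the syllable split and the clitic count, and lexical exceptions such as "fizyka" would need a dictionary the sketch does not have.

```python
def stressed_syllable(syllables, clitic_count=0):
    """Index (1-based, counting from the start of the word) of the
    primary stress under the regular penultimate rule, ignoring
    trailing clitic syllables such as the conditional -by / -byśmy."""
    core = len(syllables) - clitic_count   # syllables that count for stress
    if core <= 1:
        return 1
    return core - 1                        # penultimate of the core

# "zrobiłbym": zro-bił-bym with one clitic syllable -> stress on "zro"
# "zrobilibyśmy": zro-bi-li-byś-my with two clitic syllables -> "bi"
```

These two examples reproduce the stress placements given in the paragraph above (first syllable and second syllable respectively).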
Some common word combinations are stressed as if they were a single word. This applies in particular to many combinations of preposition plus a personal pronoun, such as "do niej" ('to her'), "na nas" ('on us'), and "przeze mnie" ('because of me'), with the stress placed as if the combination were one word.
The Polish alphabet derives from the Latin script, but includes certain additional letters formed using diacritics. The Polish alphabet was one of three major forms of Latin-based orthography developed for Slavic languages, the others being Czech orthography and Croatian orthography, the last of these being a 19th-century invention trying to make a compromise between the first two. Kashubian uses a Polish-based system, Slovak uses a Czech-based system, and Slovene follows the Croatian one; the Sorbian languages blend the Polish and the Czech ones.
The diacritics used in the Polish alphabet are the "kreska" (graphically similar to the acute accent) in the letters "ć, ń, ó, ś, ź" and, as a diagonal stroke, through the letter "ł"; the "kropka" (superior dot) in the letter "ż"; and the "ogonek" ("little tail") in the letters "ą, ę". The letters "q, v, x" are used only in foreign words and names.
Polish orthography is largely phonemic—there is a consistent correspondence between letters (or digraphs and trigraphs) and phonemes (for exceptions see below). The letters of the alphabet and their normal phonemic values are listed in the following table.
The following digraphs and trigraphs are used:
Voiced consonant letters frequently come to represent voiceless sounds (as shown in the tables); this occurs at the end of words and in certain clusters, due to the neutralization mentioned in the "Phonology" section above. Occasionally also voiceless consonant letters can represent voiced sounds in clusters.
The spelling rule for the palatal sounds , , , and is as follows: before the vowel "i" the plain letters "s, z, c, dz, n" are used; before other vowels the combinations "si, zi, ci, dzi, ni" are used; when not followed by a vowel the diacritic forms "ś, ź, ć, dź, ń" are used. For example, the "s" in "siwy" ("grey-haired"), the "si" in "siarka" ("sulphur") and the "ś" in "święty" ("holy") all represent the sound . The exceptions to the above rule are certain loanwords from Latin, Italian, French, Russian or English—where "s" before "i" is pronounced as "s", e.g. "sinus", "sinologia", "do re mi fa sol la si do", "Saint-Simon i saint-simoniści", "Sierioża", "Siergiej", "Singapur", "singiel". In other loanwords the vowel "i" is changed to "y", e.g. "Syria", "Sybir", "synchronizacja", "Syrakuzy".
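The three spelling contexts for the palatal series can be expressed as a small lookup. This is an illustrative sketch only: it follows the rule as stated above and does not handle the loanword exceptions ("sinus", "Syria", etc.); the function name and interface are assumptions for the example.

```python
DIACRITIC = {"s": "ś", "z": "ź", "c": "ć", "dz": "dź", "n": "ń"}

def spell_palatal(plain, following):
    """Spell a Polish palatal consonant whose plain letter is `plain`
    ("s", "z", "c", "dz" or "n") given what follows it:
    - before "i": the plain letter alone
    - before another vowel: plain letter + "i"
    - otherwise (a consonant, or "" at end of word): the diacritic form."""
    if following == "i":
        return plain
    if following in set("aeouóyąę"):
        return plain + "i"
    return DIACRITIC[plain]
```

Applied to the examples in the text: `spell_palatal("s", "i")` gives "s" (as in "siwy"), `spell_palatal("s", "a")` gives "si" ("siarka"), and `spell_palatal("s", "w")` gives "ś" ("święty").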
The following table shows the correspondence between the sounds and spelling:
Digraphs and trigraphs are used:
Similar principles apply to , , and , except that these can only occur before vowels, so the spellings are "k, g, (c)h, l" before "i", and "ki, gi, (c)hi, li" otherwise. Most Polish speakers, however, do not consider palatalisation of "k, g, (c)h" or "l" as creating new sounds.
Except in the cases mentioned above, the letter "i", if followed by another vowel in the same word, usually represents /j/, and a palatalisation of the previous consonant is always assumed.
The letters "ą" and "ę", when followed by plosives and affricates, represent an oral vowel followed by a nasal consonant, rather than a nasal vowel. For example, "ą" in "dąb" ("oak") is pronounced /ɔm/, and "ę" in "tęcza" ("rainbow") is pronounced /ɛn/ (the nasal assimilates to the following consonant). When followed by "l" or "ł" (for example "przyjęli", "przyjęły"), "ę" is pronounced as just "e". When "ę" is at the end of the word, it is often pronounced as just /ɛ/.
Note that, depending on the word, the phoneme /x/ can be spelt "h" or "ch", the phoneme /ʐ/ can be spelt "ż" or "rz", and /u/ can be spelt "u" or "ó". In several cases the spelling determines the meaning, for example: "może" ("maybe") and "morze" ("sea").
In occasional words, letters that normally form a digraph are pronounced separately. For example, "rz" represents /rz/ (separate /r/ and /z/), not /ʐ/, in words like "zamarzać" ("freeze") and in the name "Tarzan".
Notice that doubled letters represent separate occurrences of the sound in question; for example "Anna" is pronounced /anna/ in Polish (though the double "n" is often pronounced as a lengthened single "n").
There are certain clusters where a written consonant would not be pronounced. For example, the "ł" in the words "mógł" ("could") and "jabłko" ("apple") might be omitted in ordinary speech, leading to the pronunciations "muk" and "japko" or "jabko".
Polish is a highly fusional language with relatively free word order, although the dominant arrangement is subject–verb–object (SVO). There are no articles, and subject pronouns are often dropped.
Nouns belong to one of three genders: masculine, feminine and neuter. A distinction is also made between animate and inanimate masculine nouns in the singular, and between masculine personal and non-masculine-personal nouns in the plural. There are seven cases: nominative, genitive, dative, accusative, instrumental, locative and vocative.
Adjectives agree with nouns in terms of gender, case and number. Attributive adjectives most commonly precede the noun, although in certain cases, especially in fixed phrases (like "język polski", "Polish (language)"), the noun may come first; the rule of thumb is that a generic descriptive adjective normally precedes the noun (e.g. "piękny kwiat", "beautiful flower") while a categorising adjective often follows it (e.g. "węgiel kamienny", "black coal"). Most short adjectives and their derived adverbs form comparatives and superlatives by inflection (the superlative is formed by prefixing "naj-" to the comparative).
Verbs are of imperfective or perfective aspect, often occurring in pairs. Imperfective verbs have a present tense, past tense, compound future tense (except for "być" "to be", which has a simple future "będę" etc., this in turn being used to form the compound future of other verbs), subjunctive/conditional (formed with the detachable particle "by"), imperatives, an infinitive, present participle, present gerund and past participle. Perfective verbs have a simple future tense (formed like the present tense of imperfective verbs), past tense, subjunctive/conditional, imperatives, infinitive, present gerund and past participle. Conjugated verb forms agree with their subject in terms of person, number, and (in the case of past tense and subjunctive/conditional forms) gender.
Passive-type constructions can be made using the auxiliary "być" or "zostać" ("become") with the passive participle. There is also an impersonal construction where the active verb is used (in third person singular) with no subject, but with the reflexive pronoun "się" present to indicate a general, unspecified subject (as in "pije się wódkę" "vodka is being drunk"—note that "wódka" appears in the accusative). A similar sentence type in the past tense uses the passive participle with the ending "-o", as in "widziano ludzi" ("people were seen"). As in other Slavic languages, there are also subjectless sentences formed using such words as "można" ("it is possible") together with an infinitive.
Yes-no questions (both direct and indirect) are formed by placing the word "czy" at the start. Negation uses the word "nie", before the verb or other item being negated; "nie" is still added before the verb even if the sentence also contains other negatives such as "nigdy" ("never") or "nic" ("nothing"), effectively creating a double negative.
Cardinal numbers have a complex system of inflection and agreement. Zero and cardinal numbers higher than five (except for those ending with the digit 2, 3 or 4 but not ending with 12, 13 or 14) govern the genitive case rather than the nominative or accusative. Special forms of numbers (collective numerals) are used with certain classes of noun, which include "dziecko" ("child") and exclusively plural nouns such as "drzwi" ("door").
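The case-government rule for cardinal numerals just described can be sketched as follows. This is a simplified illustration of that one rule only: it covers nominative-position noun phrases and ignores collective numerals and the other agreement complexities mentioned above.

```python
def noun_case_after(n):
    """Case of the counted noun after cardinal numeral n:
    nominative for 1 and for numerals ending in 2, 3 or 4
    (but not in 12, 13 or 14); genitive otherwise (0, 5-21, 25-31, ...)."""
    if n == 1:
        return "nominative"        # singular noun agrees with the numeral
    if n % 10 in (2, 3, 4) and n % 100 not in (12, 13, 14):
        return "nominative"        # plural agreement
    return "genitive"              # numeral governs the genitive
```

Thus "dwa koty" ('two cats', nominative plural) but "pięć kotów" ('five cats', genitive plural), and the teens 12-14 pattern with the genitive despite ending in 2-4.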
Polish has, over the centuries, borrowed a number of words from other languages. When borrowing, pronunciation was adapted to Polish phonemes and spelling was altered to match Polish orthography. In addition, word endings are liberally applied to almost any word to produce verbs, nouns, adjectives, as well as adding the appropriate endings for cases of nouns, adjectives, diminutives, double-diminutives, augmentatives, etc.
Depending on the historical period, borrowing has proceeded from various languages. Notable influences have been Latin (10th–18th centuries), Czech (10th and 14th–15th centuries), Italian (16th–17th centuries), French (17th–19th centuries), German (13–15th and 18th–20th centuries), Hungarian (15th–16th centuries) and Turkish (17th century). Currently, English words are the most common imports to Polish.
The Latin language, for a very long time the only official language of the Polish state, has had a great influence on Polish. Many Polish words were direct borrowings or calques (e.g. "rzeczpospolita" from "res publica") from Latin. Latin was known to a larger or smaller degree by most of the numerous szlachta in the 16th to 18th centuries (and it continued to be extensively taught at secondary schools until World War II). Apart from dozens of loanwords, its influence can also be seen in a number of verbatim Latin phrases in Polish literature (especially from the 19th century and earlier).
During the 12th and 13th centuries, Mongolian words were brought to the Polish language during wars with the armies of Genghis Khan and his descendants, e.g. "dzida" (spear) and "szereg" (a line or row).
Words from Czech, an important influence during the 10th and 14th–15th centuries include "sejm", "hańba" and "brama".
In 1518, the Polish king Sigismund I the Old married Bona Sforza, the niece of the Holy Roman emperor Maximilian, who introduced Italian cuisine to Poland, especially vegetables. Hence, words from Italian include "pomidor" from "pomodoro" (tomato), "kalafior" from "cavolfiore" (cauliflower), and "pomarańcza", a portmanteau from Italian "pomo" (pome) plus "arancio" (orange). A later word of Italian origin is "autostrada" (from Italian "autostrada", highway).
In the 18th century, with the rising prominence of France in Europe, French supplanted Latin as an important source of words. Some French borrowings also date from the Napoleonic era, when the Poles were enthusiastic supporters of Napoleon. Examples include "ekran" (from French "écran", screen), "abażur" ("abat-jour", lamp shade), "rekin" ("requin", shark), "meble" ("meuble", furniture), "bagaż" ("bagage", luggage), "walizka" ("valise", suitcase), "fotel" ("fauteuil", armchair), "plaża" ("plage", beach) and "koszmar" ("cauchemar", nightmare). Some place names have also been adapted from French, such as the Warsaw borough of Żoliborz ("joli bord" = beautiful riverside), as well as the town of Żyrardów (from the name Girard, with the Polish suffix -ów attached to refer to the founder of the town).
Many words were borrowed from German, owing to the sizable German population in Polish cities during medieval times. German words found in the Polish language are often connected with trade, the building industry, civic rights and city life. Some words were assimilated verbatim, for example "handel" (trade) and "dach" (roof); others are pronounced the same but differ in spelling, e.g. "Schnur"—"sznur" (cord). As a result of being neighbours with Germany, Polish has many German expressions which have become literally translated (calques). The regional dialects of Upper Silesia and Masuria (the part of former East Prussia now in Poland) have noticeably more German loanwords than other varieties.
The contacts with Ottoman Turkey in the 17th century brought many new words, some of them still in use, such as: "jar" ("yar" deep valley), "szaszłyk" ("şişlik" shish kebab), "filiżanka" ("fincan" cup), "arbuz" ("karpuz" watermelon), "dywan" ("divan" carpet), etc.
From the founding of the Kingdom of Poland in 1025 through the early years of the Polish-Lithuanian Commonwealth created in 1569, Poland was the most tolerant country in Europe toward Jews. Known as the "paradise for the Jews", it became a shelter for persecuted and expelled European Jewish communities and the home to the world's largest Jewish community of the time. As a result, many Polish words come from Yiddish, spoken by the large Polish Jewish population that existed until the Holocaust. Borrowed Yiddish words include "bachor" (an unruly boy or child), "bajzel" (slang for mess), "belfer" (slang for teacher), "ciuchy" (slang for clothing), "cymes" (slang for very tasty food), "geszeft" (slang for business), "kitel" (slang for apron), "machlojka" (slang for scam), "mamona" (money), "manele" (slang for oddments), "myszygene" (slang for lunatic), "pinda" (slang for girl, pejoratively), "plajta" (slang for bankruptcy), "rejwach" (noise), "szmal" (slang for money), and "trefny" (dodgy).
The mountain dialects of the Górale in southern Poland have quite a number of words borrowed from Hungarian (e.g. "baca", "gazda", "juhas", "hejnał") and from Romanian, as a result of historical contacts with Hungarian-dominated Slovakia and with Wallachian herders who travelled north along the Carpathians.
Thieves' slang includes words such as "kimać" (to sleep) or "majcher" (knife), of Greek origin, which were then considered unknown to the outside world.
In addition, Turkish and Tatar have exerted influence upon the vocabulary of war, names of oriental costumes etc. Russian borrowings began to make their way into Polish from the second half of the 19th century on.
Polish has also received a large number of English loanwords, particularly after World War II. Recent loanwords come primarily from the English language, mainly those that have Latin or Greek roots, for example "komputer" (computer) and "korupcja" (from 'corruption', but with its sense restricted to 'bribery'). Concatenation of parts of words (e.g. "auto-moto"), which is not native to Polish but common in English, is also sometimes used. When borrowing English words, Polish often changes their spelling. For example, the Latin suffix '-tio' corresponds to "-cja". To make the word plural, "-cja" becomes "-cje". Examples of this include "inauguracja" (inauguration), "dewastacja" (devastation), "recepcja" (reception), "konurbacja" (conurbation) and "konotacje" (connotations). Also, the digraph "qu" becomes "kw" ("kwadrant" = quadrant; "kworum" = quorum).
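These spelling adaptations are regular enough to sketch in code. The following is a minimal illustration (the function names and the rule set are our own, covering only the patterns mentioned above, not a complete model of Polish orthography):

```python
# Illustrative sketch of the spelling adaptations described above.
# Covers only these patterns; real borrowings involve many more changes.

def polonize(word: str) -> str:
    """Adapt a Latin/English borrowing using two common Polish rules."""
    word = word.replace("qu", "kw")   # quorum -> kworum, quadrant -> kwadrant
    if word.endswith("tion"):         # Latin -tio / English -tion -> -cja
        word = word[:-4] + "cja"
    return word

def pluralize(word: str) -> str:
    """Nouns ending in -cja take the plural ending -cje."""
    return word[:-1] + "e" if word.endswith("cja") else word

print(polonize("quorum"))        # kworum
print(polonize("inauguration"))  # inauguracja
print(pluralize("konotacja"))    # konotacje
```

Each rule is a plain string transformation, which reflects how systematically Polish re-spells such borrowings.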
The Polish language has influenced others. Particular influences appear in other Slavic languages and in German — due to their proximity and shared borders. Examples of loanwords include German "Grenze" (border), Dutch and Afrikaans "grens" from Polish "granica"; German "Peitzker" from Polish "piskorz" (weatherfish); German "Zobel", French "zibeline", Swedish "sobel", and English "sable" from Polish "soból"; and "ogonek" ("little tail") — the word describing a diacritic hook-sign added below some letters in various alphabets. "Szmata," a Polish, Slovak and Ruthenian word for "mop" or "rag", became part of Yiddish. The Polish language exerted significant lexical influence upon Ukrainian, particularly in the fields of abstract and technical terminology; for example, the Ukrainian word "panstvo" (country) is derived from Polish "państwo". The extent of Polish influence is particularly noticeable in Western Ukrainian dialects.
There is a substantial number of Polish words which officially became part of Yiddish, once the main language of European Jews. These include basic items, objects or terms such as a bread bun (Polish "bułka", Yiddish בולקע "bulke"), a fishing rod ("wędka", ווענטקע "ventke"), an oak ("dąb", דעמב "demb"), a meadow ("łąka", לאָנקע "lonke"), a moustache ("wąsy", וואָנצעס "vontses") and a bladder ("pęcherz", פּענכער "penkher").
Quite a few culinary loanwords exist in German and in other languages, some of which describe distinctive features of Polish cuisine. These include German and English "Quark" from "twaróg" (a kind of fresh cheese) and German "Gurke", English "gherkin" from "ogórek" (cucumber). The word "pierogi" (Polish dumplings) has spread internationally, as well as "pączki" (Polish donuts) and "kiełbasa" (sausage, e.g. "kolbaso" in Esperanto). As far as "pierogi" is concerned, the original Polish word is already plural (sing. "pieróg", plural "pierogi"; stem "pierog-", plural ending "-i"; NB. "o" becomes "ó" in a closed syllable, as here in the singular), yet it is commonly used with the English plural ending "-s" in Canada and the United States, "pierogis", thus making it a "double plural". A similar situation happened with the Polish loanword from English "czipsy" ("potato chips"): English "chips" is already plural in the original ("chip" + "-s"), yet it has obtained the Polish plural ending "-y".
The word "spruce" entered the English language from the Polish name of Prusy (a historical region, today part of Poland). It became "spruce" because in Polish, "z Prus", sounded like "spruce" in English (transl. "from Prussia") and was a generic term for commodities brought to England by Hanseatic merchants and because the tree was believed to have come from Polish Ducal Prussia. However, it can be argued that the word is actually derived from the Old French term "Pruce", meaning literally Prussia.
Pulp magazine
Pulp magazines (often referred to as "the pulps") were inexpensive fiction magazines that were published from 1896 to the late 1950s. The term "pulp" derives from the cheap wood pulp paper on which the magazines were printed. In contrast, magazines printed on higher-quality paper were called "glossies" or "slicks". The typical pulp magazine had 128 pages; it was 7 inches (18 cm) wide by 10 inches (25 cm) high, and 0.5 inches (1.3 cm) thick, with ragged, untrimmed edges.
The pulps gave rise to the term pulp fiction in reference to run-of-the-mill, low-quality literature. Pulps were the successors to the penny dreadfuls, dime novels, and short-fiction magazines of the 19th century. Although many respected writers wrote for pulps, the magazines were best known for their lurid, exploitative, and sensational subject matter. Modern superhero comic books are sometimes considered descendants of "hero pulps"; pulp magazines often featured illustrated novel-length stories of heroic characters, such as Flash Gordon, The Shadow, Doc Savage, and The Phantom Detective.
The first "pulp" was Frank Munsey's revamped "Argosy" magazine of 1896, with about 135,000 words (192 pages) per issue, on pulp paper with untrimmed edges, and no illustrations, even on the cover. The steam-powered printing press had been in widespread use for some time, enabling the boom in dime novels; prior to Munsey, however, no one had combined cheap printing, cheap paper and cheap authors in a package that provided affordable entertainment to young working-class people. In six years, "Argosy" went from a few thousand copies per month to over half a million.
Street & Smith, a dime novel and boys' weekly publisher, was next on the market. Seeing "Argosy"'s success, they launched "The Popular Magazine" in 1903, which they billed as the "biggest magazine in the world" by virtue of its being two pages (the interior sides of the front and back cover) longer than "Argosy". Due to differences in page layout, however, the magazine had substantially less text than "Argosy". "The Popular Magazine" did introduce color covers to pulp publishing, and the magazine began to take off when in 1905 the publishers acquired the rights to serialize "Ayesha", by H. Rider Haggard, a sequel to his popular novel "She". Haggard's Lost World genre influenced several key pulp writers, including Edgar Rice Burroughs, Robert E. Howard, Talbot Mundy and Abraham Merritt. In 1907, the cover price rose to 15 cents and 30 pages were added to each issue; along with establishing a stable of authors for each magazine, this change proved successful and circulation began to approach that of "Argosy". Street & Smith's next innovation was the introduction of specialized genre pulps, with each magazine focusing on a particular genre, such as detective stories, romance, etc.
At their peak of popularity in the 1920s–1940s, the most successful pulps could sell up to one million copies per issue. In 1934, the writer Frank Gruber said there were some 150 pulp titles. The most successful pulp magazines were "Argosy", "Adventure", "Blue Book" and "Short Stories", collectively described by some pulp historians as "The Big Four". Among the best-known other titles of this period were "Amazing Stories", "Black Mask", "Dime Detective", "Flying Aces", "Horror Stories", "Love Story Magazine", "Marvel Tales", "Oriental Stories", "Planet Stories", "Spicy Detective", "Startling Stories", "Thrilling Wonder Stories", "Unknown", "Weird Tales" and "Western Story Magazine".
During the economic hardships of the Great Depression, pulps provided affordable content to the masses, and were one of the primary forms of entertainment, along with film and radio.
Although pulp magazines were primarily an American phenomenon, there were also a number of British pulp magazines published between the Edwardian era and World War II. Notable UK pulps included "Pall Mall Magazine", "The Novel Magazine", "Cassell's Magazine", "The Story-Teller", "The Sovereign Magazine", "Hutchinson's Adventure-Story" and "Hutchinson's Mystery-Story". The German fantasy magazine "Der Orchideengarten" had a similar format to American pulp magazines, in that it was printed on rough pulp paper and heavily illustrated.
During the Second World War paper shortages had a serious impact on pulp production, starting a steady rise in costs and the decline of the pulps. Beginning with "Ellery Queen's Mystery Magazine" in 1941, pulp magazines began to switch to digest size: smaller, thicker magazines. In 1949, Street & Smith closed most of their pulp magazines in order to move upmarket and produce slicks.
Competition from comic books and paperback novels further eroded the pulps' market share, but it was the widespread expansion of television that sounded the death knell of the pulps. In a more affluent post-war America, the price gap compared to slick magazines was far less significant. In the 1950s, men's adventure magazines began to replace the pulps.
The 1957 liquidation of the American News Company, then the primary distributor of pulp magazines, has sometimes been taken as marking the end of the "pulp era"; by that date, many of the famous pulps of the previous generation, including "Black Mask," "The Shadow," "Doc Savage," and "Weird Tales," were defunct. Almost all of the few remaining pulp magazines are science fiction or mystery magazines now in formats similar to "digest size", such as "Analog Science Fiction and Fact" and "Ellery Queen's Mystery Magazine". The format is still in use for some lengthy serials, like the German science fiction weekly "Perry Rhodan" (over 3,000 issues as of 2019).
Over the course of their evolution, there were a huge number of pulp magazine titles; Harry Steeger of Popular Publications claimed that his company alone had published over 300, and at their peak they were publishing 42 titles per month. Many titles of course survived only briefly. While the most popular titles were monthly, many were bimonthly and some were quarterly.
The collapse of the pulp industry changed the landscape of publishing because pulps were the single largest sales outlet for short stories. Combined with the decrease in slick magazine fiction markets, writers attempting to support themselves by creating fiction switched to novels and book-length anthologies of shorter pieces. Some ex-pulp writers like Hugh B. Cave and Robert Leslie Bellem moved on to writing for television by the 1950s.
Pulp magazines often contained a wide variety of genre fiction, including, but not limited to:
The American Old West was a mainstay genre of early-20th-century novels as well as later pulp magazines, and it lasted longest of all the traditional pulps. In many ways, the later men's adventure magazines ("the sweats") were the replacement for the pulps.
Many classic science fiction and crime novels were originally serialized in pulp magazines such as "Weird Tales", "Amazing Stories", and "Black Mask".
While the majority of pulp magazines were anthology titles featuring many different authors, characters and settings, some of the most enduring magazines were those that featured a single recurring character. These were often referred to as "hero pulps" because the recurring character was almost always a larger-than-life hero in the mold of Doc Savage or The Shadow.
Popular pulp characters that headlined in their own magazines:
Popular pulp characters who appeared in anthology titles such as "All-Story" or "Weird Tales":
Pulp covers were printed in color on higher-quality (slick) paper. They were famous for their half-dressed damsels in distress, usually awaiting a rescuing hero. Cover art played a major part in the marketing of pulp magazines. The early pulp magazines could boast covers by some distinguished American artists; "The Popular Magazine" had covers by N.C. Wyeth, and Edgar Franklin Wittmack contributed cover art to "Argosy" and "Short Stories". Later, many artists specialized in creating covers mainly for the pulps; a number of the most successful cover artists became as popular as the authors featured on the interior pages. Among the most famous pulp artists were Walter Baumhofer, Earle K. Bergey, Margaret Brundage, Edd Cartier, Virgil Finlay, Frank R. Paul, Norman Saunders, Nick Eggenhofer (who specialized in Western illustrations), Hugh J. Ward, George Rozen, and Rudolph Belarski. Covers were important enough to sales that sometimes they would be designed first; authors would then be shown the cover art and asked to write a story to match.
Later pulps began to feature interior illustrations, depicting elements of the stories. The drawings were printed in black ink on the same cream-colored paper used for the text, and had to use specific techniques to avoid blotting on the coarse texture of the cheap pulp. Thus, fine lines and heavy detail were usually not an option. Shading was by crosshatching or pointillism, and even that had to be limited and coarse. Usually the art was black lines on the paper's background, but Finlay and a few others did some work that was primarily white lines against large dark areas.
Another way pulps kept costs down was by paying authors less than other markets; thus many eminent authors started out in the pulps before they were successful enough to sell to better-paying markets, and similarly, well-known authors whose careers were slumping or who wanted a few quick dollars could bolster their income with sales to pulps. Additionally, some of the earlier pulps solicited stories from amateurs who were quite happy to see their words in print and could thus be paid token amounts.
There were also career pulp writers, capable of turning out huge amounts of prose on a steady basis, often with the aid of dictation to stenographers, machines or typists. Before he became a novelist, Upton Sinclair was turning out at least 8,000 words per day seven days a week for the pulps, keeping two stenographers fully employed. Pulps would often have their authors use multiple pen names so that they could use multiple stories by the same person in one issue, or use a given author's stories in three or more successive issues, while still appearing to have varied content. One advantage pulps provided to authors was that they paid "upon acceptance" for material instead of on publication; since a story might be accepted months or even years before publication, to a working writer this was a crucial difference in cash flow.
Some pulp editors became known for cultivating good fiction and interesting features in their magazines. Preeminent pulp magazine editors included Arthur Sullivant Hoffman ("Adventure)", Robert H. Davis ("All-Story Weekly"), Harry E. Maule ("Short Stories"), Donald Kennicott ("Blue Book"), Joseph T. Shaw ("Black Mask"), Farnsworth Wright ("Weird Tales", "Oriental Stories"), John W. Campbell ("Astounding Science Fiction", "Unknown") and Daisy Bacon ("Love Story Magazine", "Detective Story Magazine").
Well-known authors who wrote for pulps include:
Sinclair Lewis, first American winner of the Nobel Prize in Literature, worked as an editor for "Adventure", writing filler paragraphs (brief facts or amusing anecdotes designed to fill small gaps in page layout), advertising copy and a few stories.
The term "pulp fiction" can also refer to mass market paperbacks since the 1950s. The Browne Popular Culture Library News noted:
Many of the paperback houses that contributed to the decline of the genre–Ace, Dell, Avon, among others–were actually started by pulp magazine publishers. They had the presses, the expertise, and the newsstand distribution networks which made the success of the mass-market paperback possible. These pulp-oriented paperback houses mined the old magazines for reprints. This kept pulp literature, if not pulp magazines, alive. "The Return of the Continental Op" reprints material first published in "Black Mask"; "Five Sinister Characters" contains stories first published in "Dime Detective"; and "The Pocket Book of Science Fiction" collects material from "Thrilling Wonder Stories", "Astounding Science Fiction" and "Amazing Stories". But note that mass market paperbacks are not pulps.
In 1992, Rich W. Harvey came out with a magazine called "Pulp Adventures" reprinting old classics. It came out regularly until 2001, and then started up again in 2014.
In 1994, Quentin Tarantino directed the film "Pulp Fiction". The working title of the film was "Black Mask", in homage to the pulp magazine of that name, and it embodied the seedy, violent, often crime-related spirit found in pulp magazines.
In 1997 C. Cazadessus Jr. launched PULPDOM, a continuation of his Hugo Award-winning ERB-dom which began in 1960. It ran for 75 issues and featured articles about the content and selected fiction from the pulps. It became PULPDOM ONLINE in 2013 and continues quarterly publication.
After the year 2000, several small independent publishers released magazines which published short fiction, either short stories or novel-length presentations, in the tradition of the pulp magazines of the early 20th century. These included "Blood 'N Thunder", "High Adventure" and a short-lived magazine which revived the title "Argosy". These specialist publications, printed in limited press runs, were pointedly not printed on the brittle, high-acid wood pulp paper of the old publications and were not mass market publications targeted at a wide audience. In 2004, Lost Continent Library published "Secret of the Amazon Queen" by E.A. Guest, their first contribution to a "New Pulp Era", featuring the hallmarks of pulp fiction for contemporary mature readers: violence, horror and sex. E.A. Guest was likened to a blend of pulp era icon Talbot Mundy and Stephen King by real-life explorer David Hatcher Childress.
In 2002, the tenth issue of "McSweeney's Quarterly" was guest edited by Michael Chabon. Published as "McSweeney's Mammoth Treasury of Thrilling Tales", it is a collection of "pulp fiction" stories written by such current well-known authors as Stephen King, Nick Hornby, Aimee Bender and Dave Eggers. Explaining his vision for the project, Chabon wrote in the introduction, "I think that we have forgotten how much fun reading a short story can be, and I hope that if nothing else, this treasury goes some small distance toward reminding us of that lost but fundamental truth."
The Scottish publisher DC Thomson publishes "My Weekly Compact Novel" every week. It is literally a pulp novel, though it does not fall into the hard-edged genre most associated with pulp fiction.
From 2006 through 2019, Anthony Tollin's imprint Sanctum Books reprinted all 182 "Doc Savage" pulp novels, all 24 of Paul Ernst's "Avenger" novels, the 14 "Whisperer" novels from the original pulp series, and all but three novels of the entire run of "The Shadow" (most of its publications featuring two novels in one book).
In 2010, Pro Se Press released three new pulp magazines "Fantasy & Fear", "Masked Gun Mystery" and "Peculiar Adventures". In 2011, they amalgamated the three titles into one magazine "Pro Se Presents" which came out regularly until Winter/Spring 2014.
Phoneme
In phonology and linguistics, a phoneme is a unit of sound that distinguishes one word from another in a particular language.
For example, in most dialects of English, with the notable exception of the West Midlands and the north-west of England, the sound patterns /sɪn/ ("sin") and /sɪŋ/ ("sing") are two separate words that are distinguished by the substitution of one phoneme, /n/, for another phoneme, /ŋ/. Two words like this that differ in meaning through the contrast of a single phoneme form a "minimal pair". If, in another language, any two sequences differing only by pronunciation of the final sounds [n] or [ŋ] are perceived as being the same in meaning, then these two sounds are interpreted as variants of a single phoneme in that language.
Phonemes that are established by the use of minimal pairs, such as "tap" vs "tab" or "pat" vs "bat", are written between slashes: /p/, /b/. To show pronunciation, linguists use square brackets: [pʰæt] (indicating an aspirated "p" in "pat").
Within linguistics, there are differing views as to exactly what phonemes are and how a given language should be analyzed in "phonemic" (or "phonematic") terms. However, a phoneme is generally regarded as an abstraction of a set (or equivalence class) of speech sounds ("phones") that are perceived as equivalent to each other in a given language. For example, the English "k" sounds in the words "kill" and "skill" are not identical (as described below), but they are distributional variants of a single phoneme /k/. Speech sounds that differ but do not create a meaningful change in the word are known as "allophones" of the same phoneme. Allophonic variation may be conditioned, in which case a certain phoneme is realized as a certain allophone in particular phonological environments, or it may otherwise be free, and may vary by speaker or by dialect. Therefore, phonemes are often considered to constitute an abstract underlying representation for segments of words, while speech sounds make up the corresponding phonetic realization, or the surface form.
Phonemes are conventionally placed between slashes in transcription, whereas speech sounds (phones) are placed between square brackets. Thus, /pʊʃ/ represents a sequence of three phonemes, /p/, /ʊ/, /ʃ/ (the word "push" in Standard English), and [pʰʊʃ] represents the phonetic sequence of sounds [pʰ] (aspirated "p"), [ʊ], [ʃ] (the usual pronunciation of "push"). This should not be confused with the similar convention of the use of angle brackets to enclose the units of orthography, graphemes. For example, ⟨f⟩ represents the written letter (grapheme) "f".
The symbols used for particular phonemes are often taken from the International Phonetic Alphabet (IPA), the same set of symbols most commonly used for phones. (For computer-typing purposes, systems such as X-SAMPA exist to represent IPA symbols using only ASCII characters.) However, descriptions of particular languages may use different conventional symbols to represent the phonemes of those languages. For languages whose writing systems employ the phonemic principle, ordinary letters may be used to denote phonemes, although this approach is often hampered by the complexity of the relationship between orthography and pronunciation (see below).
A phoneme is a sound or a group of different sounds perceived to have the same function by speakers of the language or dialect in question. An example is the English phoneme /k/, which occurs in words such as "cat", "kit", "scat", "skit". Although most native speakers do not notice this, in most English dialects, the "c/k" sounds in these words are not identical: in "kit", the sound is aspirated, but in "skill", it is unaspirated. The words, therefore, contain different "speech sounds", or "phones", transcribed [kʰ] for the aspirated form and [k] for the unaspirated one. These different sounds are nonetheless considered to belong to the same phoneme, because if a speaker used one instead of the other, the meaning of the word would not change: using the aspirated form [kʰ] in "skill" might sound odd, but the word would still be recognized. By contrast, some other sounds would cause a change in meaning if substituted: for example, substitution of the sound [t] would produce the different word "still", and that sound must therefore be considered to represent a different phoneme (the phoneme /t/).
The above shows that in English, [k] and [kʰ] are allophones of a single phoneme /k/. In some languages, however, [k] and [kʰ] are perceived by native speakers as different sounds, and substituting one for the other can change the meaning of a word. In those languages, therefore, the two sounds represent different phonemes. For example, in Icelandic, [kʰ] is the first sound of "kátur", meaning "cheerful", but [k] is the first sound of "gátur", meaning "riddles". Icelandic, therefore, has two separate phonemes /kʰ/ and /k/.
A pair of words like "kátur" and "gátur" (above) that differ only in one phone is called a minimal pair for the two alternative phones in question (in this case, [kʰ] and [k]). The existence of minimal pairs is a common test to decide whether two phones represent different phonemes or are allophones of the same phoneme.
To take another example, the minimal pair "tip" and "dip" illustrates that in English, [t] and [d] belong to separate phonemes, /t/ and /d/; since the two words have different meanings, English speakers must be conscious of the distinction between the two sounds.
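The minimal-pair test lends itself to a simple mechanical check. Below is a minimal sketch (our own illustration, not a standard linguistic tool): transcriptions are represented as lists of phone symbols rather than raw strings, since a single phone such as [kʰ] needs more than one character.

```python
# Minimal-pair check: same length, exactly one differing segment.
# Transcriptions are lists of phone symbols (e.g. ["kʰ", "ɪ", "t"]).

def is_minimal_pair(a: list[str], b: list[str]) -> bool:
    if len(a) != len(b):
        return False
    return sum(x != y for x, y in zip(a, b)) == 1

# "tip" vs "dip": differ only in the initial segment.
print(is_minimal_pair(["t", "ɪ", "p"], ["d", "ɪ", "p"]))  # True
# "cat" vs "dip": more than one difference, so not a minimal pair.
print(is_minimal_pair(["k", "æ", "t"], ["d", "ɪ", "p"]))  # False
```

Whether such a pair establishes two phonemes then depends on whether the differing segments contrast meaning for speakers, as the surrounding discussion explains.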
In other languages, however, including Korean, both sounds [t] and [d] occur, but no such minimal pair exists. The lack of minimal pairs distinguishing [t] and [d] in Korean provides evidence that they are allophones of a single phoneme /t/. The word /tata/ is pronounced [tada], for example. That is, when they hear this word, Korean speakers perceive the same sound in both the beginning and middle of the word, but English speakers perceive different sounds in these two locations.
Signed languages, such as American Sign Language (ASL), also have minimal pairs, differing only in (exactly) one of the signs' parameters: handshape, movement, location, palm orientation, and nonmanual signal or marker. A minimal pair may exist in the signed language if the basic sign stays the same, but one of the parameters changes.
However, the absence of minimal pairs for a given pair of phones does not always mean that they belong to the same phoneme: they may be so dissimilar phonetically that it is unlikely for speakers to perceive them as the same sound. For example, English has no minimal pair for the sounds [h] (as in "hat") and [ŋ] (as in "bang"), and the fact that they can be shown to be in complementary distribution could be used to argue for their being allophones of the same phoneme. However, they are so dissimilar phonetically that they are considered separate phonemes.
Phonologists have sometimes had recourse to "near minimal pairs" to show that speakers of the language perceive two sounds as significantly different even if no exact minimal pair exists in the lexicon. It is virtually impossible to find a minimal pair to distinguish English /ʃ/ from /ʒ/, yet it seems uncontroversial to claim that the two consonants are distinct phonemes. The two words "pressure" and "pleasure" can serve as a near minimal pair.
Besides segmental phonemes such as vowels and consonants, there are also suprasegmental features of pronunciation (such as tone and stress, syllable boundaries and other forms of juncture, nasalization and vowel harmony), which, in many languages, can change the meaning of words and so are phonemic.
"Phonemic stress" is encountered in languages such as English. For example, the word "invite" stressed on the second syllable is a verb, but when stressed on the first syllable (without changing any of the individual sounds), it becomes a noun. The position of the stress in the word affects the meaning, so a full phonemic specification (providing enough detail to enable the word to be pronounced unambiguously) would include indication of the position of the stress: /ɪnˈvaɪt/ for the verb, /ˈɪnvaɪt/ for the noun. In other languages, such as French, word stress cannot have this function (its position is generally predictable) and is therefore not phonemic (and is not usually indicated in dictionaries).
"Phonemic tones" are found in languages such as Mandarin Chinese, in which a given syllable can have five different tonal pronunciations:
Here, the character 媽 (pronounced "mā", high level pitch) means "mother"; 麻 ("má", rising pitch) means "hemp"; 馬 ("mǎ", falling then rising) means "horse"; 罵 ("mà", falling) means "scold", and 嗎 ("ma", neutral tone) is an interrogative particle. The tone "phonemes" in such languages are sometimes called "tonemes". Languages such as English do not have phonemic tone, although they use intonation for functions such as emphasis and attitude.
When a phoneme has more than one allophone, the one actually heard at a given occurrence of that phoneme may be dependent on the phonetic environment (surrounding sounds) – allophones which normally cannot appear in the same environment are said to be in complementary distribution. In other cases the choice of allophone may be dependent on the individual speaker or other unpredictable factors – such allophones are said to be in free variation.
The term "phonème" (from Ancient Greek φώνημα "phōnēma", "sound made, utterance, thing spoken, speech, language") was reportedly first used by A. Dufriche-Desgenettes in 1873, but it referred only to a speech sound. The term "phoneme" as an abstraction was developed by the Polish linguist Jan Niecisław Baudouin de Courtenay and his student Mikołaj Kruszewski during 1875–1895. The term used by these two was "fonema", the basic unit of what they called "psychophonetics". Daniel Jones became the first linguist in the western world to use the term "phoneme" in its current sense, employing the word in his article "The phonetic structure of the Sechuana Language". The concept of the phoneme was then elaborated in the works of Nikolai Trubetzkoy and others of the Prague School (during the years 1926–1935), and in those of structuralists like Ferdinand de Saussure, Edward Sapir, and Leonard Bloomfield. Some structuralists (though not Sapir) rejected the idea of a cognitive or psycholinguistic function for the phoneme.
Later, it was used and redefined in generative linguistics, most famously by Noam Chomsky and Morris Halle, and remains central to many accounts of the development of modern phonology. As a theoretical concept or model, though, it has been supplemented and even replaced by others.
Some linguists (such as Roman Jakobson and Morris Halle) proposed that phonemes may be further decomposable into features, such features being the true minimal constituents of language. Features overlap each other in time, as do suprasegmental phonemes in oral language and many phonemes in sign languages. Features could be characterized in different ways: Jakobson and colleagues defined them in acoustic terms, Chomsky and Halle used a predominantly articulatory basis, though retaining some acoustic features, while Ladefoged's system is a purely articulatory system apart from the use of the acoustic term 'sibilant'.
In the description of some languages, the term chroneme has been used to indicate contrastive length or "duration" of phonemes. In languages in which tones are phonemic, the tone phonemes may be called tonemes. Though not all scholars working on such languages use these terms, they are by no means obsolete.
By analogy with the phoneme, linguists have proposed other sorts of underlying objects, giving them names with the suffix "-eme", such as "morpheme" and "grapheme". These are sometimes called emic units. The latter term was first used by Kenneth Pike, who also generalized the concepts of emic and etic description (from "phonemic" and "phonetic" respectively) to applications outside linguistics.
Languages do not generally allow words or syllables to be built of any arbitrary sequences of phonemes; there are phonotactic restrictions on which sequences of phonemes are possible and in which environments certain phonemes can occur. Phonemes that are significantly limited by such restrictions may be called "restricted phonemes".
In English, examples of such restrictions include: /ŋ/ (as in "sing") occurs only at the end of a syllable, never at the beginning; and /h/ occurs only at the beginning of a syllable, never at the end.
Some phonotactic restrictions can alternatively be analyzed as cases of neutralization. See Neutralization and archiphonemes below, particularly the example of the occurrence of the three English nasals before stops.
Biuniqueness is a requirement of classic structuralist phonemics. It means that a given phone, wherever it occurs, must unambiguously be assigned to one and only one phoneme. In other words, the mapping between phones and phonemes is required to be many-to-one rather than many-to-many. The notion of biuniqueness was controversial among some pre-generative linguists and was prominently challenged by Morris Halle and Noam Chomsky in the late 1950s and early 1960s.
An example of the problems arising from the biuniqueness requirement is provided by the phenomenon of flapping in North American English. This may cause either /t/ or /d/ (in the appropriate environments) to be realized with the phone [ɾ] (an alveolar flap). For example, the same flap sound may be heard in the words "hitting" and "bidding", although it is clearly intended to realize the phoneme /t/ in the first word and /d/ in the second. This appears to contradict biuniqueness.
For further discussion of such cases, see the next section.
Phonemes that are contrastive in certain environments may not be contrastive in all environments. In the environments where they do not contrast, the contrast is said to be neutralized. In these positions it may become less clear which phoneme a given phone represents. Absolute neutralization is a phenomenon in which a segment of the underlying representation is not realized in any of its phonetic representations (surface forms). The term was introduced by Paul Kiparsky (1968), and contrasts with contextual neutralization where some phonemes are not contrastive in certain environments. Some phonologists prefer not to specify a unique phoneme in such cases, since to do so would mean providing redundant or even arbitrary information – instead they use the technique of underspecification. An archiphoneme is an object sometimes used to represent an underspecified phoneme.
An example of neutralization is provided by the Russian vowels /a/ and /o/. These phonemes are contrasting in stressed syllables, but in unstressed syllables the contrast is lost, since both are reduced to the same sound, usually [ɐ] (for details, see vowel reduction in Russian). In order to assign such an instance of [ɐ] to one of the phonemes /a/ and /o/, it is necessary to consider morphological factors (such as which of the vowels occurs in other forms of the words, or which inflectional pattern is followed). In some cases even this may not provide an unambiguous answer. A description using the approach of underspecification would not attempt to assign [ɐ] to a specific phoneme in some or all of these cases, although it might be assigned to an archiphoneme, written something like //A//, which reflects the two neutralized phonemes in this position.
A somewhat different example is found in English, with the three nasal phonemes /m, n, ŋ/. In word-final position these all contrast, as shown by the minimal triplet "sum" /sʌm/, "sun" /sʌn/, "sung" /sʌŋ/. However, before a stop such as /p, t, k/ (provided there is no morpheme boundary between them), only one of the nasals is possible in any given position: /m/ before /p/, /n/ before /t/ or /d/, and /ŋ/ before /k/, as in "limp, lint, link" (/lɪmp/, /lɪnt/, /lɪŋk/). The nasals are therefore not contrastive in these environments, and according to some theorists this makes it inappropriate to assign the nasal phones heard here to any one of the phonemes (even though, in this case, the phonetic evidence is unambiguous). Instead they may analyze these phones as belonging to a single archiphoneme, written something like //N//, and state the underlying representations of "limp, lint, link" to be //lɪNp//, //lɪNt//, //lɪNk//.
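The distribution just described is regular: the nasal always shares its place of articulation with the following stop. A minimal Python sketch of this assimilation rule (purely illustrative — the archiphoneme is written "N" here and the mapping is a simplified assumption, not part of any standard analysis tool):

```python
# Illustrative sketch of the English nasal place-assimilation rule:
# the archiphoneme (written "N" here) surfaces as the nasal that
# shares its place of articulation with the following stop.
PLACE_OF_STOP = {
    "p": "m",  # bilabial stop -> bilabial nasal, as in "limp"
    "b": "m",
    "t": "n",  # alveolar stop -> alveolar nasal, as in "lint"
    "d": "n",
    "k": "ŋ",  # velar stop -> velar nasal, as in "link"
    "g": "ŋ",
}

def realize_archiphoneme_N(following_stop: str) -> str:
    """Return the surface nasal for the archiphoneme before the given stop."""
    return PLACE_OF_STOP[following_stop]

# Underlying lɪNp, lɪNt, lɪNk surface as lɪmp, lɪnt, lɪŋk
for word in ("lɪNp", "lɪNt", "lɪNk"):
    i = word.index("N")
    print(word.replace("N", realize_archiphoneme_N(word[i + 1])))
```

Because the surface nasal is fully predictable from the following stop, specifying it underlyingly adds no information — which is exactly the motivation for the archiphoneme analysis.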
This latter type of analysis is often associated with Nikolai Trubetzkoy of the Prague school. Archiphonemes are often notated with a capital letter within double virgules or pipes, as with the examples //A// and //N// given above. Various other notations have also been used for the second of these.
Another example from English, but this time involving complete phonetic convergence as in the Russian example, is the flapping of /t/ and /d/ in some American English (described above under Biuniqueness). Here the words "betting" and "bedding" might both be pronounced with the flap [ɾ]. Under the generative grammar theory of linguistics, if a speaker applies such flapping consistently, morphological evidence (the pronunciation of the related forms "bet" and "bed", for example) would reveal which phoneme the flap represents, once it is known which morpheme is being used. However, other theorists would prefer not to make such a determination, and simply assign the flap in both cases to a single archiphoneme, written (for example) //D//.
Further mergers in English are plosives after /s/, where /p, t, k/ conflate with /b, d, ɡ/, as suggested by the alternative spellings "sketti" and "sghetti". That is, there is no particular reason to transcribe "spin" as /spɪn/ rather than as /sbɪn/, other than its historical development, and it might be less ambiguously transcribed //sBɪn//.
A morphophoneme is a theoretical unit at a deeper level of abstraction than traditional phonemes, and is taken to be a unit from which morphemes are built up. A morphophoneme within a morpheme can be expressed in different ways in different allomorphs of that morpheme (according to morphophonological rules). For example, the English plural morpheme "-s" appearing in words such as "cats" and "dogs" can be considered to be a single morphophoneme, which might be transcribed (for example) //z// or |z|, and which is realized phonemically as /s/ after most voiceless consonants (as in "cats") and as /z/ in other cases (as in "dogs").
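The realization rule for the plural morphophoneme can be sketched in a few lines of Python. This is an illustrative simplification only — in particular it ignores the /ɪz/ form used after sibilants, and the set of voiceless consonants below is an assumed sample rather than a complete inventory:

```python
# Illustrative sketch of the realization of the English plural
# morphophoneme (written "z" here): /s/ after most voiceless
# consonants, /z/ otherwise. VOICELESS is a simplified sample set.
VOICELESS = {"p", "t", "k", "f", "θ"}

def realize_plural(stem_final_phoneme: str) -> str:
    """Return the phonemic realization of the plural morphophoneme."""
    return "s" if stem_final_phoneme in VOICELESS else "z"

print(realize_plural("t"))  # "cats" ends in /t/ -> plural is /s/
print(realize_plural("g"))  # "dogs" ends in /g/ -> plural is /z/
```

The point of the morphophonemic analysis is that speakers need store only one underlying unit; the /s/~/z/ alternation follows mechanically from the voicing of the preceding sound.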
All known languages use only a small subset of the many possible sounds that the human speech organs can produce, and, because of allophony, the number of distinct phonemes will generally be smaller than the number of identifiably different sounds. Different languages vary considerably in the number of phonemes they have in their systems (although apparent variation may sometimes result from the different approaches taken by the linguists doing the analysis). The total phonemic inventory in languages varies from as few as 11 in Rotokas and Pirahã to as many as 141 in !Xũ.
The number of phonemically distinct vowels can be as low as two, as in Ubykh and Arrernte. At the other extreme, the Bantu language Ngwe has 14 vowel qualities, 12 of which may occur long or short, making 26 oral vowels, plus six nasalized vowels, long and short, making a total of 38 vowels; while !Xóõ achieves 31 pure vowels, not counting its additional variation by vowel length, by varying the phonation. As regards consonant phonemes, Puinave and the Papuan language Tauade each have just seven, and Rotokas has only six. !Xóõ, on the other hand, has somewhere around 77, and Ubykh 81. The English language uses a rather large set of 13 to 21 vowel phonemes, including diphthongs, although its 22 to 26 consonants are close to average.
Some languages, such as French, have no phonemic tone or stress, while Cantonese and several of the Kam–Sui languages have nine tones, and one of the Kru languages, Wobé, has been claimed to have 14, though this is disputed.
The most common vowel system consists of the five vowels /i/, /e/, /a/, /o/, /u/. The most common consonants are /p/, /t/, /k/, /m/, /n/. Relatively few languages lack any of these consonants, although it does happen: for example, Arabic lacks /p/, standard Hawaiian lacks /t/, Mohawk and Tlingit lack /p/ and /m/, Hupa lacks both /p/ and a simple /k/, colloquial Samoan lacks /t/ and /n/, while Rotokas and Quileute lack /m/ and /n/.
During the development of phoneme theory in the mid-20th century phonologists were concerned not only with the procedures and principles involved in producing a phonemic analysis of the sounds of a given language, but also with the reality or uniqueness of the phonemic solution. Some writers took the position expressed by Kenneth Pike: "There is only one accurate phonemic analysis for a given set of data", while others believed that different analyses, equally valid, could be made for the same data. Yuen Ren Chao (1934), in his article "The non-uniqueness of phonemic solutions of phonetic systems" stated "given the sounds of a language, there are usually more than one possible way of reducing them to a set of phonemes, and these different systems or solutions are not simply correct or incorrect, but may be regarded only as being good or bad for various purposes". The linguist F.W. Householder referred to this argument within linguistics as "God's Truth vs. hocus-pocus". Different analyses of the English vowel system may be used to illustrate this. The article English phonology states that "English has a particularly large number of vowel phonemes" and that "there are 20 vowel phonemes in Received Pronunciation, 14–16 in General American and 20–21 in Australian English"; the present article says that "the English language uses a rather large set of 13 to 21 vowel phonemes". Although these figures are often quoted as a scientific fact, they actually reflect just one of many possible analyses, and later in the English Phonology article an alternative analysis is suggested in which some diphthongs and long vowels may be interpreted as comprising a short vowel linked to either /j/ or /w/. The transcription system for British English (RP) devised by the phonetician Geoff Lindsey and used in the CUBE pronunciation dictionary also treats diphthongs as composed of a vowel plus /j/ or /w/.
The fullest exposition of this approach is found in Trager and Smith (1951), where all long vowels and diphthongs ("complex nuclei") are made up of a short vowel combined with either /j/, /w/ or /h/ (plus /r/ for rhotic accents), each thus comprising two phonemes: they wrote "The conclusion is inescapable that the complex nuclei consist each of two phonemes, one of the short vowels followed by one of three glides". The vowel normally transcribed /aɪ/ would instead be /aj/, /aʊ/ would be /aw/ and /ɑː/ would be /ah/. The consequence of this approach is that English could theoretically have only seven vowel phonemes, which might be symbolized /i/, /e/, /a/, /o/, /u/, /ʌ/ and /ə/, or even six if schwa were treated as an allophone of /ʌ/ or of other short vowels, a figure that would put English much closer to the average number of vowel phonemes in other languages.
In the same period there was disagreement about the correct basis for a phonemic analysis. The structuralist position was that the analysis should be made purely on the basis of the sound elements and their distribution, with no reference to extraneous factors such as grammar, morphology or the intuitions of the native speaker; this position is strongly associated with Leonard Bloomfield. Zellig Harris claimed that it is possible to discover the phonemes of a language purely by examining the distribution of phonetic segments. Referring to mentalistic definitions of the phoneme, Twaddell (1935) stated "Such a definition is invalid because (1) we have no right to guess about the linguistic workings of an inaccessible 'mind', and (2) we can secure no advantage from such guesses. The linguistic processes of the 'mind' as such are quite simply unobservable; and introspection about linguistic processes is notoriously a fire in a wooden stove." This approach was opposed to that of Edward Sapir, who gave an important role to native speakers' intuitions about where a particular sound or groups of sounds fitted into a pattern. Using the English [ŋ] as an example, Sapir argued that, despite the superficial appearance that this sound belongs to a group of nasal consonants, "no naive English-speaking person can be made to feel in his bones that it belongs to a single series with /m/ and /n/. ... It still "feels" like ŋg". The theory of generative phonology which emerged in the 1960s explicitly rejected the Structuralist approach to phonology and favoured the mentalistic or cognitive view of Sapir.
Phonemes are considered to be the basis for alphabetic writing systems. In such systems the written symbols (graphemes) represent, in principle, the phonemes of the language being written. This is most obviously the case when the alphabet was invented with a particular language in mind; for example, the Latin alphabet was devised for Classical Latin, and therefore the Latin of that period enjoyed a near one-to-one correspondence between phonemes and graphemes in most cases, though the devisers of the alphabet chose not to represent the phonemic effect of vowel length. However, because changes in the spoken language are often not accompanied by changes in the established orthography (as well as other reasons, including dialect differences, the effects of morphophonology on orthography, and the use of foreign spellings for some loanwords), the correspondence between spelling and pronunciation in a given language may be highly distorted; this is the case with English, for example.
The correspondence between symbols and phonemes in alphabetic writing systems is not necessarily a one-to-one correspondence. A phoneme might be represented by a combination of two or more letters (digraph, trigraph, etc.), like ⟨sh⟩ in English or ⟨sch⟩ in German (both representing the phoneme /ʃ/). Also a single letter may represent two phonemes, as in English ⟨x⟩ representing /ɡz/ or /ks/. There may also exist spelling/pronunciation rules (such as those for the pronunciation of ⟨c⟩ in Italian) that further complicate the correspondence of letters to phonemes, although they need not affect the ability to predict the pronunciation from the spelling and vice versa, provided the rules are known.
Sign language phonemes are bundles of articulation features. Stokoe was the first scholar to describe the phonemic system of ASL. He identified the bundles "tab" (elements of location, from Latin "tabula"), "dez" (the handshape, from "designator"), "sig" (the motion, from "signation"). Some researchers also discern "ori" (orientation), facial expression or mouthing. Just as with spoken languages, when features are combined, they create phonemes. As in spoken languages, sign languages have minimal pairs which differ in only one phoneme. For instance, the ASL signs for "father" and "mother" differ minimally with respect to location while handshape and movement are identical; location is thus contrastive.
Stokoe's terminology and notation system are no longer used by researchers to describe the phonemes of sign languages; William Stokoe's research, while still considered seminal, has been found not to characterize American Sign Language or other sign languages sufficiently. For instance, non-manual features are not included in Stokoe's classification. More sophisticated models of sign language phonology have since been proposed by Brentari, Sandler, and van der Kooij.
Cherology and chereme (from "hand") are synonyms of phonology and phoneme previously used in the study of sign languages. A "chereme", as the basic unit of signed communication, is functionally and psychologically equivalent to the phonemes of oral languages, and has been replaced by that term in the academic literature. "Cherology", as the study of "cheremes" in language, is thus equivalent to phonology. The terms are not in use anymore. Instead, the terms "phonology" and "phoneme" (or "distinctive feature") are used to stress the linguistic similarities between signed and spoken languages.
The terms were coined in 1960 by William Stokoe at Gallaudet University to describe sign languages as true and full languages. Once a controversial idea, the position is now universally accepted in linguistics. Stokoe's terminology, however, has been largely abandoned.
Primate
A primate (from Latin "primat-", from "primus": "prime, first rank") is a eutherian mammal constituting the taxonomic order Primates. Primates arose 85–55 million years ago first from small terrestrial mammals, which adapted to living in the trees of tropical forests: many primate characteristics represent adaptations to life in this challenging environment, including large brains, visual acuity, color vision, an altered shoulder girdle, and dextrous hands. Primates range in size from Madame Berthe's mouse lemur, which weighs about 30 g (1 oz), to the eastern gorilla, weighing over 200 kg (440 lb). There are 190–448 species of living primates, depending on which classification is used. New primate species continue to be discovered: over 25 species were described in the first decade of the 2000s, and eleven since 2010.
Primates are divided into two distinct suborders (see diagram under History of terminology). The first suborder is called strepsirrhines (from Greek 'twisted-nosed or twisted-nostrilled'), which contains lemurs, galagos, and lorisids. These primates can be found throughout Africa, Madagascar, India, and Southeast Asia. The colloquial names of species ending in "-nosed" refer to the rhinarium of the primate. The second suborder is called haplorhines, which contains "dry-nosed" primates (from Greek 'simple-nosed') in the tarsier, monkey, and ape clades. The last of these groups includes humans. Simians (the infraorder called Simiiformes from the Greek word simos, meaning 'flat-nosed') refer to monkeys and apes, which can be classified as Old World monkeys and apes under the infraorder of catarrhines (from Greek 'narrow nosed') or as New World monkeys under the infraorder of platyrrhines (from Greek 'flat-nosed'). Forty million years ago, simians from Africa migrated to South America, presumably by drifting on debris, which gave rise to the five families of New World monkeys. The remaining simians (catarrhines) split into apes (Hominoidea) and Old World monkeys (Cercopithecoidea) approximately twenty-five million years ago. Common species that are simians include the (Old World) baboons, macaques, gibbons, and great apes; and the (New World) capuchins, howlers and squirrel monkeys.
Primates have large brains (relative to body size) compared to other mammals, as well as an increased reliance on visual acuity at the expense of the sense of smell, which is the dominant sensory system in most mammals. These features are more developed in monkeys and apes, and noticeably less so in lorises and lemurs. Some primates are trichromats, with three independent channels for conveying color information. Except for apes and humans, primates have tails. Most primates also have opposable thumbs. Many species are sexually dimorphic; differences may include muscle mass, fat distribution, pelvic width, canine tooth size, hair distribution, and coloration. Primates have slower rates of development than other similarly sized mammals, reach maturity later, and have longer lifespans. Depending on the species, adults may live in solitude, in mated pairs, or in groups of up to hundreds of members. Some primates, including gorillas, humans, and baboons, are primarily terrestrial rather than arboreal, but all species have adaptations for climbing trees. Arboreal locomotion techniques used include leaping from tree to tree and swinging between branches of trees (brachiation); terrestrial locomotion techniques include walking on two limbs (bipedalism) and modified walking on four limbs (knuckle-walking).
Primates are among the most social of animals, forming pairs or family groups, uni-male harems, and multi-male/multi-female groups. Non-human primates have at least four types of social systems, many defined by the amount of movement by adolescent females between groups. Most primate species remain at least partly arboreal: the exceptions are humans, some other great apes, and baboons, who left the trees for the ground and now inhabit every continent.
Close interactions between humans and non-human primates (NHPs) can create opportunities for the transmission of zoonotic diseases, especially virus diseases, including herpes, measles, ebola, rabies, and hepatitis. Thousands of non-human primates are used in research around the world because of their psychological and physiological similarity to humans. About 60% of primate species are threatened with extinction. Common threats include deforestation, forest fragmentation, monkey drives, and primate hunting for use in medicines, as pets, and for food. Large-scale tropical forest clearing for agriculture most threatens primates.
The English name "primates" is derived from Old French or French "primat", from a noun use of Latin "primat-", from "primus" ("prime, first rank"). The name was given by Carl Linnaeus because he thought this the "highest" order of animals. The relationships among the different groups of primates were not clearly understood until relatively recently, so the commonly used terms are somewhat confused. For example, "ape" has been used either as an alternative for "monkey" or for any tailless, relatively human-like primate.
Sir Wilfrid Le Gros Clark was one of the primatologists who developed the idea of trends in primate evolution and the methodology of arranging the living members of an order into an "ascending series" leading to humans. Commonly used names for groups of primates such as "prosimians", "monkeys", "lesser apes", and "great apes" reflect this methodology. According to our current understanding of the evolutionary history of the primates, several of these groups are paraphyletic: a paraphyletic group is one which does "not" include all the descendants of the group's common ancestor.
In contrast with Clark's methodology, modern classifications typically identify (or name) only those groupings that are monophyletic; that is, such a named group includes "all" the descendants of the group's common ancestor.
The cladogram below shows one possible classification sequence of the living primates: groups that use common (traditional) names are shown on the right.
All groups with scientific names are monophyletic (that is, they are clades), and the sequence of scientific classification reflects the evolutionary history of the related lineages. Groups that are traditionally named are shown on the right; they form an "ascending series" (per Clark, see above), and several groups are paraphyletic:
Thus, the members of the two sets of groups, and hence names, do not match, which causes problems in relating scientific names to common (usually traditional) names. Consider the superfamily Hominoidea: In terms of the common names on the right, this group consists of apes and humans and there is no single common name for all the members of the group. One remedy is to create a new common name, in this case "hominoids". Another possibility is to expand the use of one of the traditional names. For example, in his 2005 book, the vertebrate palaeontologist Benton wrote, "The apes, Hominoidea, today include the gibbons and orang-utan ... the gorilla and chimpanzee ... and humans"; thereby Benton was using "apes" to mean "hominoids". In that case, the group heretofore called "apes" must now be identified as the "non-human apes".
There is currently no consensus as to whether to accept traditional (that is, common), but paraphyletic, names or to use monophyletic names only; or to use 'new' common names or adaptations of old ones. Both competing approaches can be found in biological sources, often in the same work, and sometimes by the same author. Thus, Benton defines "apes" to include humans, then he repeatedly uses "ape-like" to mean "like an ape rather than a human"; and when discussing the reaction of others to a new fossil he writes of "claims that "Orrorin" ... was an ape rather than a human".
A list of the families of the living primates is given below, together with one possible classification into ranks between order and family. Other classifications are also used. For example, an alternative classification of the living Strepsirrhini divides them into two infraorders, Lemuriformes and Lorisiformes.
Order Primates was established by Carl Linnaeus in 1758, in the tenth edition of his book "Systema Naturae", for the genera "Homo" (humans), "Simia" (other apes and monkeys), "Lemur" (prosimians) and "Vespertilio" (bats). In the first edition of the same book (1735), he had used the name Anthropomorpha for "Homo", "Simia" and "Bradypus" (sloths). In 1839, Henri Marie Ducrotay de Blainville, following Linnaeus and imitating his nomenclature, established the orders Secundates (including the suborders Chiroptera, Insectivora and Carnivora), Tertiates (or Glires) and Quaternates (including Gravigrada, Pachydermata and Ruminantia), but these new taxa were not accepted.
Before Anderson and Jones introduced the classification of Strepsirrhini and Haplorhini in 1984, (followed by McKenna and Bell's 1997 work "Classification of Mammals: Above the species level"), the Primates were divided into two suborders: Prosimii and Anthropoidea. Prosimii included all of the prosimians: Strepsirrhini plus the tarsiers. Anthropoidea contained all of the simians.
Order Primates is part of the clade Euarchontoglires, which is nested within the clade Eutheria of Class Mammalia. Recent molecular genetic research on primates, colugos, and treeshrews has shown that the two species of colugos are more closely related to primates than to treeshrews, even though treeshrews were at one time considered primates. These three orders make up the clade Euarchonta. The combination of this clade with the clade Glires (composed of Rodentia and Lagomorpha) forms the clade Euarchontoglires. Variously, both Euarchonta and Euarchontoglires are ranked as superorders. Some scientists consider Dermoptera to be a suborder of Primates and use the suborder Euprimates for the "true" primates.
The primate lineage is thought to go back at least near the Cretaceous–Paleogene boundary or around 63–74 million years ago (mya), even though the oldest known primates from the fossil record date to the Late Paleocene of Africa ("Altiatlasius") or the Paleocene-Eocene transition in the northern continents, c. 55 mya ("Cantius", "Donrussellia", "Altanius", "Plesiadapis" and "Teilhardina"). Other studies, including molecular clock studies, have estimated the origin of the primate branch to have been in the mid-Cretaceous period, around 85 mya.
By modern cladistic reckoning, the order Primates is monophyletic. The suborder Strepsirrhini, the "wet-nosed" primates, is generally thought to have split off from the primitive primate line about 63 mya, although earlier dates are also supported. The seven strepsirrhine families are the five related lemur families and the two remaining families that include the lorisids and the galagos. Older classification schemes wrap Lepilemuridae into Lemuridae and Galagidae into Lorisidae, yielding a four-one family distribution instead of five-two as presented here. During the Eocene, most of the northern continents were dominated by two groups, the adapiforms and the omomyids. The former are considered members of Strepsirrhini, but did not have a toothcomb like modern lemurs; recent analysis has demonstrated that "Darwinius masillae" fits into this grouping. The latter was closely related to tarsiers, monkeys, and apes. How these two groups relate to extant primates is unclear. Omomyids perished about 30 mya, while adapiforms survived until about 10 mya.
According to genetic studies, the lemurs of Madagascar diverged from the lorisoids approximately 75 mya. These studies, as well as chromosomal and molecular evidence, also show that lemurs are more closely related to each other than to other strepsirrhine primates. However, Madagascar split from Africa 160 mya and from India 90 mya. To account for these facts, a founding lemur population of a few individuals is thought to have reached Madagascar from Africa via a single rafting event between 50 and 80 mya. Other colonization options have been suggested, such as multiple colonizations from Africa and India, but none are supported by the genetic and molecular evidence.
Until recently, the aye-aye has been difficult to place within Strepsirrhini. Theories had been proposed that its family, Daubentoniidae, was either a lemuriform primate (meaning its ancestors split from the lemur line more recently than lemurs and lorises split) or a sister group to all the other strepsirrhines. In 2008, the aye-aye family was confirmed to be most closely related to the other Malagasy lemurs, likely having descended from the same ancestral population that colonized the island.
Suborder Haplorhini, the simple-nosed or "dry-nosed" primates, is composed of two sister clades. Prosimian tarsiers in the family Tarsiidae (monotypic in its own infraorder Tarsiiformes) represent the most basal division, originating about 58 mya. The earliest known haplorhine skeleton, that of the 55-million-year-old tarsier-like "Archicebus", was found in central China, supporting an already suspected Asian origin for the group. The infraorder Simiiformes (simian primates, consisting of monkeys and apes) emerged about 40 mya, possibly also in Asia; if so, they dispersed across the Tethys Sea from Asia to Africa soon afterwards. There are two simian clades, both parvorders: Catarrhini, which developed in Africa, consisting of Old World monkeys, humans and the other apes, and Platyrrhini, which developed in South America, consisting of New World monkeys. A third clade, which included the eosimiids, developed in Asia, but became extinct millions of years ago.
As in the case of lemurs, the origin of New World monkeys is unclear. Molecular studies of concatenated nuclear sequences have yielded a widely varying estimated date of divergence between platyrrhines and catarrhines, ranging from 33 to 70 mya, while studies based on mitochondrial sequences produce a narrower range of 35 to 43 mya. The anthropoid primates possibly traversed the Atlantic Ocean from Africa to South America during the Eocene by island hopping, facilitated by Atlantic Ocean ridges and a lowered sea level. Alternatively, a single rafting event may explain this transoceanic colonization. Due to continental drift, the Atlantic Ocean was not nearly as wide at the time as it is today. Research suggests that a small primate could have survived 13 days on a raft of vegetation. Given estimated current and wind speeds, this would have provided enough time to make the voyage between the continents.
Apes and monkeys spread from Africa into Europe and Asia starting in the Miocene. Soon after, the lorises and tarsiers made the same journey. The first hominin fossils were discovered in northern Africa and date back 5–8 mya. Old World monkeys disappeared from Europe about 1.8 mya. Molecular and fossil studies generally show that modern humans originated in Africa 100,000–200,000 years ago.
Although primates are well studied in comparison to other animal groups, several new species have been discovered recently, and genetic tests have revealed previously unrecognised species in known populations. "Primate Taxonomy" listed about 350 species of primates in 2001; the author, Colin Groves, increased that number to 376 for his contribution to the third edition of "Mammal Species of the World" (MSW3). However, publications since the taxonomy in MSW3 was compiled in 2003 have pushed the number to 424 species, or 658 including subspecies.
Primate hybrids usually arise in captivity, but there have also been examples in the wild. Hybridization occurs where two species' ranges overlap to form hybrid zones; hybrids may be created by humans when animals are placed in zoos, or may arise in the wild due to environmental pressures such as predation. Intergeneric hybridizations, hybrids of different genera, have also been found in the wild. Although they belong to genera that have been distinct for several million years, interbreeding still occurs between the gelada and the hamadryas baboon.
On 24 January 2018, scientists in China reported in the journal "Cell" the creation of two crab-eating macaque clones, named "Zhong Zhong" and "Hua Hua"; they were the first primates cloned using the complex DNA transfer method that produced "Dolly" the sheep.
The primate skull has a large, domed cranium, which is particularly prominent in anthropoids. The cranium protects the large brain, a distinguishing characteristic of this group. The endocranial volume (the volume within the skull) is nearly three times greater in humans than in the largest nonhuman primate, reflecting a larger brain size. The mean endocranial volume is 1,201 cm³ in humans, 469 cm³ in gorillas, 400 cm³ in chimpanzees and 397 cm³ in orangutans. The primary evolutionary trend of primates has been the elaboration of the brain, in particular the neocortex (a part of the cerebral cortex), which is involved with sensory perception, generation of motor commands, spatial reasoning, conscious thought and, in humans, language. While other mammals rely heavily on their sense of smell, the arboreal life of primates has led to a tactile, visually dominant sensory system, a reduction in the olfactory region of the brain and increasingly complex social behavior.
Primates have forward-facing eyes on the front of the skull; binocular vision allows accurate distance perception, useful for the brachiating ancestors of all great apes. A bony ridge above the eye sockets reinforces weaker bones in the face, which are put under strain during chewing. Strepsirrhines have a postorbital bar, a bone around the eye socket, to protect their eyes; in contrast, the higher primates, haplorhines, have evolved fully enclosed sockets.
Primates show an evolutionary trend towards a reduced snout. Technically, Old World monkeys are distinguished from New World monkeys by the structure of the nose, and from apes by the arrangement of their teeth. In New World monkeys, the nostrils face sideways; in Old World monkeys, they face downwards. Dental patterns in primates vary considerably; although some have lost most of their incisors, all retain at least one lower incisor. In most strepsirrhines, the lower incisors form a toothcomb, which is used in grooming and sometimes foraging. Old World monkeys have eight premolars, compared with 12 in New World monkeys. The Old World species are divided into apes and monkeys depending on the number of cusps on their molars: monkeys have four, apes have five, although humans may have four or five. The main hominid molar cusp (hypocone) evolved in early primate history, while the cusp of the corresponding primitive lower molar (paraconid) was lost. Prosimians are distinguished by their immobilized upper lips, the moist tip of their noses and forward-facing lower front teeth.
Primates generally have five digits on each limb (pentadactyly), with a characteristic type of keratin fingernail on the end of each finger and toe. The bottom sides of the hands and feet have sensitive pads on the fingertips. Most have opposable thumbs, a characteristic primate feature most developed in humans, though not limited to this order (opossums and koalas, for example, also have them). Thumbs allow some species to use tools. In primates, the combination of opposing thumbs, short fingernails (rather than claws) and long, inward-closing fingers is a relict of the ancestral practice of gripping branches, and has, in part, allowed some species to develop brachiation (swinging by the arms from tree limb to tree limb) as a significant means of locomotion. Prosimians have clawlike nails on the second toe of each foot, called toilet-claws, which they use for grooming.
The primate collar bone is a prominent element of the pectoral girdle; this allows the shoulder joint broad mobility. Compared to Old World monkeys, apes have more mobile shoulder joints and arms due to the dorsal position of the scapula, broad ribcages that are flatter front-to-back, and a shorter, less mobile spine with greatly reduced lower vertebrae, resulting in tail loss in some species. Prehensile tails are found in atelids, including the howler, spider, woolly spider and woolly monkeys, and in capuchins. Male primates have a pendulous penis and scrotal testes.
Sexual dimorphism is often exhibited in simians, though to a greater degree in Old World species (apes and some monkeys) than New World species. Recent studies involve comparing DNA to examine both the variation in the expression of the dimorphism among primates and the fundamental causes of sexual dimorphism. Primates usually have dimorphism in body mass and canine tooth size along with pelage and skin color. The dimorphism can be attributed to and affected by different factors, including mating system, size, habitat and diet.
Comparative analyses have generated a more complete understanding of the relationship between sexual selection, natural selection, and mating systems in primates. Studies have shown that dimorphism is the product of changes in both male and female traits. Ontogenetic scaling, where relative extension of a common growth trajectory occurs, may give some insight into the relationship between sexual dimorphism and growth patterns. Some evidence from the fossil record suggests that there was convergent evolution of dimorphism, and some extinct hominids probably had greater dimorphism than any living primate.
Primate species move by brachiation, bipedalism, leaping, arboreal and terrestrial quadrupedalism, climbing, knuckle-walking or by a combination of these methods. Several prosimians are primarily vertical clingers and leapers. These include many bushbabies, all indriids (i.e., sifakas, avahis and indris), sportive lemurs, and all tarsiers. Other prosimians are arboreal quadrupeds and climbers. Some are also terrestrial quadrupeds, while some are leapers. Most monkeys are both arboreal and terrestrial quadrupeds and climbers. Gibbons, muriquis and spider monkeys all brachiate extensively, with gibbons sometimes doing so in remarkably acrobatic fashion. Woolly monkeys also brachiate at times. Orangutans use a similar form of locomotion called quadramanous climbing, in which they use their arms and legs to carry their heavy bodies through the trees. Chimpanzees and gorillas knuckle walk, and can move bipedally for short distances. Although numerous species, such as australopithecines and early hominids, have exhibited fully bipedal locomotion, humans are the only extant species with this trait.
The evolution of color vision in primates is unique among most eutherian mammals. While the remote vertebrate ancestors of the primates possessed trichromatic (three-color) vision, the nocturnal, warm-blooded mammalian ancestors lost one of the three cone types in the retina during the Mesozoic era. Fish, reptiles and birds are therefore trichromatic or tetrachromatic, while all mammals, with the exception of some primates and marsupials, are dichromats or monochromats (totally color blind). Nocturnal primates, such as the night monkeys and bush babies, are often monochromatic. Catarrhines are routinely trichromatic due to a duplication of the red-green opsin gene at the base of their lineage, 30 to 40 million years ago. Platyrrhines, on the other hand, are trichromatic in a few cases only. Specifically, individual females must be heterozygous for two alleles of the opsin gene (red and green) at the same locus of the X chromosome. Males, therefore, can only be dichromatic, while females can be either dichromatic or trichromatic. Color vision in strepsirrhines is not as well understood; however, research indicates a range of color vision similar to that found in platyrrhines.
Like catarrhines, howler monkeys (a family of platyrrhines) show routine trichromacy that has been traced to an evolutionarily recent gene duplication. Howler monkeys are one of the most specialized leaf-eaters of the New World monkeys; fruits are not a major part of their diets, and the type of leaves they prefer to consume (young, nutritive, and digestible) are detectable only by a red-green signal. Field work exploring the dietary preferences of howler monkeys suggests that routine trichromacy was selected for by their environment.
Richard Wrangham stated that social systems of primates are best classified by the amount of movement by females occurring between groups. He proposed four categories:
Other systems are known to occur as well. For example, with howler monkeys and gorillas both the males and females typically transfer from their natal group on reaching sexual maturity, resulting in groups in which neither the males nor females are typically related. Some prosimians, colobine monkeys and callitrichid monkeys also use this system.
The transfer of females or males from their native group is likely an adaptation for avoiding inbreeding. An analysis of breeding records of captive primate colonies representing numerous different species indicates that the infant mortality of inbred young is generally higher than that of non-inbred young. This effect of inbreeding on infant mortality is probably largely a result of increased expression of deleterious recessive alleles (see Inbreeding depression).
Primatologist Jane Goodall, who studied in the Gombe Stream National Park, noted fission-fusion societies in chimpanzees. There is "fission" when the main group splits up to forage during the day, then "fusion" when the group returns at night to sleep as a group. This social structure can also be observed in the hamadryas baboon, spider monkeys and the bonobo. The gelada has a similar social structure in which many smaller groups come together to form temporary herds of up to 600 monkeys. Humans also form fission-fusion societies. In hunter-gatherer societies, humans form groups which are made up of several individuals that may split up to obtain different resources.
These social systems are affected by three main ecological factors: distribution of resources, group size, and predation. Within a social group there is a balance between cooperation and competition. Cooperative behaviors in many primates species include social grooming (removing skin parasites and cleaning wounds), food sharing, and collective defense against predators or of a territory. Aggressive behaviors often signal competition for food, sleeping sites or mates. Aggression is also used in establishing dominance hierarchies.
Several species of primates are known to associate in the wild. Some of these associations have been extensively studied. In the Tai Forest of Africa several species coordinate anti-predator behavior. These include the Diana monkey, Campbell's mona monkey, lesser spot-nosed monkey, western red colobus, king colobus (western black and white colobus), and sooty mangabey, which coordinate anti-predator alarm calls. Among the predators of these monkeys is the common chimpanzee.
The red-tailed monkey associates with several species, including the western red colobus, blue monkey, Wolf's mona monkey, mantled guereza, black crested mangabey and Allen's swamp monkey. Several of these species are preyed upon by the common chimpanzee.
In South America, squirrel monkeys associate with capuchin monkeys. This may have more to do with foraging benefits to the squirrel monkeys than anti-predation benefits.
Lemurs, lorises, tarsiers, and New World monkeys rely on olfactory signals for many aspects of social and reproductive behavior. Specialized glands are used to mark territories with pheromones, which are detected by the vomeronasal organ; this process forms a large part of the communication behavior of these primates. In Old World monkeys and apes this ability is mostly vestigial, having regressed as trichromatic eyes evolved to become the main sensory organ. Primates also use vocalizations, gestures, and facial expressions to convey psychological state. Facial musculature is very developed in primates, particularly in monkeys and apes, allowing for complex facial communication. Like humans, chimpanzees can distinguish the faces of familiar and unfamiliar individuals. Hand and arm gestures are also important forms of communication for great apes and a single gesture can have multiple functions.
The Philippine tarsier has a high-frequency limit of auditory sensitivity of approximately 91 kHz with a dominant frequency of 70 kHz. Such values are among the highest recorded for any terrestrial mammal, and represent a relatively extreme example of ultrasonic communication. For Philippine tarsiers, ultrasonic vocalizations might represent a private channel of communication that subverts detection by predators, prey and competitors, enhances energetic efficiency, or improves detection against low-frequency background noise. Male howler monkeys are among the loudest land mammals; their roars can be heard several kilometers away. Roars are produced by a modified larynx and an enlarged hyoid bone that contains an air sac. These calls are thought to relate to intergroup spacing and territorial protection, as well as possibly mate-guarding. The vervet monkey gives a distinct alarm call for each of at least four different predators, and the reactions of other monkeys vary according to the call. For example, if an alarm call signals a python, the monkeys climb into the trees, whereas the eagle alarm causes monkeys to seek a hiding place on the ground. Many non-human primates have the vocal anatomy to produce human speech but lack the proper brain wiring. Vowel-like vocal patterns have been recorded in baboons, which has implications for the origin of speech in humans.
The time range for the evolution of human language and/or its anatomical prerequisites extends, at least in principle, from the phylogenetic divergence of "Homo" (2.3 to 2.4 million years ago) from "Pan" (5 to 6 million years ago) to the emergence of full behavioral modernity some 50,000–150,000 years ago. Few dispute that "Australopithecus" probably lacked vocal communication significantly more sophisticated than that of great apes in general.
Primates have slower rates of development than other mammals. All primate infants are breastfed by their mothers (with the exception of some human cultures and various zoo raised primates which are fed formula) and rely on them for grooming and transportation. In some species, infants are protected and transported by males in the group, particularly males who may be their fathers. Other relatives of the infant, such as siblings and aunts, may participate in its care as well. Most primate mothers cease ovulation while breastfeeding an infant; once the infant is weaned the mother can reproduce again. This often leads to weaning conflict with infants who attempt to continue breastfeeding.
Infanticide is common in polygynous species such as gray langurs and gorillas. Adult males may kill dependent offspring that are not theirs so the female will return to estrus and thus they can sire offspring of their own. Social monogamy in some species may have evolved to combat this behavior. Promiscuity may also lessen the risk of infanticide since paternity becomes uncertain.
Primates have a longer juvenile period between weaning and sexual maturity than other mammals of similar size. Some primates, such as galagos and New World monkeys, use tree-holes for nesting, and park juveniles in leafy patches while foraging. Other primates follow a strategy of "riding", i.e. carrying individuals on the body while feeding. Adults may construct or use nesting sites, sometimes accompanied by juveniles, for the purpose of resting, a behavior which has developed secondarily in the great apes. During the juvenile period, primates are more susceptible than adults to predation and starvation; they gain experience in feeding and avoiding predators during this time. They learn social and fighting skills, often through playing. Primates, especially females, have longer lifespans than other similarly sized mammals; this may be partially due to their slower metabolisms. Late in life, female catarrhine primates appear to undergo a cessation of reproductive function known as menopause; other groups are less studied.
Primates exploit a variety of food sources. It has been said that many characteristics of modern primates, including humans, derive from an early ancestor's practice of taking most of its food from the tropical canopy. Most primates include fruit in their diets to obtain easily digested nutrients including carbohydrates and lipids for energy. Primates in the suborder Strepsirrhini (non-tarsier prosimians) are able to synthesize vitamin C, like most other mammals, while primates of the suborder Haplorrhini (tarsiers, monkeys and apes) have lost this ability, and require the vitamin in their diet.
Many primates have anatomical specializations that enable them to exploit particular foods, such as fruit, leaves, gum or insects. For example, leaf eaters such as howler monkeys, black-and-white colobuses and sportive lemurs have extended digestive tracts which enable them to absorb nutrients from leaves that can be difficult to digest. Marmosets, which are gum eaters, have strong incisor teeth, enabling them to open tree bark to get to the gum, and claws rather than nails, enabling them to cling to trees while feeding. The aye-aye combines rodent-like teeth with a long, thin middle finger to fill the same ecological niche as a woodpecker. It taps on trees to find insect larvae, then gnaws holes in the wood and inserts its elongated middle finger to pull the larvae out. Some species have additional specializations. For example, the grey-cheeked mangabey has thick enamel on its teeth, enabling it to open hard fruits and seeds that other monkeys cannot. The gelada is the only primate species that feeds primarily on grass.
Tarsiers are the only extant obligate carnivorous primates, exclusively eating insects, crustaceans, small vertebrates and snakes (including venomous species). Capuchin monkeys can exploit many different types of plant matter, including fruit, leaves, flowers, buds, nectar and seeds, but also eat insects and other invertebrates, bird eggs, and small vertebrates such as birds, lizards, squirrels and bats.
The common chimpanzee has an omnivorous, predominantly frugivorous diet. It prefers fruit above all other food items and seeks it out even when it is not abundant. It also eats leaves and leaf buds, seeds, blossoms, stems, pith, bark and resin. Insects and meat make up a small proportion of its diet, estimated as 2%. The meat consumption includes predation on other primate species, such as the western red colobus monkey. The bonobo is an omnivorous frugivore – the majority of its diet is fruit, but it supplements this with leaves, meat from small vertebrates, such as anomalures, flying squirrels and duikers, and invertebrates. In some instances, bonobos have been shown to consume lower-order primates.
Until the development of agriculture approximately 10,000 years ago, "Homo sapiens" employed a hunter-gatherer method as their sole means of food collection. This involved combining stationary food sources (such as fruits, grains, tubers, mushrooms, insect larvae and aquatic mollusks) with wild game, which had to be hunted and killed in order to be consumed. It has been proposed that humans have used fire to prepare and cook food since the time of "Homo erectus". The development of agriculture substantially altered the human diet. This change in diet may also have altered human biology: the spread of dairy farming provided a new and rich source of food, leading to the evolution of the ability to digest lactose in some adults.
Predators of primates include various species of carnivorans, birds of prey, reptiles, and other primates. Even gorillas have been recorded as prey. Predators of primates have diverse hunting strategies and as such, primates have evolved several different antipredator adaptations including crypsis, alarm calls and mobbing. Several species have separate alarm calls for different predators such as air-borne or ground-dwelling predators. Predation may have shaped group size in primates as species exposed to higher predation pressures appear to live in larger groups.
Primates have advanced cognitive abilities: some make tools and use them to acquire food and for social displays; some can perform tasks requiring cooperation, influence and rank; they are status conscious, manipulative and capable of deception; they can recognise kin and conspecifics; and they can learn to use symbols and understand aspects of human language including some relational syntax and concepts of number and numerical sequence. Research in primate cognition explores problem solving, memory, social interaction, a theory of mind, and numerical, spatial, and abstract concepts. Comparative studies show a trend towards higher intelligence going from prosimians to New World monkeys to Old World monkeys, and significantly higher average cognitive abilities in the great apes. However, there is a great deal of variation in each group (e.g., among New World monkeys, both spider and capuchin monkeys have scored highly by some measures), as well as in the results of different studies.
In 1960, Jane Goodall observed a chimpanzee poking pieces of grass into a termite mound and then raising the grass to his mouth. After he left, Goodall approached the mound and repeated the behaviour because she was unsure what the chimpanzee was doing. She found that the termites bit onto the grass with their jaws. The chimpanzee had been using the grass as a tool to "fish" or "dip" for termites. There are more limited reports of the closely related bonobo using tools in the wild; it has been claimed they rarely use tools in the wild although they use tools as readily as chimpanzees when in captivity. It has been reported that females, both chimpanzee and bonobo, use tools more avidly than males. Orangutans in Borneo scoop catfish out of small ponds. Anthropologist Anne Russon saw several animals on these forested islands learn on their own to jab at catfish with sticks, so that the panicked prey would flop out of ponds and into the orangutan's waiting hands. There are few reports of gorillas using tools in the wild. An adult female western lowland gorilla used a branch as a walking stick, apparently to test water depth and to aid her in crossing a pool of water. Another adult female used a detached trunk from a small shrub as a stabilizer during food gathering, and another used a log as a bridge.
The black-striped capuchin was the first non-ape primate for which tool use was documented in the wild; individuals were observed cracking nuts by placing them on a stone anvil and hitting them with another large stone. In Thailand and Myanmar, crab-eating macaques use stone tools to open nuts, oysters and other bivalves, and various types of sea snails. Chacma baboons use stones as weapons; stoning by these baboons is done from the rocky walls of the canyon where they sleep and retreat to when they are threatened. Stones are lifted with one hand and dropped over the side whereupon they tumble down the side of the cliff or fall directly to the canyon floor.
Although they have not been observed to use tools in the wild, lemurs in controlled settings have been shown to be capable of understanding the functional properties of the objects they had been trained to use as tools, performing as well as tool-using haplorhines.
Tool manufacture is much rarer than simple tool use and probably represents higher cognitive functioning. Soon after her initial discovery of tool use, Goodall observed other chimpanzees picking up leafy twigs, stripping off the leaves and using the stems to fish for insects. This change of a leafy twig into a tool was a major discovery. Prior to this, scientists thought that only humans manufactured and used tools, and that this ability was what separated humans from other animals. Chimpanzees have also been observed making "sponges" out of leaves and moss that suck up water. Sumatran orangutans have been observed making and using tools. They will break off a tree branch that is about 30 cm long, snap off the twigs, fray one end and then use the stick to dig in tree holes for termites. In the wild, mandrills have been observed to clean their ears with modified tools. Scientists filmed a large male mandrill at Chester Zoo (UK) stripping down a twig, apparently to make it narrower, and then using the modified stick to scrape dirt from underneath its toenails. Captive gorillas have made a variety of tools.
Non-human primates primarily live in the tropical latitudes of Africa, Asia, and the Americas. Species that live outside the tropics include the Japanese macaque, which lives on the Japanese islands of Honshū and Hokkaido; the Barbary macaque, which lives in North Africa; and several species of langur, which live in China. Primates tend to live in tropical rainforests but are also found in temperate forests, savannas, deserts, mountains and coastal areas. The number of primate species within tropical areas has been shown to be positively correlated to the amount of rainfall and the amount of rain forest area. Accounting for 25% to 40% of the fruit-eating animals (by weight) within tropical rainforests, primates play an important ecological role by dispersing the seeds of many tree species.
Primate habitats span a range of altitudes: the black snub-nosed monkey has been found living in the Hengduan Mountains at altitudes of 4,700 meters (15,400 ft), the mountain gorilla can be found at 4,200 meters (13,200 ft) crossing the Virunga Mountains, and the gelada has been found at high elevations in the Ethiopian Highlands. Some species interact with aquatic environments and may swim or even dive, including the proboscis monkey, De Brazza's monkey and Allen's swamp monkey. Some primates, such as the rhesus macaque and gray langurs, can exploit human-modified environments and even live in cities.
Close interactions between humans and non-human primates (NHPs) can create pathways for the transmission of zoonotic diseases. Viruses such as "Herpesviridae" (most notably Herpes B Virus), "Poxviridae", measles, ebola, rabies, the Marburg virus and viral hepatitis can be transmitted to humans; in some cases the viruses produce potentially fatal diseases in both humans and non-human primates.
Only humans are recognized as persons and protected in law by the United Nations Universal Declaration of Human Rights. The legal status of NHPs, on the other hand, is the subject of much debate, with organizations such as the Great Ape Project (GAP) campaigning to award at least some of them legal rights. In June 2008, Spain became the first country in the world to recognize the rights of some NHPs, when its parliament's cross-party environmental committee urged the country to comply with GAP's recommendations, which are that chimpanzees, bonobos, orangutans, and gorillas are not to be used for animal experiments.
Many species of NHP are kept as pets by humans; the Allied Effort to Save Other Primates (AESOP) estimates that around 15,000 NHPs live as exotic pets in the United States. The expanding Chinese middle class has increased demand for NHPs as exotic pets in recent years. Although NHP import for the pet trade was banned in the U.S. in 1975, smuggling still occurs along the United States – Mexico border, with prices ranging from US$3,000 for monkeys to $30,000 for apes.
Primates are used as model organisms in laboratories and have been used in space missions. They serve as service animals for disabled humans. Capuchin monkeys can be trained to assist quadriplegic humans; their intelligence, memory, and manual dexterity make them ideal helpers.
NHPs are kept in zoos around the globe. Historically, zoos were primarily a form of entertainment, but more recently have shifted their focus towards conservation, education and research. GAP does not insist that all NHPs should be released from zoos, primarily because captive-born primates lack the knowledge and experience to survive in the wild if released.
Thousands of non-human primates are used around the world in research because of their psychological and physiological similarity to humans. In particular, the brains and eyes of NHPs more closely parallel human anatomy than those of any other animals. NHPs are commonly used in preclinical trials, neuroscience, ophthalmology studies, and toxicity studies. Rhesus macaques are often used, as are other macaques, African green monkeys, chimpanzees, baboons, squirrel monkeys, and marmosets, both wild-caught and purpose-bred.
In 2005, GAP reported that 1,280 of the 3,100 NHPs living in captivity in the United States were used for experiments. In 2004, the European Union used around 10,000 NHPs in such experiments; in 2005 in Great Britain, 4,652 experiments were conducted on 3,115 NHPs. Governments of many nations have strict care requirements of NHPs kept in captivity. In the US, federal guidelines extensively regulate aspects of NHP housing, feeding, enrichment, and breeding. European groups such as the European Coalition to End Animal Experiments are seeking a ban on all NHP use in experiments as part of the European Union's review of animal testing legislation.
The International Union for Conservation of Nature (IUCN) lists more than a third of primates as critically endangered or vulnerable. About 60% of primate species are threatened with extinction, including: 87% of species in Madagascar, 73% in Asia, 37% in Africa, and 36% in South and Central America. Additionally, 75% of primate species have decreasing populations. Trade is regulated, as all species are listed by CITES in Appendix II, except 50 species and subspecies listed in Appendix I, which gain full protection from trade.
Common threats to primate species include deforestation, forest fragmentation, monkey drives (resulting from primate crop raiding), and primate hunting for use in medicines, as pets, and for food. Large-scale tropical forest clearing is widely regarded as the process that most threatens primates. More than 90% of primate species occur in tropical forests. The main cause of forest loss is clearing for agriculture, although commercial logging, subsistence harvesting of timber, mining, and dam construction also contribute to tropical forest destruction. In Indonesia large areas of lowland forest have been cleared to increase palm oil production, and one analysis of satellite imagery concluded that during 1998 and 1999 there was a loss of 1,000 Sumatran orangutans per year in the Leuser Ecosystem alone.
Primates with a large body size (over 5 kg) are at increased extinction risk due to their greater profitability to poachers compared to smaller primates. They reach sexual maturity later and have a longer period between births. Populations therefore recover more slowly after being depleted by poaching or the pet trade. Data for some African cities show that half of all protein consumed in urban areas comes from the bushmeat trade. Endangered primates such as guenons and the drill are hunted at rates far exceeding sustainable levels. This is due to their large body size, ease of transport and profitability per animal. As farming encroaches on forest habitats, primates feed on the crops, causing the farmers large economic losses. Primate crop raiding gives locals a negative impression of primates, hindering conservation efforts.
Madagascar, home to five endemic primate families, has experienced the greatest extinction of the recent past; since human settlement 1,500 years ago, at least eight classes and fifteen of the larger species have become extinct due to hunting and habitat destruction. Among the primates wiped out were "Archaeoindris" (a lemur larger than a silverback gorilla) and the families Palaeopropithecidae and Archaeolemuridae.
In Asia, Hinduism, Buddhism, and Islam prohibit eating primate meat; however, primates are still hunted for food. Some smaller traditional religions allow the consumption of primate meat. The pet trade and traditional medicine also increase demand for illegal hunting. The rhesus macaque, a model organism, was protected after excessive trapping threatened its numbers in the 1960s; the program was so effective that they are now viewed as a pest throughout their range.
In Central and South America forest fragmentation and hunting are the two main problems for primates. Large tracts of forest are now rare in Central America. This increases the amount of forest vulnerable to edge effects such as farmland encroachment, lower levels of humidity and a change in plant life. Movement restriction results in a greater amount of inbreeding, which can cause deleterious effects leading to a population bottleneck, whereby a significant percentage of the population is lost.
There are 21 critically endangered primates, 7 of which have remained on the IUCN's "The World's 25 Most Endangered Primates" list since the year 2000: the silky sifaka, Delacour's langur, the white-headed langur, the gray-shanked douc, the Tonkin snub-nosed monkey, the Cross River gorilla and the Sumatran orangutan. Miss Waldron's red colobus was recently declared extinct when no trace of the subspecies could be found from 1993 to 1999. A few hunters have found and killed individuals since then, but the subspecies' prospects remain bleak.
Politics
Politics (from Greek "politiká", 'affairs of the cities') is the set of activities that are associated with making decisions in groups, or other forms of power relations between individuals, such as the distribution of resources or status. The academic study of politics is referred to as political science.
Politics is a multifaceted word. It may be used positively in the context of a "political solution" which is compromising and non-violent, or descriptively as "the art or science of government", but also often carries a negative connotation. For example, abolitionist Wendell Phillips declared that "we do not play politics; anti-slavery is no half-jest with us." The concept has been defined in various ways, and different approaches have fundamentally differing views on whether it should be used extensively or limitedly, empirically or normatively, and on whether conflict or co-operation is more essential to it.
A variety of methods are deployed in politics, which include promoting one's own political views among people, negotiation with other political subjects, making laws, and exercising force, including warfare against adversaries. Politics is exercised on a wide range of social levels, from clans and tribes of traditional societies, through modern local governments, companies and institutions up to sovereign states, to the international level. In modern nation states, people often form political parties to represent their ideas. Members of a party often agree to take the same position on many issues and agree to support the same changes to law and the same leaders. An election is usually a competition between different parties.
A political system is a framework which defines acceptable political methods within a society. The history of political thought can be traced back to early antiquity, with seminal works such as Plato's "Republic", Aristotle's "Politics," Chanakya's "Arthashastra" and "Chanakya Niti" (3rd Century BCE), as well as the works of Confucius.
The English word "politics" derives from the Greek word "politiká", the name of Aristotle's classic work. In the mid-15th century, Aristotle's composition would be rendered in Early Modern English as "Polettiques", which would become "Politics" in Modern English.
The singular "politic" is first attested in English in 1430, coming from Middle French "politique", itself from "politicus", a Latinization of the Greek "politikos", from "politēs" ('citizen') and "polis" ('city').
According to Harold Lasswell, politics is "who gets what, when, how".
For David Easton, it is about "the authoritative allocation of values for a society".
To Vladimir Lenin, "politics is the most concentrated expression of economics".
Bernard Crick argued that "politics is a distinctive form of rule whereby people act together through institutionalized procedures to resolve differences, to conciliate diverse interests and values and to make public policies in the pursuit of common purposes".
Adrian Leftwich gives the definition that "Politics comprises all the activities of co-operation, negotiation and conflict within and between societies, whereby people go about organizing the use, production or distribution of human, natural and other resources in the course of the production and reproduction of their biological and social life".
Approaches to politics have been conceptualized in several ways.
Adrian Leftwich has differentiated views of politics based on how extensive or limited their perception of what counts as 'political' is. The extensive view sees politics as present across the sphere of human social relations, while the limited view restricts it to certain contexts. For example, in a more restrictive way, politics may be viewed as primarily about governance, while a feminist perspective could argue that sites which have been viewed traditionally as non-political, should indeed be viewed as political as well. This latter position is encapsulated in the slogan the personal is political, which disputes the distinction between private and public issues. Instead, politics may be defined by the use of power, as has been argued by Robert A. Dahl.
Some perspectives on politics view it empirically as an exercise of power, while others see it as a social function with a normative basis. This distinction has been called the difference between political "moralism" and political "realism." For moralists, politics is closely linked to ethics, and is at its extreme in utopian thinking. For example, according to Hannah Arendt, the view of Aristotle was that "to be political ... meant that everything was decided through words and persuasion and not through violence", while according to Bernard Crick "Politics is the way in which free societies are governed. Politics is politics and other forms of rule are something else". In contrast, for realists, represented by those such as Niccolò Machiavelli, Thomas Hobbes, and Harold Lasswell, politics is based on the use of power, irrespective of the ends being pursued.
Agonism argues that politics essentially comes down to conflict between conflicting interests. Political scientist Elmer Schattschneider argued that "at the root of all politics is the universal language of conflict", while for Carl Schmitt the essence of politics is the distinction of 'friend' from 'foe'. This is in direct contrast to the more co-operative views of politics by Aristotle and Crick. However, a more mixed view between these extremes is provided by the Irish author Michael Laver, who noted that "Politics is about the characteristic blend of conflict and co-operation that can be found so often in human interactions. Pure conflict is war. Pure co-operation is true love. Politics is a mixture of both."
The history of politics spans human history and is not limited to modern institutions of government.
Frans de Waal argued that chimpanzees already engage in politics through "social manipulation to secure and maintain influential positions". Early human forms of social organization—bands and tribes—lacked centralized political structures. These are sometimes referred to as stateless societies.
There are a number of different theories and hypotheses regarding early state formation that seek generalizations to explain why the state developed in some places but not others. Other scholars believe that generalizations are unhelpful and that each case of early state formation should be treated on its own.
Voluntary theories contend that diverse groups of people came together to form states as a result of some shared rational interest. The theories largely focus on the development of agriculture, and the population and organizational pressure that followed and resulted in state formation. One of the most prominent theories of early and primary state formation is the "hydraulic hypothesis", which contends that the state was a result of the need to build and maintain large-scale irrigation projects.
Conflict theories of state formation regard conflict and dominance of some population over another population as key to the formation of states. In contrast with voluntary theories, these theories hold that people do not voluntarily agree to create a state to maximize benefits, but that states form due to some form of oppression by one group over others.
Some theories in turn argue that warfare was critical for state formation.
In ancient history, civilizations did not have definite boundaries as states have today, and their borders could be more accurately described as frontiers. Early dynastic Sumer and early dynastic Egypt were the first civilizations to define their borders. Moreover, up to the twentieth century, many people lived in non-state societies. These range from relatively egalitarian bands and tribes to complex and highly stratified chiefdoms.
The first states of sorts were those of early dynastic Sumer and early dynastic Egypt, which arose from the Uruk period and Predynastic Egypt respectively, in approximately 3000 BCE. Early dynastic Egypt was based around the Nile River in the north-east of Africa, the kingdom's boundaries being based around the Nile and stretching to areas where oases existed. Early dynastic Sumer was located in southern Mesopotamia with its borders extending from the Persian Gulf to parts of the Euphrates and Tigris rivers.
Although state-forms existed before the rise of the Ancient Greek empire, the Greeks were the first people known to have explicitly formulated a political philosophy of the state, and to have rationally analyzed political institutions. Prior to this, states were described and justified in terms of religious myths.
Several important political innovations of classical antiquity came from the Greek city-states and the Roman Republic. The Greek city-states before the 4th century granted citizenship rights to their free population, and in Athens these rights were combined with a directly democratic form of government that was to have a long afterlife in political thought and history.
The Peace of Westphalia (1648) is considered by political scientists to be the beginning of the modern international system, in which external powers should avoid interfering in another country's domestic affairs. The principle of non-interference in other countries' domestic affairs was laid out in the mid-18th century by Swiss jurist Emer de Vattel. States became the primary institutional agents in an interstate system of relations. The Peace of Westphalia is said to have ended attempts to impose supranational authority on European states. The "Westphalian" doctrine of states as independent agents was bolstered by the rise in 19th century thought of nationalism, under which legitimate states were assumed to correspond to "nations"—groups of people united by language and culture.
In Europe, during the 18th century, the classic non-national states were the "multinational" empires: the Austrian Empire, Kingdom of France, Kingdom of Hungary, the Russian Empire, the Spanish Empire, the Ottoman Empire, and the British Empire. Such empires also existed in Asia, Africa and the Americas. In the Muslim world, immediately after Muhammad's death in 632, Caliphates were established which developed into multi-ethnic trans-national empires. The multinational empire was an absolute monarchy ruled by a king, emperor or sultan. The population belonged to many ethnic groups, and they spoke many languages. The empire was dominated by one ethnic group, and their language was usually the language of public administration. The ruling dynasty was usually, but not always, from that group. Some of the smaller European states were not so ethnically diverse, but were also dynastic states, ruled by a royal house. A few of the smaller states survived, such as the independent principalities of Liechtenstein, Andorra, Monaco, and the republic of San Marino.
Most theories see the nation state as a 19th-century European phenomenon, facilitated by developments such as state-mandated education, mass literacy and mass media. However, historians also note the early emergence of a relatively unified state and identity in Portugal and the Dutch Republic. Scholars such as Steven Weber, David Woodward, Michel Foucault and Jeremy Black have advanced the hypothesis that the nation state did not arise out of political ingenuity or some unknown, undetermined source, nor was it an accident of history or political invention; rather, it is an inadvertent byproduct of 15th-century intellectual discoveries in political economy, capitalism, mercantilism, political geography, and geography, combined with cartography and advances in map-making technologies.
Some nation states, such as Germany and Italy, came into existence at least partly as a result of political campaigns by nationalists, during the 19th century. In both cases, the territory was previously divided among other states, some of them very small. Liberal ideas of free trade played a role in German unification, which was preceded by a customs union, the Zollverein. National self-determination was a key aspect of United States President Woodrow Wilson's Fourteen points, leading to the dissolution of the Austro-Hungarian Empire and the Ottoman Empire after the First World War, while the Russian Empire became the Soviet Union after the Russian Civil War. Decolonization led to the creation of new nation states in place of multinational empires in the third world.
Political globalization began in the 20th century through intergovernmental organizations and supranational unions. The League of Nations was founded after World War I, and after World War II it was replaced by the United Nations. Various international treaties have been signed through it. Regional integration has been pursued by the African Union, ASEAN, the European Union, and Mercosur. Other international political institutions include the International Criminal Court, the International Monetary Fund, and the World Trade Organization.
The study of politics is called political science, or politology. It comprises numerous subfields, including comparative politics, political economy, international relations, political philosophy, public administration, public policy, and political methodology. Furthermore, political science is related to, and draws upon, the fields of economics, law, sociology, history, philosophy, geography, psychology/psychiatry, anthropology and neurosciences.
Comparative politics compares and teaches the different types of constitutions, political actors, legislatures and associated fields, all of them from an intrastate perspective. International relations deals with the interaction between nation-states as well as intergovernmental and transnational organizations. Political philosophy is more concerned with contributions of various classical and contemporary thinkers and philosophers.
Political science is methodologically diverse and appropriates many methods originating in psychology, social research and cognitive neuroscience. Approaches include positivism, interpretivism, rational choice theory, behavioralism, structuralism, post-structuralism, realism, institutionalism, and pluralism. Political science, as one of the social sciences, uses methods and techniques that relate to the kinds of inquiries sought: primary sources such as historical documents and official records, secondary sources such as scholarly journal articles, survey research, statistical analysis, case studies, experimental research, and model building.
The political system defines the process for making official government decisions. It is usually compared to the legal system, economic system, cultural system, and other social systems. According to David Easton, "A political system can be designated as the interactions through which values are authoritatively allocated for a society". Each political system is embedded in a society with its own political culture, and they in turn shape their societies through public policy. The interactions between different political systems are the basis for global politics.
Forms of government can be classified in several ways. The source of power determines the difference between democracies, oligarchies, and autocracies. In terms of the structure of power, there are monarchies (including constitutional monarchies) and republics (usually presidential, semi-presidential, or parliamentary). In terms of level of vertical integration, they can be divided into (from least to most integrated) confederations, federations, and unitary states. The separation of powers describes the degree of horizontal integration between the legislature, the executive, the judiciary, and other independent institutions.
In a democracy, political legitimacy is based on popular sovereignty. Forms of democracy include representative democracy, direct democracy, and demarchy. These are separated by the way decisions are made, whether by elected representatives, referenda, or by citizen juries. Democracies can be either republics or constitutional monarchies.
Oligarchy is a power structure where a minority rules. These may be in the form of anocracy, aristocracy, ergatocracy, geniocracy, gerontocracy, kakistocracy, kleptocracy, meritocracy, noocracy, particracy, plutocracy, stratocracy, technocracy, theocracy or timocracy.
Autocracies are either dictatorships (including military dictatorships) or absolute monarchies.
A federation (also known as a federal state) is a political entity characterized by a union of partially self-governing provinces, states, or other regions under a central federal government (federalism). In a federation, the self-governing status of the component states, as well as the division of power between them and the central government, is typically constitutionally entrenched and may not be altered by a unilateral decision of either party, the states or the federal political body. Federations were formed first in Switzerland, then in the United States in 1776, in Canada in 1867, in Germany in 1871, and in Australia in 1901. Compared to a federation, a confederation has less centralized power.
All the above forms of government are variations of the same basic polity, the sovereign state. The state has been defined by Max Weber as a political entity that holds a monopoly on the legitimate use of violence within its territory, while the Montevideo Convention holds that states need to have a defined territory; a permanent population; a government; and a capacity to enter into international relations.
A stateless society is a society that is not governed by a state. In stateless societies, there is little concentration of authority; most positions of authority that do exist are very limited in power and are generally not permanently held positions; and social bodies that resolve disputes through predefined rules tend to be small. Stateless societies are highly variable in economic organization and cultural practices.
While stateless societies were the norm in human prehistory, few stateless societies exist today; almost the entire global population resides within the jurisdiction of a sovereign state. In some regions nominal state authorities may be very weak and wield little or no actual power. Over the course of history most stateless peoples have been integrated into the state-based societies around them.
Some political philosophies consider the state undesirable, and thus consider the formation of a stateless society a goal to be achieved. A central tenet of anarchism is the advocacy of society without states. The type of society sought for varies significantly between anarchist schools of thought, ranging from extreme individualism to complete collectivism. In Marxism, Marx's theory of the state considers that in a post-capitalist society the state, an undesirable institution, would be unnecessary and wither away. A related concept is that of stateless communism, a phrase sometimes used to describe Marx's anticipated post-capitalist society.
Constitutions are written documents that specify and limit the powers of the different branches of government. Although a constitution is a written document, there is also an unwritten constitution, which is continually being written by the legislative and judiciary branches of government; this is just one of those cases in which the nature of the circumstances determines the form of government that is most appropriate. England set the fashion of written constitutions during the Civil War, but abandoned them after the Restoration; they were taken up later by the American colonies after their emancipation, then by France after the Revolution, and subsequently by the rest of Europe, including the European colonies.
Constitutions often set out separation of powers, dividing the government into the executive, the legislature, and the judiciary (together referred to as the trias politica), in order to achieve checks and balances within the state. Additional independent branches may also be created, including civil service commissions, election commissions, and supreme audit institutions.
Political culture describes how culture impacts politics. Every political system is embedded in a particular political culture. Lucian Pye's definition is that "Political culture is the set of attitudes, beliefs, and sentiments, which give order and meaning to a political process and which provide the underlying assumptions and rules that govern behavior in the political system".
Trust is a major factor in political culture, as its level determines the capacity of the state to function. Postmaterialism is the degree to which a political culture is concerned with issues which are not of immediate physical or material concern, such as human rights and environmentalism. Religion has also an impact on political culture.
Macropolitics describes political issues which affect the entire political system (e.g. the nation-state) or which relate to interactions between political systems (e.g. international relations).
Global (or world) politics covers all aspects of politics which affect multiple political systems, in practice meaning any political phenomenon crossing national borders. This may include cities, nation-states, multinational corporations, non-governmental organizations, or international organizations. An important element is international relations. The relations between nation-states may be peaceful, when they are conducted through diplomacy, or violent, which is described as war. States which are able to exert strong international influence are referred to as superpowers, while less powerful ones may be called regional or middle powers. The international system of power is called the world order, and it is affected by the balance of power which affects the degree of polarity in the system. Emerging powers are potentially destabilizing to it, especially if they display revanchism or irredentism.
Politics inside the limits of political systems, which in contemporary context correspond to national borders, is referred to as domestic politics. This includes most forms of public policy, such as social policy, economic policy, or law enforcement, which are executed by the state bureaucracy.
Mesopolitics describes the politics of intermediary structures within the political system, such as national political parties or movements.
A political party is a political organization that typically seeks to attain and maintain political power within government, usually by participating in political campaigns, educational outreach or protest actions. Parties often espouse an expressed ideology or vision bolstered by a written platform with specific goals, forming a coalition among disparate interests.
Political parties within a particular political system together form the party system. This may be a multiparty system, a two-party system, a dominant-party system, or a one-party system, depending on the level of pluralism. This is affected by characteristics of the political system, including its electoral system. According to Duverger's law, first-past-the-post systems are likely to lead to two-party systems, while proportional representation systems are more likely to create a multiparty system.
Micropolitics describes the actions of individual actors within the political system. This is often described as political participation.
Political participation may take many forms.
Political corruption is the use of powers by government officials or their network contacts for illegitimate private gain. Forms of political corruption include bribery, cronyism, nepotism, and political patronage. Forms of political patronage in turn includes clientelism, earmarking, political machines, pork barreling, slush funds, and spoils systems.
A political system which operates for corrupt ends may be called a political machine.
When corruption is embedded in political culture, this may be referred to as patrimonialism or neopatrimonialism.
A form of government which is built on corruption is called a kleptocracy ("rule of thieves").
Political conflict entails the use of political violence to achieve political ends. As noted by Carl von Clausewitz, "War is a mere continuation of politics by other means". Beyond just inter-state warfare, this may include civil war, wars of national liberation, or asymmetric warfare such as guerrilla war or terrorism. When a political system is overthrown, the event is called a revolution, which may be only political revolution if it does not go further, or a social revolution if the social system is also radically altered. However, these may also be nonviolent revolutions.
Democracy is a system of processing conflicts in which outcomes depend on what participants do, but no single force controls what occurs and its outcomes. The uncertainty of outcomes is inherent in democracy. Democracy makes all forces struggle repeatedly to realize their interests and devolves power from groups of people to sets of rules.
Among modern political theorists, there are three contending conceptions of democracy: "aggregative democracy", "deliberative democracy", and "radical democracy".
The theory of "aggregative democracy" claims that the aim of the democratic processes is to solicit citizens' preferences and aggregate them together to determine what social policies society should adopt. Therefore, proponents of this view hold that democratic participation should primarily focus on voting, where the policy with the most votes gets implemented.
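As an illustrative sketch only (the names and ballots are invented for the example), the aggregative idea that "the policy with the most votes gets implemented" amounts to a plurality rule over citizens' stated preferences:

```python
from collections import Counter

def plurality_winner(ballots):
    """Aggregate citizens' first preferences; the most-voted policy wins."""
    tally = Counter(ballots)          # count how many citizens prefer each policy
    return tally.most_common(1)[0][0]  # policy with the highest vote count

# A hypothetical electorate of five citizens choosing among three policies.
ballots = ["policy_a", "policy_b", "policy_a", "policy_c", "policy_a"]
print(plurality_winner(ballots))  # -> policy_a
```

The sketch deliberately ignores everything deliberative democrats object to: it records only final preferences, not the reasons behind them.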
Different variants of aggregative democracy exist. Under "minimalism", democracy is a system of government in which citizens have given teams of political leaders the right to rule in periodic elections. According to this minimalist conception, citizens cannot and should not "rule" because, for example, on most issues, most of the time, they have no clear views or their views are not well-founded. Joseph Schumpeter articulated this view most famously in his book "Capitalism, Socialism, and Democracy". Contemporary proponents of minimalism include William H. Riker, Adam Przeworski, and Richard Posner.
According to the theory of direct democracy, on the other hand, citizens should vote directly, not through their representatives, on legislative proposals. Proponents of direct democracy offer varied reasons to support this view. Political activity can be valuable in itself, it socializes and educates citizens, and popular participation can check powerful elites. Most importantly, citizens do not rule themselves unless they directly decide laws and policies.
Governments will tend to produce laws and policies that are close to the views of the median voter—with half to their left and the other half to their right. This is not a desirable outcome as it represents the action of self-interested and somewhat unaccountable political elites competing for votes. Anthony Downs suggests that ideological political parties are necessary to act as a mediating broker between individual and governments. Downs laid out this view in his 1957 book "An Economic Theory of Democracy".
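The median-voter claim above can be illustrated with a toy calculation. Placing hypothetical voters on a one-dimensional left-right axis (the positions are invented for the example), the position that splits the electorate in half is simply the median:

```python
def median_voter(positions):
    """Return the median position on a one-dimensional left-right axis."""
    ordered = sorted(positions)
    mid = len(ordered) // 2
    if len(ordered) % 2:  # odd electorate: a single pivotal voter
        return ordered[mid]
    # even electorate: midpoint of the two central voters
    return (ordered[mid - 1] + ordered[mid]) / 2

# Five hypothetical voters from far left (-1.0) to far right (+1.0).
voters = [-0.9, -0.4, 0.1, 0.5, 0.8]
print(median_voter(voters))  # -> 0.1
```

Half the voters sit to the left of the returned position and half to the right, which is why vote-seeking parties in this model converge toward it.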
Robert A. Dahl argues that the fundamental democratic principle is that, when it comes to binding collective decisions, each person in a political community is entitled to have his/her interests be given equal consideration (not necessarily that all people are equally satisfied by the collective decision). He uses the term polyarchy to refer to societies in which there exists a certain set of institutions and procedures which are perceived as leading to such democracy. First and foremost among these institutions is the regular occurrence of free and open elections which are used to select representatives who then manage all or most of the public policy of the society. However, these polyarchic procedures may not create a full democracy if, for example, poverty prevents political participation. Similarly, Ronald Dworkin argues that "democracy is a substantive, not a merely procedural, ideal."
"Deliberative democracy" is based on the notion that democracy is government by deliberation. Unlike aggregative democracy, deliberative democracy holds that, for a democratic decision to be legitimate, it must be preceded by authentic deliberation, not merely the aggregation of preferences that occurs in voting. "Authentic deliberation" is deliberation among decision-makers that is free from distortions of unequal political power, such as power a decision-maker obtained through economic wealth or the support of interest groups. If the decision-makers cannot reach consensus after authentically deliberating on a proposal, then they vote on the proposal using a form of majority rule.
"Radical democracy" is based on the idea that there are hierarchical and oppressive power relations that exist in society. Democracy's role is to make visible and challenge those relations by allowing for difference, dissent and antagonisms in decision-making processes.
Equality is a state of affairs in which all people within a specific society or isolated group have the same social status, especially socioeconomic status, including protection of human rights and dignity, and equal access to certain social goods and social services. Furthermore, it may also include health equality, economic equality and other social securities. Social equality requires the absence of legally enforced social class or caste boundaries and the absence of discrimination motivated by an inalienable part of a person's identity. To this end there must be equal justice under law, and equal opportunity regardless of, for example, sex, gender, ethnicity, age, sexual orientation, origin, caste or class, income or property, language, religion, convictions, opinions, health or disability.
A common way of understanding politics is through the left–right political spectrum, which ranges from left-wing politics via centrism to right-wing politics. This classification is comparatively recent and dates from the French Revolution, when those members of the National Assembly who supported the republic, the common people and a secular society sat on the left and supporters of the monarchy, aristocratic privilege and the Church sat on the right. Today, the left is generally progressivist, seeking social progress in society. The more extreme elements of the left, known as the far-left, tend to support revolutionary means for achieving this. This includes ideologies such as Communism and Marxism. The center-left, on the other hand, advocates more reformist approaches, for example that of social democracy. In contrast, the right is generally motivated by conservatism, which seeks to conserve what it sees as the important elements of society. The far-right goes beyond this, and often represents a reactionary turn against progress, seeking to undo it. Examples of such ideologies have included Fascism and Nazism. The center-right may be less clear-cut and more mixed in this regard, with neoconservatives supporting the spread of democracy, and one-nation conservatives more open to social welfare programs.
According to Norberto Bobbio, one of the major exponents of this distinction, the left believes in attempting to eradicate social inequality, believing it to be unethical or unnatural, while the right regards most social inequality as the result of ineradicable natural inequalities, and sees attempts to enforce social equality as utopian or authoritarian.
Some ideologies, notably Christian Democracy, claim to combine left and right-wing politics; according to Geoffrey K. Roberts and Patricia Hogwood, "In terms of ideology, Christian Democracy has incorporated many of the views held by liberals, conservatives and socialists within a wider framework of moral and Christian principles." Movements which claim or formerly claimed to be above the left-right divide include Fascist Terza Posizione economic politics in Italy and Peronism in Argentina.
Political freedom (also known as political liberty or autonomy) is a central concept in political thought and one of the most important features of democratic societies. Negative liberty has been described as freedom from oppression or coercion and unreasonable external constraints on action, often enacted through civil and political rights, while positive liberty is the absence of disabling conditions for an individual and the fulfillment of enabling conditions, e.g. economic compulsion, in a society. This capability approach to freedom requires economic, social and cultural rights in order to be realized.
Authoritarianism and libertarianism disagree over the amount of individual freedom each person possesses in society relative to the state. One author describes authoritarian political systems as those where "individual rights and goals are subjugated to group goals, expectations and conformities," while libertarians generally oppose the state and hold the individual as sovereign. In their purest form, libertarians are anarchists, who argue for the total abolition of the state, of political parties and of other political entities, while the purest authoritarians are, by definition, totalitarians who support state control over all aspects of society.
For instance, classical liberalism (also known as "laissez-faire liberalism") is a doctrine stressing individual freedom and limited government. This includes the importance of human rationality, individual property rights, free markets, natural rights, the protection of civil liberties, constitutional limitation of government, and individual freedom from restraint as exemplified in the writings of John Locke, Adam Smith, David Hume, David Ricardo, Voltaire, Montesquieu and others. According to the libertarian Institute for Humane Studies, "the libertarian, or 'classical liberal,' perspective is that individual well-being, prosperity, and social harmony are fostered by 'as much liberty as possible' and 'as little government as necessary.'" For anarchist political philosopher L. Susan Brown (1993), "liberalism and anarchism are two political philosophies that are fundamentally concerned with individual freedom yet differ from one another in very distinct ways. Anarchism shares with liberalism a radical commitment to individual freedom while rejecting liberalism's competitive property relations."
Paris
Paris () is the capital and most populous city of France, with an estimated population of 2,150,271 residents as of 2020, in an area of . Since the 17th century, Paris has been one of Europe's major centres of finance, diplomacy, commerce, fashion, science and arts. The City of Paris is the centre and seat of government of the Île-de-France, or Paris Region, which has an estimated official 2020 population of 12,278,210, or about 18 percent of the population of France. The Paris Region had a GDP of €709 billion ($808 billion) in 2017. According to the Economist Intelligence Unit Worldwide Cost of Living Survey in 2018, Paris was the second most expensive city in the world, after Singapore, and ahead of Zürich, Hong Kong, Oslo and Geneva. Another source ranked Paris as most expensive, on a par with Singapore and Hong Kong, in 2018.
The city is a major railway, highway and air-transport hub served by two international airports: Paris-Charles de Gaulle (the second busiest airport in Europe) and Paris-Orly. Opened in 1900, the city's subway system, the Paris Métro, serves 5.23 million passengers daily; it is the second busiest metro system in Europe after the Moscow Metro. Gare du Nord is the 24th busiest railway station in the world and the busiest outside Japan, with 262 million passengers in 2015. Paris is especially known for its museums and architectural landmarks: the Louvre was the most visited art museum in the world in 2019, with 9.6 million visitors. The Musée d'Orsay, Musée Marmottan Monet, and Musée de l'Orangerie are noted for their collections of French Impressionist art, the Pompidou Centre Musée National d'Art Moderne has the largest collection of modern and contemporary art in Europe, and the Musée Rodin and Musée Picasso exhibit the works of two noted Parisians. The historical district along the Seine in the city centre is classified as a UNESCO World Heritage Site, and popular landmarks in the city centre include the Cathedral of Notre Dame de Paris, on the Île de la Cité, now closed for renovation after the 15 April 2019 fire. Other popular tourist sites include the Gothic royal chapel of Sainte-Chapelle, also on the Île de la Cité; the Eiffel Tower, constructed for the Paris Universal Exposition of 1889; the Grand Palais and Petit Palais, built for the Paris Universal Exposition of 1900; the Arc de Triomphe on the Champs-Élysées, and the Basilica of Sacré-Coeur on the hill of Montmartre.
Paris received 17.5 million visitors in 2018, measured by hotel stays, with the largest numbers of foreign visitors coming from the United States, the United Kingdom, Germany and China. It was ranked as the sixth most visited travel destination in the world in 2018, after Hong Kong, Bangkok, London, Macao and Singapore. The football club Paris Saint-Germain and the rugby union club Stade Français are based in Paris. The 80,000-seat Stade de France, built for the 1998 FIFA World Cup, is located just north of Paris in the neighbouring commune of Saint-Denis. Paris hosts the annual French Open Grand Slam tennis tournament on the red clay of Roland Garros. The city hosted the Olympic Games in 1900 and 1924, and will host the 2024 Summer Olympics. The 1938 and 1998 FIFA World Cups, the 2007 Rugby World Cup, and the 1960, 1984 and 2016 UEFA European Championships were also held in the city. Every July, the Tour de France bicycle race finishes on the Avenue des Champs-Élysées in Paris.
The name 'Paris' is derived from its early inhabitants, the Gallic Parisii tribe. The meaning of the Gaulish name "Parisii" is debated. According to Xavier Delamarre, it may derive from the root "pario-" ('cauldron'). Alfred Holder interpreted "Parisii" as 'the makers' or 'the commanders', by comparing the name to the Welsh "peryff" ('lord, commander'), from "paraf" - "peri" ('to make, produce, command to be done'). The city's name is not related to the Paris of Greek mythology.
Paris is often referred to as the 'City of Light' ("La Ville Lumière"), both because of its leading role during the Age of Enlightenment and more literally because Paris was one of the first large European cities to use gas street lighting on a grand scale on its boulevards and monuments. Gas lights were installed on the Place du Carrousel, Rue de Rivoli and Place Vendôme in 1829. By 1857, the Grands Boulevards were lit. By the 1860s, the boulevards and streets of Paris were illuminated by 56,000 gas lamps. Since the late 19th century, Paris has also been known as "Panam(e)" () in French slang.
Inhabitants are known in English as "Parisians" and in French as "Parisiens" (). They are also pejoratively called "Parigots" ().
The "Parisii", a sub-tribe of the Celtic Senones, inhabited the Paris area from around the middle of the 3rd century BC. One of the area's major north–south trade routes crossed the Seine on the île de la Cité; this meeting place of land and water trade routes gradually became an important trading centre. The Parisii traded with many river towns (some as far away as the Iberian Peninsula) and minted their own coins for that purpose.
The Romans conquered the Paris Basin in 52 BC and began their settlement on Paris' Left Bank. The Roman town was originally called Lutetia (more fully, "Lutetia Parisiorum", "Lutetia of the Parisii"). It became a prosperous city with a forum, baths, temples, theatres, and an amphitheatre.
By the end of the Western Roman Empire, the town was known as "Parisius", a Latin name that would later become "Paris" in French. Christianity was introduced in the middle of the 3rd century AD by Saint Denis, the first Bishop of Paris: according to legend, when he refused to renounce his faith before the Roman occupiers, he was beheaded on the hill which became known as "Mons Martyrum" (Latin "Hill of Martyrs"), later "Montmartre", from where he walked headless to the north of the city; the place where he fell and was buried became an important religious shrine, the Basilica of Saint-Denis, and many French kings are buried there.
Clovis the Frank, the first king of the Merovingian dynasty, made the city his capital from 508. As the Frankish domination of Gaul began, there was a gradual immigration by the Franks to Paris and the Parisian Francien dialects were born. Fortification of the Île de la Cité failed to avert sacking by Vikings in 845, but Paris' strategic importance—with its bridges preventing ships from passing—was established by successful defence in the Siege of Paris (885–86), for which the then Count of Paris ("comte de Paris"), Odo of France, was elected king of West Francia. From the Capetian dynasty that began with the 987 election of Hugh Capet, Count of Paris and Duke of the Franks ("duc des Francs"), as king of a unified Francia, Paris gradually became the largest and most prosperous city in France.
By the end of the 12th century, Paris had become the political, economic, religious, and cultural capital of France. The Palais de la Cité, the royal residence, was located at the western end of the Île de la Cité. In 1163, during the reign of Louis VII, Maurice de Sully, bishop of Paris, undertook the construction of the Notre Dame Cathedral at its eastern extremity.
After the marshland between the river Seine and its slower 'dead arm' to its north was filled in around the 10th century, Paris' cultural centre began to move to the Right Bank. In 1137, a new city marketplace (today's Les Halles) replaced the two smaller ones on the Île de la Cité and Place de la Grève (Place de l'Hôtel de Ville). The latter location housed the headquarters of Paris' river trade corporation, an organisation that later became, unofficially (although formally in later years), Paris' first municipal government.
In the late 12th century, Philip Augustus extended the Louvre fortress to defend the city against river invasions from the west, gave the city its first walls between 1190 and 1215, rebuilt its bridges to either side of its central island, and paved its main thoroughfares. In 1190, he transformed Paris' former cathedral school into a student-teacher corporation that would become the University of Paris and would draw students from all of Europe.
With 200,000 inhabitants in 1328, Paris, then already the capital of France, was the most populous city of Europe. By comparison, London in 1300 had 80,000 inhabitants.
During the Hundred Years' War, Paris was occupied by England-friendly Burgundian forces from 1418, before being occupied outright by the English when Henry V of England entered the French capital in 1420; in spite of a 1429 effort by Joan of Arc to liberate the city, it would remain under English occupation until 1436.
In the late 16th-century French Wars of Religion, Paris was a stronghold of the Catholic League, the organisers of the 24 August 1572 St. Bartholomew's Day massacre in which thousands of French Protestants were killed. The conflicts ended when the pretender to the throne, Henry IV, after converting to Catholicism to gain entry to the capital, entered the city in 1594 to claim the crown of France. This king made several improvements to the capital during his reign: he completed the construction of Paris' first uncovered, sidewalk-lined bridge, the Pont Neuf, built a Louvre extension connecting it to the Tuileries Palace, and created the first Paris residential square, the Place Royale, now Place des Vosges. In spite of Henry IV's efforts to improve city circulation, the narrowness of Paris' streets was a contributing factor in his assassination near Les Halles marketplace in 1610.
During the 17th century, Cardinal Richelieu, chief minister of Louis XIII, was determined to make Paris the most beautiful city in Europe. He built five new bridges, a new chapel for the College of Sorbonne, and a palace for himself, the Palais-Cardinal, which he bequeathed to Louis XIII. After Richelieu's death in 1642, it was renamed the Palais-Royal.
Due to the Parisian uprisings during the Fronde civil war, Louis XIV moved his court to a new palace, Versailles, in 1682. Although no longer the capital of France, arts and sciences in the city flourished with the Comédie-Française, the Academy of Painting, and the French Academy of Sciences. To demonstrate that the city was safe from attack, the king had the city walls demolished and replaced with tree-lined boulevards that would become the "Grands Boulevards" of today. Other marks of his reign were the Collège des Quatre-Nations, the Place Vendôme, the Place des Victoires, and Les Invalides.
Paris grew in population from about 400,000 in 1640 to 650,000 in 1780. A new boulevard, the Champs-Élysées, extended the city west to "Étoile", while the working-class neighbourhood of the Faubourg Saint-Antoine on the eastern side of the city grew more and more crowded with poor migrant workers from other regions of France.
Paris was the centre of an explosion of philosophic and scientific activity known as the Age of Enlightenment. Diderot and d'Alembert published their "Encyclopédie" in 1751, and the Montgolfier Brothers launched the first manned flight in a hot-air balloon on 21 November 1783, from the gardens of the Château de la Muette. Paris was the financial capital of continental Europe, the primary European centre of book publishing and fashion and the manufacture of fine furniture and luxury goods.
In the summer of 1789, Paris became the centre stage of the French Revolution. On 14 July, a mob seized the arsenal at the Invalides, acquiring thousands of guns, and stormed the Bastille, a symbol of royal authority. The first independent Paris Commune, or city council, met in the "Hôtel de Ville" and, on 15 July, elected a Mayor, the astronomer Jean Sylvain Bailly.
Louis XVI and the royal family were brought to Paris and made prisoners within the Tuileries Palace. In 1793, as the revolution turned more and more radical, the king, queen, and the mayor were guillotined (executed) in the Reign of Terror, along with more than 16,000 others throughout France. The property of the aristocracy and the church was nationalised, and the city's churches were closed, sold or demolished. A succession of revolutionary factions ruled Paris until 9 November 1799 ("coup d'état du 18 brumaire"), when Napoléon Bonaparte seized power as First Consul.
The population of Paris had dropped by 100,000 during the Revolution, but between 1799 and 1815, it surged with 160,000 new residents, reaching 660,000. Napoleon Bonaparte replaced the elected government of Paris with a prefect reporting only to him. He began erecting monuments to military glory, including the Arc de Triomphe, and improved the neglected infrastructure of the city with new fountains, the Canal de l'Ourcq, Père Lachaise Cemetery and the city's first metal bridge, the Pont des Arts.
During the Restoration, the bridges and squares of Paris were returned to their pre-Revolution names, but the July Revolution of 1830 in Paris (commemorated by the July Column on the Place de la Bastille) brought a constitutional monarch, Louis Philippe I, to power. The first railway line to Paris opened in 1837, beginning a new period of massive migration from the provinces to the city.
Louis-Philippe was overthrown by a popular uprising in the streets of Paris in 1848. His successor, Napoleon III, and the newly appointed prefect of the Seine, Georges-Eugène Haussmann, launched a gigantic public works project to build wide new boulevards, a new opera house, a central market, new aqueducts, sewers, and parks, including the Bois de Boulogne and Bois de Vincennes. In 1860, Napoleon III also annexed the surrounding towns and created eight new arrondissements, expanding Paris to its current limits.
During the Franco-Prussian War (1870–1871), Paris was besieged by the Prussian army. After months of blockade, hunger, and then bombardment by the Prussians, the city was forced to surrender on 28 January 1871. On 28 March, a revolutionary government called the Paris Commune seized power in Paris. The Commune held power for two months, until it was harshly suppressed by the French army during the "Bloody Week" at the end of May 1871.
Late in the 19th century, Paris hosted two major international expositions: the 1889 Universal Exposition, held to mark the centennial of the French Revolution, which featured the new Eiffel Tower; and the 1900 Universal Exposition, which gave Paris the Pont Alexandre III, the Grand Palais, the Petit Palais and the first Paris Métro line. Paris became the laboratory of Naturalism (Émile Zola) and Symbolism (Charles Baudelaire and Paul Verlaine), and of Impressionism in art (Courbet, Manet, Monet, Renoir).
By 1901, the population of Paris had grown to 2,715,000. At the beginning of the century, artists from around the world, including Pablo Picasso, Modigliani, and Henri Matisse, made Paris their home. It was the birthplace of Fauvism, Cubism and abstract art, and authors such as Marcel Proust were exploring new approaches to literature.
During the First World War, Paris sometimes found itself on the front line; 600 to 1,000 Paris taxis played a small but highly important symbolic role in transporting 6,000 soldiers to the front line at the First Battle of the Marne. The city was also bombed by Zeppelins and shelled by German long-range guns. In the years after the war, known as "Les Années Folles", Paris continued to be a mecca for writers, musicians and artists from around the world, including Ernest Hemingway, Igor Stravinsky, James Joyce, Josephine Baker, Sidney Bechet, Allen Ginsberg and the surrealist Salvador Dalí.
In the years after the peace conference, the city was also home to growing numbers of students and activists from French colonies and other Asian and African countries, who later became leaders of their countries, such as Ho Chi Minh, Zhou Enlai and Léopold Sédar Senghor.
On 14 June 1940, the German army marched into Paris, which had been declared an "open city". On 16–17 July 1942, following German orders, the French police and gendarmes arrested 12,884 Jews, including 4,115 children, and confined them during five days at the "Vel d'Hiv" ("Vélodrome d'Hiver"), from which they were transported by train to the extermination camp at Auschwitz. None of the children came back. On 25 August 1944, the city was liberated by the French 2nd Armoured Division and the 4th Infantry Division of the United States Army. General Charles de Gaulle led a huge and emotional crowd down the Champs Élysées towards Notre Dame de Paris, and made a rousing speech from the Hôtel de Ville.
In the 1950s and the 1960s, Paris became one front of the Algerian War for independence; in August 1961, the pro-independence FLN targeted and killed 11 Paris policemen, leading to the imposition of a curfew on Muslims of Algeria (who, at that time, were French citizens). On 17 October 1961, an unauthorised but peaceful protest demonstration of Algerians against the curfew led to violent confrontations between the police and demonstrators, in which at least 40 people were killed, including some thrown into the Seine. The anti-independence Organisation armée secrète (OAS), for their part, carried out a series of bombings in Paris throughout 1961 and 1962.
In May 1968, protesting students occupied the Sorbonne and put up barricades in the Latin Quarter. Thousands of Parisian blue-collar workers joined the students, and the movement grew into a two-week general strike. Supporters of the government won the June elections by a large majority. The May 1968 events in France resulted in the break-up of the University of Paris into 13 independent campuses. In 1975, the National Assembly changed the status of Paris to that of other French cities and, on 25 March 1977, Jacques Chirac became the first elected mayor of Paris since 1793. The Tour Maine-Montparnasse, the tallest building in the city at 57 storeys, was built between 1969 and 1973. It was highly controversial, and it remains the only building in the centre of the city over 32 storeys high. The population of Paris dropped from 2,850,000 in 1954 to 2,152,000 in 1990, as middle-class families moved to the suburbs. A suburban railway network, the RER (Réseau Express Régional), was built to complement the Métro, and the Périphérique expressway encircling the city was completed in 1973.
Most of the postwar Presidents of the Fifth Republic wanted to leave their own monuments in Paris; President Georges Pompidou started the Centre Georges Pompidou (1977), Valéry Giscard d'Estaing began the Musée d'Orsay (1986); President François Mitterrand, in power for 14 years, built the Opéra Bastille (1985–1989), the new site of the "Bibliothèque nationale de France" (1996), the Arche de la Défense (1985–1989), and the Louvre Pyramid with its underground courtyard (1983–1989); and Jacques Chirac, the Musée du quai Branly (2006).
In the early 21st century, the population of Paris began to increase slowly again, as more young people moved into the city. It reached 2.25 million in 2011. In March 2001, Bertrand Delanoë became the first Socialist Mayor of Paris. In 2007, in an effort to reduce car traffic in the city, he introduced the Vélib', a system which rents bicycles for the use of local residents and visitors. Bertrand Delanoë also transformed a section of the highway along the Left Bank of the Seine into an urban promenade and park, the Promenade des Berges de la Seine, which he inaugurated in June 2013.
In 2007, President Nicolas Sarkozy launched the Grand Paris project, to integrate Paris more closely with the towns in the region around it. After many modifications, the new area, named the Metropolis of Grand Paris, with a population of 6.7 million, was created on 1 January 2016. In 2011, the City of Paris and the national government approved the plans for the Grand Paris Express, a network of automated metro lines to connect Paris, the innermost three departments around Paris, airports and high-speed rail (TGV) stations, at an estimated cost of €35 billion. The system is scheduled to be completed by 2030.
Between July and October 1995, a series of bombings carried out by the Armed Islamic Group of Algeria caused 8 deaths and more than 200 injuries.
On 7 January 2015, two French Muslim extremists attacked the Paris headquarters of "Charlie Hebdo" and killed thirteen people, in an attack claimed by Al-Qaeda in the Arabian Peninsula, and on 9 January, a third terrorist, who claimed he was part of ISIL, killed four hostages during an attack at a Jewish grocery store at Porte de Vincennes. On 11 January an estimated 1.5 million people marched in Paris in a show of solidarity against terrorism and in support of freedom of speech. On 13 November of the same year, a series of coordinated bomb and gunfire terrorist attacks in Paris and Saint-Denis, claimed by ISIL, killed 130 people and injured more than 350.
On 3 February 2017, a machete-wielding attacker carrying two backpacks and shouting "Allahu Akbar" attacked soldiers guarding the Louvre museum after they stopped him because of his bags; the assailant was shot, and no explosives were found. On 18 March of the same year, in a Vitry-sur-Seine bar, a man held patrons hostage, then fled, and later held a gun to the head of a French soldier at Orly Airport, shouting "I am here to die in the name of Allah"; he was shot dead by the soldier's comrades. On 20 April, a man fatally shot a French police officer on the Champs-Élysées, and was later shot dead himself. On 19 June, a man rammed his weapons-and-explosives-laden vehicle into a police van on the Champs-Élysées, but the car only burst into flames.
Paris is located in northern central France, in a north-bending arc of the river Seine whose crest includes two islands, the Île Saint-Louis and the larger Île de la Cité, which form the oldest part of the city. The river's mouth on the English Channel ("La Manche") is about downstream from the city. The city is spread widely on both banks of the river. Overall, the city is relatively flat, and the lowest point is above sea level. Paris has several prominent hills, the highest of which is Montmartre at .
Excluding the outlying parks of Bois de Boulogne and Bois de Vincennes, Paris covers an oval measuring about in area, enclosed by the ring road, the Boulevard Périphérique. The city's last major annexation of outlying territories in 1860 not only gave it its modern form but also created the 20 clockwise-spiralling arrondissements (municipal boroughs). From the 1860 area of , the city limits were expanded marginally to in the 1920s. In 1929, the Bois de Boulogne and Bois de Vincennes forest parks were officially annexed to the city, bringing its area to about . The metropolitan area of the city is .
Measured from the 'point zero' in front of its Notre-Dame cathedral, Paris by road is southeast of London, south of Calais, southwest of Brussels, north of Marseille, northeast of Nantes, and southeast of Rouen.
Paris has a typical Western European oceanic climate (Köppen: "Cfb"), which is affected by the North Atlantic Current. The overall climate throughout the year is mild and moderately wet. Summer days are usually warm and pleasant with average temperatures between , and a fair amount of sunshine. Each year, however, there are a few days when the temperature rises above . Longer periods of more intense heat sometimes occur, such as the heat wave of 2003 when temperatures exceeded for weeks, reached on some days and rarely cooled down at night. Spring and autumn have, on average, mild days and fresh nights but are changeable and unstable. Surprisingly warm or cool weather occurs frequently in both seasons. In winter, sunshine is scarce; days are cool, nights cold but generally above freezing with low temperatures around . Light night frosts are however quite common, but the temperature will dip below for only a few days a year. Snow falls every year, but rarely stays on the ground. The city sometimes sees light snow or flurries with or without accumulation.
Paris has an average annual precipitation of , and experiences light rainfall distributed evenly throughout the year. However the city is known for intermittent abrupt heavy showers. The highest recorded temperature is on 25 July 2019, and the lowest is on 10 December 1879.
For almost all of its long history, except for a few brief periods, Paris was governed directly by representatives of the king, emperor, or president of France. The city was not granted municipal autonomy by the National Assembly until 1974. The first modern elected mayor of Paris was Jacques Chirac, elected 20 March 1977, becoming the city's first mayor since 1793. The current mayor is Anne Hidalgo, a socialist, first elected 5 April 2014 and re-elected 28 June 2020.
The mayor of Paris is elected indirectly by Paris voters; the voters of each of the city's 20 arrondissements elect members to the "Conseil de Paris" (Council of Paris), which subsequently elects the mayor. The council is composed of 163 members, with each arrondissement allocated a number of seats dependent upon its population, from 10 members for each of the least-populated arrondissements (1st through 9th) to 34 members for the most populated (the 15th). The council is elected using closed list proportional representation in a two-round system. Party lists winning an absolute majority in the first round – or at least a plurality in the second round – automatically win half the seats of an arrondissement. The remaining half of seats are distributed proportionally to all lists which win at least 5% of the vote using the highest averages method. This ensures that the winning party or coalition always wins a majority of the seats, even if they don't win an absolute majority of the vote.
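The proportional half of the seats described above is distributed by a highest averages method. As an illustration only (the article does not specify which divisor sequence is used; the D'Hondt divisors 1, 2, 3, … below are an assumption, and the majority-bonus half of the seats is left out for simplicity), a minimal sketch in Python:

```python
def highest_averages(votes, seats, threshold=0.05):
    """Allocate `seats` among party lists by a D'Hondt highest-averages rule,
    after excluding lists below the vote-share threshold (5% in Paris).

    votes: dict mapping list name -> vote count.
    Illustrative sketch only: the exact divisor sequence and the separate
    majority-bonus seats used for the Conseil de Paris are not modelled.
    """
    total = sum(votes.values())
    eligible = {p: v for p, v in votes.items() if v / total >= threshold}
    alloc = {p: 0 for p in eligible}
    for _ in range(seats):
        # Each seat goes to the list with the highest current average v / (s + 1).
        winner = max(eligible, key=lambda p: eligible[p] / (alloc[p] + 1))
        alloc[winner] += 1
    return alloc


# Hypothetical vote counts: list "D" falls below the 5% threshold and is excluded.
print(highest_averages({"A": 100, "B": 80, "C": 30, "D": 4}, 8))
# → {'A': 4, 'B': 3, 'C': 1}
```

Because each seat is awarded to the list with the highest remaining average, larger lists receive proportionally more seats, which is why the bonus half-allocation is needed to guarantee the winning list an outright majority.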
Once elected, the council plays a largely passive role in the city government, primarily because it meets only once a month. The current council is divided between a left coalition of 91 members, including the socialists, communists, greens, and extreme left, and 71 members of the centre-right, plus a few members from smaller parties.
Each of Paris' 20 arrondissements has its own town hall and a directly elected council ("conseil d'arrondissement"), which, in turn, elects an arrondissement mayor. The council of each arrondissement is composed of members of the Conseil de Paris and also members who serve only on the council of the arrondissement. The number of deputy mayors in each arrondissement varies depending upon its population. There are a total of 20 arrondissement mayors and 120 deputy mayors.
The budget of the city for 2018 is 9.5 billion euros, with an expected deficit of 5.5 billion euros. 7.9 billion euros are designated for city administration, and 1.7 billion euros for investment. The number of city employees increased from 40,000 in 2001 to 55,000 in 2018. The largest part of the investment budget is earmarked for public housing (262 million euros) and for real estate (142 million euros).
The Métropole du Grand Paris, or simply Grand Paris, formally came into existence on 1 January 2016. It is an administrative structure for co-operation between the City of Paris and its nearest suburbs. It includes the City of Paris, plus the communes of the three departments of the inner suburbs (Hauts-de-Seine, Seine-Saint-Denis and Val-de-Marne), plus seven communes in the outer suburbs, including Argenteuil in Val d'Oise and Paray-Vieille-Poste in Essonne, which were added to include the major airports of Paris. The Métropole has a population of 6.945 million persons.
The new structure is administered by a Metropolitan Council of 210 members, not directly elected, but chosen by the councils of the member communes. By 2020 its basic competencies will include urban planning, housing and protection of the environment. The first president of the metropolitan council, Patrick Ollier, a Republican and the mayor of the town of Rueil-Malmaison, was elected on 22 January 2016. Though the Métropole has a population of nearly seven million people and accounts for 25 percent of the GDP of France, it has a very small budget: just 65 million euros, compared with eight billion euros for the City of Paris.
The Île-de-France region, including Paris and its surrounding communities, is governed by the Regional Council, which has its headquarters in the 7th arrondissement of Paris. It is composed of 209 members representing the different communes within the region. On 15 December 2015, a list of candidates of the Union of the Right, a coalition of centrist and right-wing parties, led by Valérie Pécresse, narrowly won the regional election, defeating a coalition of Socialists and ecologists. The Socialists had governed the region for seventeen years. The regional council has 121 members from the Union of the Right, 66 from the Union of the Left and 22 from the extreme right National Front.
As the capital of France, Paris is the seat of France's national government. For the executive, the two chief officers each have their own official residences, which also serve as their offices. The President of the French Republic resides at the Élysée Palace in the 8th arrondissement, while the Prime Minister's seat is at the Hôtel Matignon in the 7th arrondissement. Government ministries are located in various parts of the city; many are located in the 7th arrondissement, near the Matignon.
The two houses of the French Parliament are located on the Left Bank. The upper house, the Senate, meets in the Palais du Luxembourg in the 6th arrondissement, while the more important lower house, the Assemblée Nationale, meets in the Palais Bourbon in the 7th arrondissement. The President of the Senate, the second-highest public official in France (the President of the Republic being the sole superior), resides in the "Petit Luxembourg", a smaller palace annexe to the Palais du Luxembourg.
France's highest courts are located in Paris. The Court of Cassation, the highest court in the judicial order, which reviews criminal and civil cases, is located in the Palais de Justice on the "Île de la Cité", while the Conseil d'État, which provides legal advice to the executive and acts as the highest court in the administrative order, judging litigation against public bodies, is located in the Palais-Royal in the 1st arrondissement. The Constitutional Council, an advisory body with ultimate authority on the constitutionality of laws and government decrees, also meets in the Montpensier wing of the Palais Royal.
Paris and its region host the headquarters of several international organisations including UNESCO, the Organisation for Economic Co-operation and Development, the International Chamber of Commerce, the Paris Club, the European Space Agency, the International Energy Agency, the "Organisation internationale de la Francophonie", the European Union Institute for Security Studies, the International Bureau of Weights and Measures, the International Exhibition Bureau, and the International Federation for Human Rights.
Following the motto "Only Paris is worthy of Rome; only Rome is worthy of Paris", the only sister city of Paris is Rome, although Paris has partnership agreements with many other cities around the world.
The security of Paris is mainly the responsibility of the Prefecture of Police of Paris, a subdivision of the Ministry of the Interior. It supervises the units of the National Police who patrol the city and the three neighbouring departments. It is also responsible for providing emergency services, including the Paris Fire Brigade. Its headquarters is on Place Louis Lépine on the Île de la Cité.
There are 30,200 officers under the prefecture, and a fleet of more than 6,000 vehicles, including police cars, motorcycles, fire trucks, boats and helicopters. The national police has its own special unit for riot control and crowd control and security of public buildings, called the Compagnies Républicaines de Sécurité (CRS), a unit formed in 1944 right after the liberation of France. Vans of CRS agents are frequently seen in the centre of the city when there are demonstrations and public events.
The police are supported by the National Gendarmerie, a branch of the French Armed Forces, though their police operations now are supervised by the Ministry of the Interior. The traditional kepis of the gendarmes were replaced in 2002 with caps, and the force modernised, though they still wear kepis for ceremonial occasions.
Crime in Paris is similar to that in most large cities. Violent crime is relatively rare in the city centre. Political violence is uncommon, though very large demonstrations may occur in Paris and other French cities simultaneously. These demonstrations, usually managed by a strong police presence, can turn confrontational and escalate into violence.
Most French rulers since the Middle Ages made a point of leaving their mark on a city that, unlike many of the world's other capitals, has never been destroyed by catastrophe or war. In modernising its infrastructure through the centuries, Paris has preserved even its earliest history in its street map. At its origin, before the Middle Ages, the city formed around several islands and sandbanks in a bend of the Seine; of those, two remain today: the Île Saint-Louis and the Île de la Cité. A third, the Île aux Cygnes, was artificially created in 1827.
Modern Paris owes much of its downtown plan and architectural harmony to Napoleon III and his Prefect of the Seine, Baron Haussmann. Between 1853 and 1870 they rebuilt the city centre, created the wide downtown boulevards and squares where the boulevards intersected, imposed standard facades along the boulevards, and required that the facades be built of the distinctive cream-grey "Paris stone". They also built the major parks around the city centre. The high residential population of its city centre also distinguishes Paris from most other major Western cities.
Paris' urbanism laws have been under strict control since the early 17th century, particularly where street-front alignment, building height and building distribution are concerned. A building-height limitation in force from 1974 to 2010 was then raised for central areas and for some of Paris' peripheral quarters, yet in some of the city's more central quarters, even older building-height laws remain in effect. The Tour Montparnasse, completed in 1973, was both Paris' and France's tallest building until 2011, when it was surpassed by the Tour First tower in the La Défense quarter of Courbevoie.
Parisian examples of European architecture date back more than a millennium, including the Romanesque church of the Abbey of Saint-Germain-des-Prés (1014–1163), the early Gothic architecture of the Basilica of Saint-Denis (1144), the Notre Dame Cathedral (1163–1345), the Flamboyant Gothic of the Sainte-Chapelle (1239–1248), and the Baroque churches of Saint-Paul-Saint-Louis (1627–1641) and Les Invalides (1670–1708). The 19th century produced the neoclassical church of La Madeleine (1808–1842), the Palais Garnier opera house (1875), the neo-Byzantine Basilica of Sacré-Cœur (1875–1919), as well as the exuberant "Belle Époque" modernism of the Eiffel Tower (1889). Striking examples of 20th-century architecture include the Centre Georges Pompidou by Richard Rogers and Renzo Piano (1977), the Cité des Sciences et de l'Industrie by various architects (1986), the Arab World Institute by Jean Nouvel (1987), the Louvre Pyramid by I. M. Pei (1989) and the Opéra Bastille by Carlos Ott (1989). Contemporary architecture includes the Musée du quai Branly – Jacques Chirac by Jean Nouvel (2006), the contemporary art museum of the Louis Vuitton Foundation by Frank Gehry (2014) and the new Tribunal de grande instance de Paris by Renzo Piano (2018).
The most expensive residential streets in Paris in 2018, by average price per square meter, were the Avenue Montaigne (8th arrondissement), at 22,372 euros per square meter; the Place Dauphine (1st arrondissement), at 20,373 euros; and the Rue de Furstemberg (6th arrondissement), at 18,839 euros per square meter. The total number of residences in the City of Paris in 2011 exceeded the previous high recorded in 2006. Of these residences, 85.9 percent were main residences, 6.8 percent were secondary residences, and the remaining 7.3 percent were empty (down from 9.2 percent in 2006).
Sixty-two percent of its buildings date from 1949 or earlier, 20 percent were built between 1949 and 1974, and only 18 percent were built after that date. Two-thirds of the city's 1.3 million residences are studio and two-room apartments. Paris averages 1.9 people per residence, a number that has remained constant since the 1980s but is well below the Île-de-France average of 2.33 people per residence. Only 33 percent of Parisians own their principal residence (against 47 percent for the entire Île-de-France); most of the city's population rents. Social or public housing represented 19.9 percent of the city's total residences in 2017. Its distribution varies widely throughout the city, from 2.6 percent of the housing in the wealthy 7th arrondissement to 24 percent in the 20th arrondissement, 26 percent in the 14th arrondissement and 39.9 percent in the 19th arrondissement, on the poorer southern and northern edges of the city.
On the night of 8–9 February 2019, during a period of cold weather, a Paris NGO conducted its annual citywide count of homeless persons. They counted 3,641 homeless persons in Paris, of whom twelve percent were women. More than half had been homeless for more than a year. 2,885 were living in the streets or parks, 298 in train and metro stations, and 756 in other forms of temporary shelter. This was an increase of 588 persons since 2018.
Aside from the 20th-century addition of the Bois de Boulogne, the Bois de Vincennes and the Paris heliport, Paris' administrative limits have remained unchanged since 1860. A greater administrative Seine department had governed Paris and its suburbs since its creation in 1790, but the rising suburban population had made it difficult to maintain as a single entity. This problem was 'resolved' when its parent "District de la région parisienne" ('district of the Paris region') was reorganised into several new departments in 1968: Paris became a department in itself, and the administration of its suburbs was divided between the three new departments surrounding it. The district of the Paris region was renamed "Île-de-France" in 1977, but the abbreviated "Paris region" name is still commonly used today to describe the Île-de-France, and as a vague reference to the entire Paris agglomeration. Long-intended measures to unite Paris with its suburbs began on 1 January 2016, when the Métropole du Grand Paris came into existence.
Paris' disconnect with its suburbs, in particular its lack of suburban transportation, became all too apparent with the Paris agglomeration's growth. Paul Delouvrier promised to resolve the Paris-suburbs "mésentente" (discord) when he became head of the Paris region in 1961: two of his most ambitious projects for the region were the construction of five suburban "villes nouvelles" ("new cities") and the RER commuter train network. Many other suburban residential districts ("grands ensembles") were built between the 1960s and 1970s to provide a low-cost solution for a rapidly expanding population: these districts were socially mixed at first, but few residents actually owned their homes (the growing economy made these accessible to the middle classes only from the 1970s). Their poor construction quality and their haphazard insertion into existing urban growth contributed to their desertion by those able to move elsewhere and their repopulation by those with more limited possibilities.
These areas, "quartiers sensibles" ("sensitive quarters"), are in northern and eastern Paris, namely around its Goutte d'Or and Belleville neighbourhoods. To the north of the city, they are grouped mainly in the Seine-Saint-Denis department, and to a lesser extent to the east in the Val-d'Oise department. Other difficult areas are located in the Seine valley, in Évry and Corbeil-Essonnes (Essonne), in Les Mureaux and Mantes-la-Jolie (Yvelines), and scattered among the social housing districts created by Delouvrier's 1961 "ville nouvelle" political initiative.
The Paris agglomeration's urban sociology is basically that of 19th-century Paris: its wealthier classes are concentrated in the west and southwest, and its middle-to-lower classes in the north and east. The remaining areas are mostly middle-class, dotted with islands of wealth located there for reasons of historical importance, namely Saint-Maur-des-Fossés to the east and Enghien-les-Bains to the north of Paris.
The official estimated population of the City of Paris was 2,206,488 as of 1 January 2019, according to the INSEE, the official French statistical agency. This is a decline of 59,648 from 2015, close to the total population of the 5th arrondissement. Despite the drop, Paris remains the most densely populated city in Europe, with 252 residents per hectare, not counting parks. The drop was attributed partly to a lower birth rate, partly to the departure of middle-class residents, and partly to the possible loss of housing in the city due to short-term rentals for tourism.
Paris is the fourth largest municipality in the European Union, following Berlin, Madrid and Rome. Eurostat, the statistical agency of the EU, places Paris (6.5 million people) second behind London (8 million) and ahead of Berlin (3.5 million), based on the 2012 populations of what Eurostat calls "urban audit core cities".
The population of Paris today is lower than its historical peak of 2.9 million in 1921. The principal reasons were a significant decline in household size, and a dramatic migration of residents to the suburbs between 1962 and 1975. Factors in the migration included de-industrialisation, high rent, the gentrification of many inner quarters, the transformation of living space into offices, and greater affluence among working families. The city's population loss came to a temporary halt at the beginning of the 21st century; the population estimate of July 2004 showed a population increase for the first time since 1954, and the population reached 2,234,000 by 2009, before declining again slightly in 2017. It declined again in 2018.
Paris is the core of a built-up area that extends well beyond its limits: commonly referred to as the "agglomération parisienne", and statistically as a "unité urbaine" (a measure of urban area), the Paris agglomeration's 2013 population of 10,601,122 made it the largest urban area in the European Union. City-influenced commuter activity reaches well beyond even this in the statistical "aire urbaine de Paris" ("urban area", but a statistical method comparable to a metropolitan area), which had a 2013 population of 12,405,426, about one-fifth of the population of France, and is the largest metropolitan area in the Eurozone.
According to Eurostat, the EU statistical agency, in 2012 the Commune of Paris was the most densely populated city in the European Union, with 21,616 people per square kilometre within the city limits (the NUTS-3 statistical area), ahead of Inner London West, which had 10,374 people per square kilometre. According to the same census, three departments bordering Paris, Hauts-de-Seine, Seine-Saint-Denis and Val-de-Marne, had population densities of over 10,000 people per square kilometre, ranking among the 10 most densely populated areas of the EU.
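The quoted density can be sanity-checked with a line of arithmetic, assuming the commonly cited commune area of roughly 105.4 square kilometres (a figure not stated in the article itself):

```python
# Density quoted by Eurostat for the Commune of Paris (NUTS-3 area)
density_per_km2 = 21_616
# Approximate area of the commune in km^2 -- an assumption, not from the article
area_km2 = 105.4

implied_population = density_per_km2 * area_km2
print(round(implied_population))  # roughly 2.28 million
```

The result is consistent with the city's population of around 2.2 million quoted elsewhere in the article.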
According to the 2012 French census, 586,163 residents of the City of Paris, or 26.2 percent, and 2,782,834 residents of the Paris Region (Île-de-France), or 23.4 percent, were born outside of metropolitan France (the latter figure up from 22.4 percent at the 2007 census). 26,700 of these in the City of Paris and 210,159 in the Paris Region were born in Overseas France (more than two-thirds of them in the French West Indies) and are therefore not counted as immigrants, since they were legally French citizens at birth.
A further 103,648 in the City of Paris and 412,114 in the Paris Region were born in foreign countries with French citizenship at birth. This concerns in particular the many Christians and Jews from North Africa who moved to France and Paris after independence and are not counted as immigrants because they were born French citizens. The remaining group, people born in foreign countries with no French citizenship at birth, are those defined as immigrants under French law. According to the 2012 census, 135,853 residents of the City of Paris were immigrants from Europe, 112,369 were immigrants from the Maghreb, 70,852 from sub-Saharan Africa and Egypt, 5,059 from Turkey, 91,297 from Asia (outside Turkey), 38,858 from the Americas, and 1,365 from the South Pacific. Immigrants from the Americas and the South Pacific in Paris are vastly outnumbered by migrants from French overseas regions and territories located in those regions of the world.
In the Paris Region, 590,504 residents were immigrants from Europe, 627,078 were immigrants from the Maghreb, 435,339 from sub-Saharan Africa and Egypt, 69,338 from Turkey, 322,330 from Asia (outside Turkey), 113,363 from the Americas, and 2,261 from the South Pacific. These last two groups of immigrants are again vastly outnumbered by migrants from French overseas regions and territories located in the Americas and the South Pacific.
In 2012, there were 8,810 British citizens and 10,019 United States citizens living in the City of Paris (Ville de Paris) and 20,466 British citizens and 16,408 United States citizens living in the entire Paris Region (Île-de-France).
At the beginning of the twentieth century, Paris was the largest Catholic city in the world. French census data does not contain information about religious affiliation. According to a 2011 survey by the IFOP, a French public opinion research organisation, 61 percent of residents of the Paris Region (Île-de-France) identified themselves as Roman Catholic, though just 15 percent said they were practising Catholics, while 46 percent were non-practicing. In the same survey, 7 percent of residents identified themselves as Muslims, 4 percent as Protestants, 2 percent as Jewish, and 25 percent as without religion.
According to the INSEE, between 4 and 5 million French residents were born or had at least one parent born in a predominantly Muslim country, particularly Algeria, Morocco, and Tunisia. An IFOP survey in 2008 reported that, of immigrants from these predominantly Muslim countries, 25 percent went to the mosque regularly; 41 percent practised the religion, and 34 percent were believers but did not practice the religion. In 2012 and 2013, it was estimated that there were almost 500,000 Muslims in the City of Paris, 1.5 million Muslims in the Île-de-France region, and 4 to 5 million Muslims in France.
The Jewish population of the Paris Region was estimated in 2014 to be 282,000, the largest concentration of Jews in the world outside of Israel and the United States.
The United Nations Educational, Scientific and Cultural Organization (UNESCO) has had its headquarters in Paris since November 1958. Paris is also the home of the Organisation for Economic Co-operation and Development (OECD). Paris hosts the headquarters of the European Space Agency, the International Energy Agency, European Securities and Markets Authority and, as of 2019, the European Banking Authority.
The economy of the City of Paris is based largely on services and commerce; of the 390,480 enterprises in the city, 80.6 percent are engaged in commerce, transportation, and diverse services, 6.5 percent in construction, and just 3.8 percent in industry. The story is similar in the Paris Region (Île-de-France): 76.7 percent of enterprises are engaged in commerce and services, and 3.4 percent in industry.
At the 2012 census, 59.5% of jobs in the Paris Region were in market services (12.0% in wholesale and retail trade, 9.7% in professional, scientific, and technical services, 6.5% in information and communication, 6.5% in transportation and warehousing, 5.9% in finance and insurance, 5.8% in administrative and support services, 4.6% in accommodation and food services, and 8.5% in various other market services), 26.9% in non-market services (10.4% in human health and social work activities, 9.6% in public administration and defence, and 6.9% in education), 8.2% in manufacturing and utilities (6.6% in manufacturing and 1.5% in utilities), 5.2% in construction, and 0.2% in agriculture.
The Paris Region had 5.4 million salaried employees in 2010, of whom 2.2 million were concentrated in 39 "pôles d'emplois" or business districts. The largest of these, in terms of number of employees, is known in French as the QCA, or "quartier central des affaires" (central business district); it is in the western part of the City of Paris, in the 2nd, 8th, 9th, 16th, and 18th arrondissements. In 2010, it was the workplace of 500,000 salaried employees, about 30 percent of the salaried employees in Paris and 10 percent of those in the Île-de-France. The largest sectors of activity in the central business district were finance and insurance (16 percent of employees in the district) and business services (15 percent). The district also includes a large concentration of department stores, shopping areas, hotels and restaurants, as well as government offices and ministries.
The second-largest business district in terms of employment is La Défense, just west of the city, where many companies installed their offices in the 1990s. In 2010, it was the workplace of 144,600 employees, of whom 38 percent worked in finance and insurance and 16 percent in business support services. Two other important districts, Neuilly-sur-Seine and Levallois-Perret, are extensions of the Paris business district and of La Défense. Another district, including Boulogne-Billancourt, Issy-les-Moulineaux and the southern part of the 15th arrondissement, is a centre of activity for the media and information technology.
The top ten French companies listed in the Fortune Global 500 for 2018 all have their headquarters in the Paris Region; six in the central business district of the City of Paris; and four close to the city in the Hauts-de-Seine Department, three in La Défense and one in Boulogne-Billancourt. Some companies, like Société Générale, have offices in both Paris and La Défense.
The Paris Region is France's leading region for economic activity, with a GDP of €681 billion (~US$850 billion) and €56,000 (~US$70,000) per capita. In 2011, its GDP ranked second among the regions of Europe and its per-capita GDP was the 4th highest in Europe. While the Paris region's population accounted for 18.8 percent of metropolitan France in 2011, the Paris region's GDP accounted for 30 percent of metropolitan France's GDP.
The Paris Region economy has gradually shifted from industry to high-value-added service industries (finance, IT services) and high-tech manufacturing (electronics, optics, aerospace, etc.). The concentration of economic activity in the central Hauts-de-Seine department and the suburban La Défense business district places Paris' economic centre to the west of the city, in a triangle between the "Opéra Garnier", "La Défense" and the "Val de Seine". While the Paris economy is dominated by services, and employment in the manufacturing sector has declined sharply, the region remains an important manufacturing centre, particularly for aeronautics, automobiles, and "eco" industries.
In the 2017 worldwide cost of living survey by the Economist Intelligence Unit, based on a survey made in September 2016, Paris ranked as the seventh most expensive city in the world, and the second most expensive in Europe, after Zurich.
In 2018, Paris tied with Singapore and Hong Kong as the most expensive city in the world.
Station F, a business incubator for startups located in the 13th arrondissement of Paris, is noted as the world's largest startup facility.
According to 2015 INSEE figures, 68.3 percent of employees in the City of Paris work in commerce, transportation, and services; 24.5 percent in public administration, health and social services; 4.1 percent in industry, and 0.1 percent in agriculture.
The majority of Paris' salaried employees fill 370,000 business services jobs, concentrated in the north-western 8th, 16th and 17th arrondissements. Paris' financial service companies are concentrated in the central-western 8th and 9th arrondissement banking and insurance district. Paris' department store district in the 1st, 6th, 8th and 9th arrondissements employs ten percent of Paris workers, most of them women, with 100,000 of these registered in the retail trade. Fourteen percent of Parisians work in hotels, restaurants and other services to individuals. Nineteen percent of Paris employees work for the State, in either administration or education. The majority of Paris' healthcare and social workers work at the hospitals and social housing concentrated in the peripheral 13th, 14th, 18th, 19th and 20th arrondissements. Outside Paris, the La Défense district in the western Hauts-de-Seine department, specialising in finance, insurance and scientific research, employs 144,600, and the north-eastern Seine-Saint-Denis audiovisual sector has 200 media firms and 10 major film studios.
Paris' manufacturing is mostly focused in its suburbs, and the city itself has only around 75,000 manufacturing workers, most of them in the textile, clothing, leather goods, and shoe trades. Paris region manufacturing specialises in transportation, mainly automobiles, aircraft and trains, but this is in sharp decline: manufacturing jobs in Paris proper dropped by 64 percent between 1990 and 2010, and the Paris region lost 48 percent during the same period. Most of this is due to companies relocating outside the Paris region. The Paris region's 800 aerospace companies employed 100,000 workers. Four hundred automobile industry companies employ another 100,000 workers: many of these are centred in the Yvelines department around the Renault and PSA-Citroën plants (this department alone employs 33,000), but the industry as a whole suffered a major loss with the 2014 closing of a major Citroën assembly plant at Aulnay-sous-Bois.
The southern Essonne department specialises in science and technology, and the south-eastern Val-de-Marne, with its wholesale Rungis food market, specialises in food processing and beverages. The Paris region's manufacturing decline is quickly being replaced by eco-industries: these employ about 100,000 workers. In 2011, while only 56,927 construction workers worked in Paris itself, its metropolitan area employed 246,639, in an activity centred largely around the Seine-Saint-Denis (41,378) and Hauts-de-Seine (37,303) departments and the new business-park centres appearing there.
Paris' unemployment rate at the 2015 census was 12.2 percent, and in the first trimester of 2018, its ILO-criteria unemployment rate was 7.1 percent. The provisional unemployment rate for the whole Paris Region was higher, at 8.0 percent, and considerably higher in some suburbs, notably the department of Seine-Saint-Denis to the east (11.8 percent) and the Val-d'Oise to the north (8.2 percent).
The average net household income (after social, pension and health insurance contributions) in Paris was €36,085 for 2011. It ranged from €22,095 in the 19th arrondissement to €82,449 in the 7th arrondissement. The median taxable income for 2011 was around €25,000 in Paris and €22,200 for "Île-de-France". Generally speaking, incomes are higher in the Western part of the city and in the western suburbs than in the northern and eastern parts of the urban area. Unemployment was estimated at 8.2 percent in the City of Paris and 8.8 percent in the Île-de-France region in the first trimester of 2015. It ranged from 7.6 percent in the wealthy Essonne department to 13.1 percent in the Seine-Saint-Denis department, where many recent immigrants live.
While Paris has some of the richest neighbourhoods in France, it also has some of the poorest, mostly on the eastern side of the city. In 2012, 14 percent of households in the city earned less than €977 per month, the official poverty line. Twenty-five percent of residents in the 19th arrondissement lived below the poverty line; 24 percent in the 18th, 22 percent in the 20th and 18 percent in the 10th. In the city's wealthiest neighbourhood, the 7th arrondissement, 7 percent lived below the poverty line; 8 percent in the 6th arrondissement; and 9 percent in the 16th arrondissement.
Greater Paris, comprising Paris and its three surrounding departments, received 24.5 million visitors in 2018, measured by hotel arrivals. These included 11.2 million French visitors. Of foreign visitors, the greatest number came from the United States (2.4 million), Great Britain (1.2 million), Germany (918 thousand) and China (799 thousand).
In 2018, measured by the Euromonitor Global Cities Destination Index, Paris was the second-busiest airline destination in the world, with 19.10 million visitors, behind Bangkok (22.78 million) but ahead of London (19.09 million). According to the Paris Convention and Visitors Bureau, 393,008 workers in Greater Paris, or 12.4% of the total workforce, are engaged in tourism-related sectors such as hotels, catering, transport, and leisure.
The city's top tourist attraction was the Notre Dame Cathedral, which welcomed an estimated 12 million visitors in 2018, but is now closed for renovation after the 15 April 2019 fire, and is not expected to reopen for several years. Second was the Basilique du Sacré-Cœur on Montmartre, with an estimated 11 million visitors. These were followed by the Louvre Museum (10.1 million visitors); the Centre Pompidou (3.5 million); the Musée d'Orsay (3.3 million); the National Museum of Natural History (2.4 million); the Chapel of Our Lady of the Miraculous Medal (2 million); the Arc de Triomphe (1.7 million) and the Sainte-Chapelle (1.3 million visitors).
The centre of Paris contains the most visited monuments in the city, including the Notre Dame Cathedral (now closed for restoration) and the Louvre as well as the Sainte-Chapelle; Les Invalides, where the tomb of Napoleon is located, and the Eiffel Tower are located on the Left Bank south-west of the centre. The Panthéon and the Catacombs of Paris are also located on the Left Bank of the Seine. The banks of the Seine from the Pont de Sully to the Pont d'Iéna have been listed as a UNESCO World Heritage Site since 1991.
Other landmarks are laid out east to west along the historical axis of Paris, which runs from the Louvre through the Tuileries Garden, the Luxor Column in the Place de la Concorde, and the Arc de Triomphe, to the Grande Arche of La Défense.
Several other much-visited landmarks are located in the suburbs of the city: the Basilica of St Denis, in Seine-Saint-Denis, is the birthplace of the Gothic style of architecture and the royal necropolis of French kings and queens. The Paris region hosts three other UNESCO World Heritage Sites: the Palace of Versailles in the west, the Palace of Fontainebleau in the south, and the medieval fairs site of Provins in the east. Also in the Paris region, Disneyland Paris, in Marne-la-Vallée, east of the centre of Paris, received 9.66 million visitors in 2017.
In 2017 Greater Paris had 2,020 hotels, including 85 five-star hotels, with a total of 119,000 rooms. Paris has long been famous for its grand hotels. The Hotel Meurice, opened for British travellers in 1817, was one of the first luxury hotels in Paris. The arrival of the railways and the Paris Exposition of 1855 brought the first flood of tourists and the first modern grand hotels; the Hôtel du Louvre (now an antiques marketplace) in 1855; the Grand Hotel (now the InterContinental Paris Le Grand Hotel) in 1862; and the Hôtel Continental in 1878. The Hôtel Ritz on Place Vendôme opened in 1898, followed by the Hôtel Crillon in an 18th-century building on the Place de la Concorde in 1909; the Hotel Bristol on the Rue du Faubourg Saint-Honoré in 1925; and the Hotel George V in 1928.
In addition to hotels, in 2017 Greater Paris had 84,000 homes registered with Airbnb, which received 2.3 million visitors. Under French law, renters of these units must pay the Paris tourism tax. The company paid the city government 7.3 million euros in 2016.
For centuries, Paris has attracted artists from around the world, who arrive in the city to educate themselves and to seek inspiration from its vast pool of artistic resources and galleries. As a result, Paris has acquired a reputation as the "City of Art". Italian artists were a profound influence on the development of art in Paris in the 16th and 17th centuries, particularly in sculpture and reliefs. Painting and sculpture became the pride of the French monarchy and the French royal family commissioned many Parisian artists to adorn their palaces during the French Baroque and Classicism era. Sculptors such as Girardon, Coysevox and Coustou acquired reputations as the finest artists in the royal court in 17th-century France. Pierre Mignard became the first painter to King Louis XIV during this period. In 1648, the "Académie royale de peinture et de sculpture" (Royal Academy of Painting and Sculpture) was established to accommodate the dramatic interest in art in the capital. It served as France's top art school until 1793.
Paris was in its artistic prime in the 19th century and early 20th century, when it had a colony of artists established in the city and in art schools associated with some of the finest painters of the times: Édouard Manet, Claude Monet, Berthe Morisot, Paul Gauguin, Pierre-Auguste Renoir and others. The French Revolution and political and social change in France had a profound influence on art in the capital. Paris was central to the development of Romanticism in art, with painters such as Gericault. Impressionism, Art Nouveau, Symbolism, Fauvism, Cubism and Art Deco movements all evolved in Paris. In the late 19th century, many artists in the French provinces and worldwide flocked to Paris to exhibit their works in the numerous salons and expositions and make a name for themselves. Artists such as Pablo Picasso, Henri Matisse, Vincent van Gogh, Paul Cézanne, Jean Metzinger, Albert Gleizes, Henri Rousseau, Marc Chagall, Amedeo Modigliani and many others became associated with Paris. Picasso, living in Le Bateau-Lavoir in Montmartre, painted his famous "La Famille de Saltimbanques" and "Les Demoiselles d'Avignon" between 1905 and 1907. Montmartre and Montparnasse became centres for artistic production.
The most prestigious names of French and foreign sculptors, who made their reputation in Paris in the modern era, are Frédéric Auguste Bartholdi (Statue of Liberty – "Liberty Enlightening the World"), Auguste Rodin, Camille Claudel, Antoine Bourdelle, Paul Landowski (statue of "Christ the Redeemer" in Rio de Janeiro) and Aristide Maillol. The Golden Age of the School of Paris ended between the two world wars.
The inventor Nicéphore Niépce produced the first permanent photograph on a polished pewter plate in Paris in 1825. In 1839, after the death of Niépce, Louis Daguerre patented the Daguerrotype, which became the most common form of photography until the 1860s. The work of Étienne-Jules Marey in the 1880s contributed considerably to the development of modern photography. Photography came to occupy a central role in Parisian Surrealist activity, in the works of Man Ray and Maurice Tabard. Numerous photographers achieved renown for their photography of Paris, including Eugène Atget, noted for his depictions of street scenes, Robert Doisneau, noted for his playful pictures of people and market scenes (among which "Le baiser de l'hôtel de ville" has become iconic of the romantic vision of Paris), Marcel Bovis, noted for his night scenes, as well as others such as Jacques-Henri Lartigue and Henri Cartier-Bresson. Poster art also became an important art form in Paris in the late nineteenth century, through the work of Henri de Toulouse-Lautrec, Jules Chéret, Eugène Grasset, Adolphe Willette, Pierre Bonnard, Georges de Feure, Henri-Gabriel Ibels, Paul Gavarni and Alphonse Mucha.
The Louvre received 9.6 million visitors in 2019, ranking it among the most visited museums in the world. Its treasures include the "Mona Lisa" ("La Joconde"), the Venus de Milo statue, and "Liberty Leading the People". The second-most visited museum in the city, with 3.5 million visitors, was the Centre Georges Pompidou, also known as Beaubourg, which houses the Musée National d'Art Moderne. The third most visited Paris museum, in a building constructed for the Paris Universal Exhibition of 1900 as the Orsay railway station, was the Musée d'Orsay, which had 3.3 million visitors in 2018. The Orsay displays French art of the 19th century, including major collections of the Impressionists and Post-Impressionists. The Musée de l'Orangerie, near both the Louvre and the Orsay, also exhibits Impressionists and Post-Impressionists, including most of Claude Monet's large "Water Lilies" murals. The Musée national du Moyen Âge, or Cluny Museum, presents Medieval art, including the famous tapestry cycle of "The Lady and the Unicorn". The Guimet Museum, or "Musée national des arts asiatiques", has one of the largest collections of Asian art in Europe. There are also notable museums devoted to individual artists, including the Musée Picasso, the Musée Rodin and the Musée national Eugène Delacroix.
Paris hosts one of the largest science museums in Europe, the Cité des Sciences et de l'Industrie at La Villette. It attracted 2.2 million visitors in 2018. The National Museum of Natural History, located near the "Jardin des plantes", attracted two million visitors in 2018. It is famous for its dinosaur artefacts, mineral collections and its Gallery of Evolution. The military history of France, from the Middle Ages to World War II, is vividly presented by displays at the Musée de l'Armée at Les Invalides, near the tomb of Napoleon. In addition to the national museums, run by the Ministry of Culture, the City of Paris operates 14 museums, including the Carnavalet Museum on the history of Paris, the Musée d'Art Moderne de la Ville de Paris, the Palais de Tokyo, the House of Victor Hugo, the House of Balzac and the Catacombs of Paris. There are also notable private museums: the Contemporary Art Museum of the Louis Vuitton Foundation, designed by architect Frank Gehry, opened in October 2014 in the Bois de Boulogne. It received 1.1 million visitors in 2018.
The largest opera houses of Paris are the 19th-century Opéra Garnier (historical Paris Opéra) and the modern Opéra Bastille; the former tends toward the more classic ballets and operas, and the latter provides a mixed repertoire of classic and modern. In the middle of the 19th century, there were three other active and competing opera houses: the Opéra-Comique (which still exists), the Théâtre-Italien and the Théâtre Lyrique (which in modern times changed its profile and name to Théâtre de la Ville). The Philharmonie de Paris, the modern symphonic concert hall of Paris, opened in January 2015. Another musical landmark is the Théâtre des Champs-Élysées, where the first performances of Diaghilev's Ballets Russes took place in 1913.
Theatre traditionally has occupied a large place in Parisian culture, and many of its most popular actors today are also stars of French television. The oldest and most famous Paris theatre is the Comédie-Française, founded in 1680. Run by the Government of France, it performs mostly French classics at the Salle Richelieu in the Palais-Royal at 2 rue de Richelieu, next to the Louvre. Other famous theatres include the Odéon-Théâtre de l'Europe, next to the Luxembourg Gardens, also a state institution and theatrical landmark; the Théâtre Mogador; and the Théâtre de la Gaîté-Montparnasse.
The music hall and cabaret are famous Paris institutions. The "Moulin Rouge" was opened in 1889. It was highly visible because of the large red imitation windmill on its roof, and became the birthplace of the dance known as the French Cancan. It helped make famous the singers Mistinguett and Édith Piaf and the painter Toulouse-Lautrec, who made posters for the venue. In 1911, the dance hall Olympia Paris invented the grand staircase as a setting for its shows, competing with its great rival, the "Folies Bergère". Its stars in the 1920s included the American singer and dancer Josephine Baker. Later, Olympia Paris presented Dalida, Édith Piaf, Marlene Dietrich, Miles Davis, Judy Garland and the Grateful Dead.
The Casino de Paris presented many famous French singers, including Mistinguett, Maurice Chevalier and Tino Rossi. Other famous Paris music halls include "Le Lido", on the Champs-Élysées, opened in 1946; and the Crazy Horse Saloon, featuring strip-tease, dance and magic, opened in 1951. A half dozen music halls exist today in Paris, attended mostly by visitors to the city.
The first book printed in France, "Epistolae" ("Letters"), by Gasparinus de Bergamo (Gasparino da Barzizza), was published in Paris in 1470 by the press established by Johann Heynlin. Since then, Paris has been the centre of the French publishing industry, the home of some of the world's best-known writers and poets, and the setting for many classic works of French literature. Almost all the books published in Paris in the Middle Ages were in Latin, rather than French. Paris did not become the acknowledged capital of French literature until the 17th century, with authors such as Boileau, Corneille, La Fontaine, Molière, Racine, several coming from the provinces, as well as the foundation of the Académie française. In the 18th century, the literary life of Paris revolved around the cafés and salons; it was dominated by Voltaire, Jean-Jacques Rousseau, Pierre de Marivaux and Pierre Beaumarchais.
During the 19th century, Paris was the home and subject for some of France's greatest writers, including Charles Baudelaire, Stéphane Mallarmé, Mérimée, Alfred de Musset, Marcel Proust, Émile Zola, Alexandre Dumas, Gustave Flaubert, Guy de Maupassant and Honoré de Balzac. Victor Hugo's "The Hunchback of Notre Dame" inspired the renovation of its setting, the Notre-Dame de Paris. Another of Victor Hugo's works, "Les Misérables", written while he was in exile outside France during the Second Empire, described the social change and political turmoil in Paris in the early 1830s. One of the most popular of all French writers, Jules Verne, worked at the Theatre Lyrique and the Paris stock exchange, while he did research for his stories at the National Library.
In the 20th century, the Paris literary community was dominated by figures such as Colette, André Gide, François Mauriac, André Malraux, Albert Camus, and, after World War II, by Simone de Beauvoir and Jean-Paul Sartre. Between the wars it was the home of many important expatriate writers, including Ernest Hemingway, Samuel Beckett, and, in the 1970s, Milan Kundera. The winner of the 2014 Nobel Prize in Literature, Patrick Modiano (who lives in Paris), based most of his literary work on the depiction of the city during World War II and the 1960s–1970s.
Paris is a city of books and bookstores. In the 1970s, 80 percent of French-language publishing houses were found in Paris, almost all on the Left Bank in the 5th, 6th and 7th arrondissements. Since that time, because of high prices, some publishers have moved out to the less expensive areas. It is also a city of small bookstores. There are about 150 bookstores in the 5th arrondissement alone, plus another 250 book stalls along the Seine. Small Paris bookstores are protected against competition from discount booksellers by French law; books, even e-books, cannot be discounted more than five percent below their publisher's cover price.
In the late 12th century, a school of polyphony was established at Notre-Dame. Among the Trouvères of northern France, a group of Parisian aristocrats became known for their poetry and songs. Troubadours, from the south of France, were also popular. During the reign of François I, in the Renaissance era, the lute became popular in the French court. The French royal family and courtiers "disported themselves in masques, ballets, allegorical dances, recitals, and opera and comedy", and a national musical printing house was established. In the Baroque era, noted composers included Jean-Baptiste Lully, Jean-Philippe Rameau, and François Couperin. The "Conservatoire de Musique de Paris" was founded in 1795. By 1870, Paris had become an important centre for symphony, ballet and operatic music.
Romantic-era composers in Paris include Hector Berlioz ("La Symphonie fantastique"), Charles Gounod ("Faust"), Camille Saint-Saëns ("Samson et Delilah"), Léo Delibes ("Lakmé") and Jules Massenet ("Thaïs"), among others. Georges Bizet's "Carmen" premiered on 3 March 1875; it has since become one of the most popular and frequently performed operas in the classical canon. Among the Impressionist composers who created new works for piano, orchestra, opera, chamber music and other musical forms, the most notable are Claude Debussy ("Suite bergamasque", and its well-known third movement, "Clair de lune", "La Mer", "Pelléas et Mélisande"), Erik Satie ("Gymnopédies", "Je te veux", "Gnossiennes", "Parade") and Maurice Ravel ("Miroirs", "Boléro", "La valse", "L'heure espagnole"). Several foreign-born composers, such as Frédéric Chopin (Poland), Franz Liszt (Hungary), Jacques Offenbach (Germany), Niccolò Paganini (Italy), and Igor Stravinsky (Russia), established themselves or made significant contributions both with their works and their influence in Paris.
Bal-musette is a style of French music and dance that first became popular in Paris in the 1870s and 1880s; by 1880 Paris had some 150 dance halls in the working-class neighbourhoods of the city. Patrons danced the bourrée to the accompaniment of the cabrette (a bellows-blown bagpipe locally called a "musette") and often the vielle à roue (hurdy-gurdy) in the cafés and bars of the city. Parisian and Italian musicians who played the accordion adopted the style and established themselves in Auvergnat bars, especially in the 19th arrondissement, and the romantic sound of the accordion has since become one of the musical icons of the city. Paris became a major centre for jazz and still attracts jazz musicians from all around the world to its clubs and cafés.
Paris is the spiritual home of gypsy jazz in particular, and many of the Parisian jazzmen who developed in the first half of the 20th century began by playing Bal-musette in the city. Django Reinhardt rose to fame in Paris, having moved to the 18th arrondissement in a caravan as a young boy, and performed with violinist Stéphane Grappelli and their Quintette du Hot Club de France in the 1930s and 1940s.
Immediately after World War II, the Saint-Germain-des-Prés quarter and the nearby Saint-Michel quarter became home to many small jazz clubs, mostly found in cellars because of a lack of space; these included the Caveau des Lorientais, the Club Saint-Germain, the Rose Rouge, the Vieux-Colombier, and the most famous, Le Tabou. They introduced Parisians to the music of Claude Luter, Boris Vian, Sidney Bechet, Mezz Mezzrow, and Henri Salvador. Most of the clubs closed by the early 1960s, as musical tastes shifted toward rock and roll.
Some of the finest manouche musicians in the world can be found in Paris, playing in the cafés of the city at night. Some of the more notable jazz venues include the New Morning, Le Sunset, La Chope des Puces and Bouquet du Nord. Several yearly festivals take place in Paris, including the Paris Jazz Festival and the rock festival Rock en Seine. The Orchestre de Paris was established in 1967. On 19 December 2015, Paris and other worldwide fans commemorated the 100th anniversary of the birth of Édith Piaf, a cabaret singer-songwriter and actress who became widely regarded as France's national chanteuse, as well as one of France's greatest international stars. Other singers of a similar style include Maurice Chevalier, Charles Aznavour, Yves Montand and Charles Trenet.
Paris has a large hip hop scene, which grew in popularity during the 1980s. The presence of a large African and Caribbean community helped its development, giving a voice and a political and social identity to many minorities.
The movie industry was born in Paris when Auguste and Louis Lumière projected the first motion picture for a paying audience at the Grand Café on 28 December 1895. Many of Paris' concert and dance halls were transformed into cinemas when the medium became popular, beginning in the 1930s. Later, most of the largest cinemas were divided into multiple, smaller rooms. Paris' largest cinema today is the Grand Rex theatre, with 2,700 seats. Big multiplex cinemas have been built since the 1990s: UGC Ciné Cité Les Halles with 27 screens, MK2 Bibliothèque with 20 screens and UGC Ciné Cité Bercy with 18 screens are among the largest.
Parisians tend to share the same movie-going trends as many of the world's global cities, with cinemas primarily dominated by Hollywood-generated film entertainment. French cinema comes a close second, with major directors ("réalisateurs") such as Claude Lelouch, Jean-Luc Godard, and Luc Besson, and the more slapstick/popular genre with director Claude Zidi as an example. European and Asian films are also widely shown and appreciated. On 2 February 2000, Philippe Binant realised the first digital cinema projection in Europe, in Paris, using the DLP CINEMA technology developed by Texas Instruments.
Since the late 18th century, Paris has been famous for its restaurants and "haute cuisine", food meticulously prepared and artfully presented. A luxury restaurant, La Taverne Anglaise, was opened in 1786 in the arcades of the Palais-Royal by Antoine Beauvilliers; it featured an elegant dining room, an extensive menu, linen tablecloths, a large wine list and well-trained waiters, and it became a model for future Paris restaurants. The restaurant Le Grand Véfour in the Palais-Royal dates from the same period. The famous Paris restaurants of the 19th century, including the Café de Paris, the Rocher de Cancale, the Café Anglais, Maison Dorée and the Café Riche, were mostly located near the theatres on the Boulevard des Italiens; they were immortalised in the novels of Balzac and Émile Zola. Several of the best-known restaurants in Paris today appeared during the "Belle Époque", including Maxim's on Rue Royale, Ledoyen in the gardens of the Champs-Élysées, and the Tour d'Argent on the Quai de la Tournelle.
Today, due to Paris' cosmopolitan population, every French regional cuisine and almost every national cuisine in the world can be found there; the city has more than 9,000 restaurants. The Michelin Guide has been a standard guide to French restaurants since 1900, awarding its highest award, three stars, to the best restaurants in France. In 2018, of the 27 Michelin three-star restaurants in France, ten are located in Paris. These include both restaurants which serve classical French cuisine, such as L'Ambroisie in the Place des Vosges, and those which serve non-traditional menus, such as L'Astrance, which combines French and Asian cuisines. Several of France's most famous chefs, including Pierre Gagnaire, Alain Ducasse, Yannick Alléno and Alain Passard, have three-star restaurants in Paris.
In addition to the classical restaurants, Paris has several other kinds of traditional eating places. The café arrived in Paris in the 17th century, when the beverage was first brought from Turkey, and by the 18th century Parisian cafés were centres of the city's political and cultural life. The Café Procope on the Left Bank dates from this period. In the 20th century, the cafés of the Left Bank, especially Café de la Rotonde and Le Dôme Café in Montparnasse and Café de Flore and Les Deux Magots on Boulevard Saint-Germain, all still in business, were important meeting places for painters, writers and philosophers. A bistro is a type of eating place loosely defined as a neighbourhood restaurant with modest decor and prices, a regular clientele and a congenial atmosphere. Its name is said to have come in 1814 from the Russian soldiers who occupied the city: "bistro" means "quickly" in Russian, and they wanted their meals served rapidly so they could get back to their encampment. Real bistros are increasingly rare in Paris, due to rising costs, competition from cheaper ethnic restaurants, and the changed eating habits of Parisian diners. A brasserie originally was a tavern located next to a brewery, which served beer and food at any hour. Beginning with the Paris Exposition of 1867, it became a popular kind of restaurant which featured beer and other beverages served by young women in the national costume associated with the beverage, particularly German costumes for beer. Now brasseries, like cafés, serve food and drinks throughout the day.
Since the 19th century, Paris has been an international fashion capital, particularly in the domain of haute couture (clothing hand-made to order for private clients). It is home to some of the largest fashion houses in the world, including Dior and Chanel, as well as many other well-known and more contemporary fashion designers, such as Karl Lagerfeld, Jean-Paul Gaultier, Yves Saint Laurent, Givenchy, and Christian Lacroix. Paris Fashion Week, held in January and July in the Carrousel du Louvre among other renowned city locations, is one of the top four events on the international fashion calendar; the other fashion capitals of the world, Milan, London, and New York, also host fashion weeks. Paris is also home to the world's largest cosmetics company, L'Oréal, as well as three of the top five global makers of luxury fashion accessories: Louis Vuitton, Hermès, and Cartier. Most of the major fashion designers have their showrooms along the Avenue Montaigne, between the Champs-Élysées and the Seine.
Bastille Day, a celebration of the storming of the Bastille in 1789, the biggest festival in the city, is a military parade taking place every year on 14 July on the Champs-Élysées, from the Arc de Triomphe to Place de la Concorde. It includes a flypast over the Champs Élysées by the Patrouille de France, a parade of military units and equipment, and a display of fireworks in the evening, the most spectacular being the one at the Eiffel Tower.
Some other yearly festivals are Paris-Plages, a festive event that lasts from mid-July to mid-August when the Right Bank of the Seine is converted into a temporary beach with sand, deck chairs and palm trees; Journées du Patrimoine, Fête de la Musique, Techno Parade, Nuit Blanche, Cinéma au clair de lune, Printemps des rues, Festival d'automne, and Fête des jardins. The Carnaval de Paris, one of the oldest festivals in Paris, dates back to the Middle Ages.
Paris is the département with the highest proportion of highly educated people. In 2009, around 40 percent of Parisians held a "licence"-level diploma or higher, the highest proportion in France, while 13 percent had no diploma, the third-lowest percentage in France. Education in Paris and the Île-de-France region employs approximately 330,000 people, 170,000 of whom are teachers and professors teaching approximately 2.9 million children and students in around 9,000 primary, secondary, and higher education schools and institutions.
The University of Paris, founded in the 12th century, is often called the Sorbonne after one of its original medieval colleges. It was broken up into thirteen autonomous universities in 1970, following the student demonstrations in 1968. Most of the campuses today are in the Latin Quarter where the old university was located, while others are scattered around the city and the suburbs.
The Paris region hosts France's highest concentration of the "grandes écoles" – 55 specialised centres of higher education outside the public university structure. The prestigious public universities are usually considered "grands établissements". Most of the "grandes écoles" were relocated to the suburbs of Paris in the 1960s and 1970s, in new campuses much larger than the old campuses within the crowded City of Paris, though the École Normale Supérieure has remained on rue d'Ulm in the 5th arrondissement. There is a high number of engineering schools, led by the Paris Institute of Technology, which comprises several colleges such as École Polytechnique, École des Mines, AgroParisTech, Télécom Paris, Arts et Métiers, and École des Ponts et Chaussées. There are also many business schools, including HEC, INSEAD, ESSEC, and ESCP Europe. The administrative school ENA has been relocated to Strasbourg; the political science school Sciences Po is still located in Paris' 7th arrondissement; the most prestigious institution for social sciences, the École des hautes études en sciences sociales, is located in Paris' 6th arrondissement; and the most prestigious university of economics and finance, Paris-Dauphine, is located in Paris' 16th. The Parisian school of journalism CELSA, a department of the Paris-Sorbonne University, is located in Neuilly-sur-Seine. Paris is also home to several of France's most famous high schools, such as Lycée Louis-le-Grand, Lycée Henri-IV, Lycée Janson de Sailly and Lycée Condorcet. The National Institute of Sport and Physical Education, located in the 12th arrondissement, is both a physical education institute and a high-level training centre for elite athletes.
The "Bibliothèque nationale de France" (BnF) operates public libraries in Paris, among them the François Mitterrand Library, Richelieu Library, Louvois, Opéra Library, and Arsenal Library. There are three public libraries in the 4th arrondissement. The Forney Library, in the Marais district, is dedicated to the decorative arts; the Arsenal Library occupies a former military building and has a large collection on French literature; and the Bibliothèque historique de la ville de Paris, also in Le Marais, contains the Paris historical research service. The Sainte-Geneviève Library is in the 5th arrondissement; designed by Henri Labrouste and built in the mid-1800s, it contains a rare book and manuscript division. The Bibliothèque Mazarine, in the 6th arrondissement, is the oldest public library in France. The Médiathèque Musicale Mahler in the 8th arrondissement opened in 1986 and contains collections related to music. The François Mitterrand Library (nicknamed "Très Grande Bibliothèque") in the 13th arrondissement was completed in 1994 to a design of Dominique Perrault and contains four glass towers.
There are several academic libraries and archives in Paris. The Sorbonne Library in the 5th arrondissement is the largest university library in Paris. In addition to the Sorbonne location, there are branches in Malesherbes, Clignancourt-Championnet, Michelet-Institut d'Art et d'Archéologie, Serpente-Maison de la Recherche, and Institut des Etudes Ibériques. Other academic libraries include Interuniversity Pharmaceutical Library, Leonardo da Vinci University Library, Paris School of Mines Library, and the René Descartes University Library.
Paris' most popular sports clubs are the association football club Paris Saint-Germain F.C. and the rugby union clubs Stade Français and Racing 92, the last of which is based just outside the city proper. The 80,000-seat Stade de France, built for the 1998 FIFA World Cup, is located just north of Paris in the commune of Saint-Denis. It is used for football, rugby union and track and field athletics. It hosts the French national football team for friendlies and major tournament qualifiers, annually hosts the French national rugby team's home matches of the Six Nations Championship, and hosts several important matches of the Stade Français rugby team. In addition to Paris Saint-Germain F.C., the city has a number of other professional and amateur football clubs: Paris FC, Red Star, RCF Paris and Stade Français Paris.
Paris hosted the 1900 and 1924 Summer Olympics and will host the 2024 Summer Olympics and Paralympic Games.
The city also hosted the finals of the 1938 FIFA World Cup (at the Stade Olympique de Colombes), as well as the 1998 FIFA World Cup and the 2007 Rugby World Cup Final (both at the Stade de France). Two UEFA Champions League Finals in the current century have also been played in the Stade de France: the 2000 and 2006 editions. Paris has most recently been the host for UEFA Euro 2016, both at the Parc des Princes in the city proper and also at Stade de France, with the latter hosting the opening match and final.
The final stage of the most famous bicycle race in the world, the Tour de France, always finishes in Paris. Since 1975, the race has finished on the Champs-Élysées.
Tennis is another popular sport in Paris and throughout France; the French Open, held every year on the red clay of the Roland Garros National Tennis Centre, is one of the four Grand Slam events of the world professional tennis tour. The 17,000-seat Bercy Arena (officially named "AccorHotels Arena" and formerly known as the "Palais Omnisports de Paris-Bercy") is the venue for the annual Paris Masters ATP Tour tennis tournament and has been a frequent site of national and international tournaments in basketball, boxing, cycling, handball, ice hockey, show jumping and other sports. The Bercy Arena also hosted the 2017 IIHF World Ice Hockey Championship, together with Cologne, Germany. The final stages of the FIBA EuroBasket 1999 were also played at the Palais Omnisports de Paris-Bercy.
The basketball team Levallois Metropolitans plays some of its games at the 4,000 capacity Stade Pierre de Coubertin. Another top-level professional team, Nanterre 92, plays in Nanterre.
Paris is a major rail, highway, and air transport hub. Île-de-France Mobilités (IDFM), formerly the Syndicat des transports d'Île-de-France (STIF) and before that the Syndicat des transports parisiens (STP), oversees the transit network in the region. The syndicate coordinates public transport and contracts it out to the RATP (operating 347 bus lines, the Métro, eight tramway lines, and sections of the RER), the SNCF (operating suburban rails, one tramway line and the other sections of the RER) and the Optile consortium of private operators managing 1,176 bus lines.
A central hub of the national rail network, Paris' six major railway stations (Gare du Nord, Gare de l'Est, Gare de Lyon, Gare d'Austerlitz, Gare Montparnasse, Gare Saint-Lazare) and a minor one (Gare de Bercy) are connected to three networks: the TGV serving four high-speed rail lines, the normal speed Corail trains, and the suburban rails (Transilien).
Since the inauguration of its first line in 1900, Paris's Métro network has grown to become the city's most widely used local transport system; today it carries about 5.23 million passengers daily through 16 lines and 303 stations (385 stops). Superimposed on this is a 'regional express network', the RER, whose five lines (A, B, C, D, and E) and 257 stops connect Paris to more distant parts of the urban area.
Over €26.5 billion will be invested over the next 15 years to extend the Métro network into the suburbs, with notably the Grand Paris Express project.
In addition, the Paris region is served by a light rail network of nine lines, the tramway: Line T1 runs from Asnières-Gennevilliers to Noisy-le-Sec; Line T2 from Pont de Bezons to Porte de Versailles; Line T3a from Pont du Garigliano to Porte de Vincennes; Line T3b from Porte de Vincennes to Porte d'Asnières; Line T5 from Saint-Denis to Garges-Sarcelles; Line T6 from Châtillon to Viroflay; Line T7 from Villejuif to Athis-Mons; and Line T8 from Saint-Denis to Épinay-sur-Seine and Villetaneuse. All of these are operated by the RATP Group, while Line T4, running from Bondy RER to Aulnay-sous-Bois, is operated by the state rail carrier SNCF. Five new light rail lines are currently in various stages of development.
Paris is a major international air transport hub with the 5th busiest airport system in the world. The city is served by three commercial international airports: Paris-Charles de Gaulle, Paris-Orly and Beauvais-Tillé. Together these three airports recorded traffic of 96.5 million passengers in 2014. There is also one general aviation airport, Paris-Le Bourget, historically the oldest Parisian airport and closest to the city centre, which is now used only for private business flights and air shows.
Orly Airport, located in the southern suburbs of Paris, replaced Le Bourget as the principal airport of Paris from the 1950s to the 1980s. Charles de Gaulle Airport, located on the edge of the northern suburbs of Paris, opened to commercial traffic in 1974 and became the busiest Parisian airport in 1993. For the year 2017 it was the 5th busiest airport in the world by international traffic and it is the hub for the nation's flag carrier Air France. Beauvais-Tillé Airport, located north of Paris' city centre, is used by charter airlines and low-cost carriers such as Ryanair.
Domestically, air travel between Paris and some of France's largest cities such as Lyon, Marseille, or Strasbourg has been in a large measure replaced by high-speed rail due to the opening of several high-speed TGV rail lines from the 1980s. For example, after the LGV Méditerranée opened in 2001, air traffic between Paris and Marseille declined from 2,976,793 passengers in 2000 to 1,502,196 passengers in 2014. After the LGV Est opened in 2007, air traffic between Paris and Strasbourg declined from 1,006,327 passengers in 2006 to 157,207 passengers in 2014.
Internationally, air traffic has increased markedly in recent years between Paris and the Gulf airports, the emerging nations of Africa, Russia, Turkey, Portugal, Italy, and mainland China, whereas noticeable decline has been recorded between Paris and the British Isles, Egypt, Tunisia, and Japan.
The city is also the most important hub of France's motorway network, and is surrounded by three orbital freeways: the Périphérique, which follows the approximate path of the 19th-century fortifications around Paris; the A86 motorway in the inner suburbs; and finally the Francilienne motorway in the outer suburbs. Paris has an extensive network of highways and motorways.
The Paris region is the most active water transport area in France, with most of the cargo handled by Ports of Paris in facilities located around Paris. The rivers Loire, Rhine, Rhone, Meuse, and Scheldt can be reached by canals connecting with the Seine, which include the Canal Saint-Martin, Canal Saint-Denis, and the Canal de l'Ourcq.
Paris has an extensive network of cycle paths and routes. These include the "piste cyclable" (a bike lane separated from other traffic by a physical barrier such as a kerb) and the "bande cyclable" (a bicycle lane marked by a painted path on the road). Some specially marked bus lanes are free for use by cyclists, with a protective barrier guarding against encroachment by vehicles. Cyclists have also been given the right to ride in both directions on certain one-way streets. Paris offers a bike-sharing system called Vélib', with more than 20,000 public bicycles distributed among 1,800 parking stations, which can be rented for short and medium distances, including one-way trips.
Electricity is provided to Paris through a peripheral grid fed by multiple sources. Around 50% of the electricity generated in the Île-de-France comes from cogeneration plants located near the outer limits of the region; other energy sources include the Nogent Nuclear Power Plant (35%), waste incineration (9% – the associated cogeneration plants also supply the city with heat), methane gas (5%), hydropower (1%), solar power (0.1%) and a negligible amount of wind power (0.034 GWh). A quarter of the city's district heating is to come from a plant in Saint-Ouen-sur-Seine, burning a 50/50 mix of coal and 140,000 tonnes of wood pellets from the United States per year.
Paris in its early history had only the rivers Seine and Bièvre for water. From 1809, the Canal de l'Ourcq provided Paris with water from less-polluted rivers to the north-east of the capital. From 1857, the civil engineer Eugène Belgrand, under Napoleon III, oversaw the construction of a series of new aqueducts that brought water from locations all around the city to several reservoirs built atop the capital's highest points of elevation. From then on, the new reservoir system became Paris's principal source of drinking water, and the remains of the old system, pumped into lower levels of the same reservoirs, were thereafter used for the cleaning of Paris's streets. This system is still a major part of Paris's modern water-supply network. Today Paris has an extensive network of underground passageways dedicated to the evacuation of its liquid wastes.
In 1982, Mayor Chirac introduced the motorcycle-mounted Motocrotte to remove dog faeces from Paris streets. The project was abandoned in 2002 for a new and better enforced local law, under the terms of which dog owners can be fined up to €500 for not removing their dog faeces. The air pollution in Paris, from the point of view of particulate matter (PM10), is the highest in France with 38 μg/m³.
Paris today has more than 421 municipal parks and gardens, covering more than 3,000 hectares and containing more than 250,000 trees. Two of Paris's oldest and most famous gardens are the Tuileries Garden (created in 1564 for the Tuileries Palace and redone by André Le Nôtre between 1664 and 1672) and the Luxembourg Garden, for the Luxembourg Palace, built for Marie de' Medici in 1612, which today houses the Senate. The "Jardin des plantes" was the first botanical garden in Paris, created in 1626 by Louis XIII's doctor Guy de La Brosse for the cultivation of medicinal plants.
Between 1853 and 1870, Emperor Napoleon III and the city's first director of parks and gardens, Jean-Charles Alphand, created the Bois de Boulogne, the Bois de Vincennes, Parc Montsouris and the Parc des Buttes-Chaumont, located at the four points of the compass around the city, as well as many smaller parks, squares and gardens in the city's quarters. Since 1977, the city has created 166 new parks, most notably the Parc de la Villette (1987), Parc André Citroën (1992), Parc de Bercy (1997) and Parc Clichy-Batignolles (2007). One of the newest parks, the Promenade des Berges de la Seine (2013), built on a former highway on the left bank of the Seine between the Pont de l'Alma and the Musée d'Orsay, has floating gardens and gives a view of the city's landmarks.
Weekly Parkruns take place in the Bois de Boulogne and the Parc Montsouris.
During the Roman era, the city's main cemetery was located on the outskirts of the left-bank settlement, but this changed with the rise of Catholic Christianity, when nearly every inner-city church had adjoining burial grounds for the use of its parish. With Paris's growth many of these, particularly the city's largest cemetery, the Holy Innocents' Cemetery, were filled to overflowing, creating unsanitary conditions for the capital. When inner-city burials were condemned from 1786, the contents of all of Paris's parish cemeteries were transferred to a renovated section of Paris's stone mines outside the "Porte d'Enfer" city gate, today place Denfert-Rochereau in the 14th arrondissement. The process of moving bones from the Cimetière des Innocents to the catacombs took place between 1786 and 1814; part of the network of tunnels and remains can be visited today on the official tour of the catacombs.
After a tentative creation of several smaller suburban cemeteries, the Prefect Nicholas Frochot under Napoleon Bonaparte provided a more definitive solution in the creation of several massive Parisian cemeteries outside the city limits. Open from 1804, these were the cemeteries of Père Lachaise, Montmartre, Montparnasse, and later Passy; these cemeteries became inner-city once again when Paris annexed all neighbouring communes inside its much larger ring of suburban fortifications in 1860. New suburban cemeteries were created in the early 20th century: the largest of these are the Cimetière parisien de Saint-Ouen, the Cimetière parisien de Pantin (also known as Cimetière parisien de Pantin-Bobigny), the Cimetière parisien d'Ivry, and the Cimetière parisien de Bagneux. Some of the most famous people in the world are buried in Parisian cemeteries, such as Oscar Wilde and Serge Gainsbourg, among others.
Health care and emergency medical service in the City of Paris and its suburbs are provided by the Assistance publique – Hôpitaux de Paris (AP-HP), a public hospital system that employs more than 90,000 people (including practitioners, support personnel, and administrators) in 44 hospitals. It is the largest hospital system in Europe. It provides health care, teaching, research, prevention, education and emergency medical service in 52 branches of medicine. The hospitals receive more than 5.8 million annual patient visits.
One of the most notable hospitals is the Hôtel-Dieu, founded in 651, the oldest hospital in the city, although the current building is the product of a reconstruction of 1877. Other hospitals include Pitié-Salpêtrière Hospital (one of the largest in Europe), Hôpital Cochin, Bichat–Claude Bernard Hospital, Hôpital Européen Georges-Pompidou, Bicêtre Hospital, Beaujon Hospital, the Curie Institute, Lariboisière Hospital, Necker–Enfants Malades Hospital, Hôpital Saint-Louis, Hôpital de la Charité and the American Hospital of Paris.
Paris and its close suburbs are home to numerous newspapers, magazines and publications, including "Le Monde", "Le Figaro", "Libération", "Le Nouvel Observateur", "Le Canard enchaîné", "La Croix", "Pariscope", "Le Parisien" (in Saint-Ouen), "Les Échos", "Paris Match" (Neuilly-sur-Seine), "Réseaux & Télécoms", Reuters France, and "L'Officiel des Spectacles". France's two most prestigious newspapers, "Le Monde" and "Le Figaro", are the centrepieces of the Parisian publishing industry. Agence France-Presse is France's oldest, and one of the world's oldest, continually operating news agencies. AFP, as it is colloquially abbreviated, maintains its headquarters in Paris, as it has since 1835. France 24 is a television news channel owned and operated by the French government, and is based in Paris. Another news agency is France Diplomatie, owned and operated by the Ministry of Foreign and European Affairs, which pertains solely to diplomatic news and occurrences.
The most-viewed network in France, TF1, is in nearby Boulogne-Billancourt. France 2, France 3, Canal+, France 5, M6 (Neuilly-sur-Seine), Arte, D8, W9, NT1, NRJ 12, La Chaîne parlementaire, France 4, BFM TV, and Gulli are other stations located in and around the capital. Radio France, France's public radio broadcaster, and its various channels, is headquartered in Paris's 16th arrondissement. Radio France Internationale, another public broadcaster, is also based in the city. Paris also holds the headquarters of La Poste, France's national postal carrier.
Since 9 April 1956, Paris has been exclusively and reciprocally twinned only with:
Paris has agreements of friendship and co-operation with:
Paul Cohen
Paul Joseph Cohen (April 2, 1934 – March 23, 2007) was an American mathematician. He is best known for his proofs that the continuum hypothesis and the axiom of choice are independent from Zermelo–Fraenkel set theory, for which he was awarded a Fields Medal.
Cohen was born in Long Branch, New Jersey, into a Jewish family that had immigrated to the United States from what is now Poland; he grew up in Brooklyn. He graduated in 1950, at age 16, from Stuyvesant High School in New York City.
Cohen next studied at Brooklyn College from 1950 to 1953, but he left without earning his bachelor's degree when he learned that he could start his graduate studies at the University of Chicago with just two years of college. At Chicago, Cohen completed his master's degree in mathematics in 1954 and his Doctor of Philosophy degree in 1958, under the supervision of Antoni Zygmund. The title of his doctoral thesis was "Topics in the Theory of Uniqueness of Trigonometrical Series".
On June 2, 1995, Cohen received an honorary doctorate from the Faculty of Science and Technology at Uppsala University, Sweden.
Cohen is noted for developing a mathematical technique called forcing, which he used to prove that neither the continuum hypothesis (CH) nor the axiom of choice can be proved from the standard Zermelo–Fraenkel axioms (ZF) of set theory. In conjunction with the earlier work of Gödel, this showed that both of these statements are logically independent of the ZF axioms: these statements can be neither proved nor disproved from these axioms. In this sense, the continuum hypothesis is undecidable, and it is the most widely known example of a natural statement that is independent from the standard ZF axioms of set theory.
For his result on the continuum hypothesis, Cohen won the Fields Medal in mathematics in 1966, and also the National Medal of Science in 1967. As of 2018, the Fields Medal that Cohen won remains the only Fields Medal awarded for work in mathematical logic.
Apart from his work in set theory, Cohen also made many valuable contributions to analysis. He was awarded the Bôcher Memorial Prize in mathematical analysis in 1964 for his paper "On a conjecture by Littlewood and idempotent measures", and lends his name to the Cohen–Hewitt factorization theorem.
Cohen was a full professor of mathematics at Stanford University. He was an Invited Speaker at the ICM in 1962 in Stockholm and in 1966 in Moscow.
Angus MacIntyre of the Queen Mary University of London stated about Cohen: "He was dauntingly clever, and one would have had to be naive or exceptionally altruistic to put one's 'hardest problem' to the Paul I knew in the '60s." He went on to compare Cohen to Kurt Gödel, saying: "Nothing more dramatic than their work has happened in the history of the subject." Gödel himself wrote a letter to Cohen in 1963, a draft of which stated, "Let me repeat that it is really a delight to read your proof of the ind[ependence] of the cont[inuum] hyp[othesis]. I think that in all essential respects you have given the best possible proof & this does not happen frequently. Reading your proof had a similarly pleasant effect on me as seeing a really good play."
While studying the continuum hypothesis, Cohen is quoted as saying in 1985 that he had "had the feeling that people thought the problem was hopeless, since there was no new way of constructing models of set theory. Indeed, they thought you had to be slightly crazy even to think about the problem."
"A point of view which the author [Cohen] feels may eventually come to be accepted is that CH is obviously false. The main reason one accepts the axiom of infinity is probably that we feel it absurd to think that the process of adding only one set at a time can exhaust the entire universe. Similarly with the higher axioms of infinity. Now ℵ₁ is the cardinality of the set of countable ordinals, and this is merely a special and the simplest way of generating a higher cardinal. The set C [the continuum] is, in contrast, generated by a totally new and more powerful principle, namely the power set axiom. It is unreasonable to expect that any description of a larger cardinal which attempts to build up that cardinal from ideas deriving from the replacement axiom can ever reach C.
Thus C is greater than ℵ_n, ℵ_ω, ℵ_a, where a = ℵ_ω, etc. This point of view regards C as an incredibly rich set given to us by one bold new axiom, which can never be approached by any piecemeal process of construction. Perhaps later generations will see the problem more clearly and express themselves more eloquently."
An "enduring and powerful product" of Cohen's work on the continuum hypothesis, and one that has been used by "countless mathematicians" is known as "forcing", and it is used to construct mathematical models to test a given hypothesis for truth or falsehood.
Shortly before his death, Cohen gave a lecture describing his solution to the problem of the continuum hypothesis at the 2006 Gödel centennial conference in Vienna.
Cohen and his wife, Christina (née Karls), had three sons. Cohen died on March 23, 2007 in Stanford, California after suffering from lung disease.
Presbyterianism
Presbyterianism is a part of the Reformed tradition within Protestantism, which traces its origins to Great Britain, particularly Scotland.
Presbyterian churches derive their name from the presbyterian form of church government, which is governed by representative assemblies of elders. A great number of Reformed churches are organized this way, but the word "Presbyterian", when capitalized, is often applied uniquely to churches that trace their roots to the Church of Scotland, as well as several English dissenter groups that formed during the English Civil War. Presbyterian theology typically emphasizes the sovereignty of God, the authority of the Scriptures, and the necessity of grace through faith in Christ. Presbyterian church government was ensured in Scotland by the Acts of Union in 1707, which created the Kingdom of Great Britain. In fact, most Presbyterians found in England can trace a Scottish connection, and the Presbyterian denomination was also taken to North America, mostly by Scots and Scots-Irish immigrants. The Presbyterian denominations in Scotland hold to the Reformed theology of John Calvin and his immediate successors, although there is a range of theological views within contemporary Presbyterianism. Local congregations of churches which use presbyterian polity are governed by sessions made up of representatives of the congregation (elders); a conciliar approach which is found at other levels of decision-making (presbytery, synod and general assembly).
The roots of Presbyterianism lie in the Reformation of the 16th century, the example of John Calvin's Republic of Geneva being particularly influential. Most Reformed churches that trace their history back to Scotland are either presbyterian or congregationalist in government. In the twentieth century, some Presbyterians played an important role in the ecumenical movement, including the World Council of Churches. Many Presbyterian denominations have found ways of working together with other Reformed denominations and Christians of other traditions, especially in the World Communion of Reformed Churches. Some Presbyterian churches have entered into unions with other churches, such as Congregationalists, Lutherans, Anglicans, and Methodists. Presbyterians in the United States came largely from Scottish immigrants, Scots-Irish immigrants, and also from New England Yankee communities that had originally been Congregational but changed because of an agreed-upon Plan of Union of 1801 for frontier areas. Historically, along with Lutherans and Episcopalians, Presbyterians tend to be considerably wealthier and better educated (having more graduate and post-graduate degrees per capita) than most other religious groups in the United States, and are disproportionately represented in the upper reaches of American business, law and politics.
Presbyterian tradition, particularly that of the Church of Scotland, traces its early roots to the Church founded by Saint Columba, through the 6th century Hiberno-Scottish mission. Tracing their apostolic origin to Saint John, the Culdees practiced Christian monasticism, a key feature of Celtic Christianity in the region, with a presbyter exercising "authority within the institution, while the different monastic institutions were independent of one another." The Church in Scotland kept the Christian feast of Easter at a date different from the See of Rome and its monks used a unique style of tonsure. The Synod of Whitby in 664, however, ended these distinctives as it ruled "that Easter would be celebrated according to the Roman date, not the Celtic date." Although Roman influence came to dominate the Church in Scotland, certain Celtic influences remained in the Scottish Church, such as "the singing of metrical psalms, many of them set to old Celtic Christianity Scottish traditional and folk tunes", which later became a "distinctive part of Scottish Presbyterian worship".
Presbyterian history is part of the history of Christianity, but the beginning of Presbyterianism as a distinct movement occurred during the 16th-century Protestant Reformation. As the Catholic Church resisted the reformers, several different theological movements splintered from the Church and bore different denominations. Presbyterianism was especially influenced by the French theologian John Calvin, who is credited with the development of Reformed theology, and by the work of John Knox, a Scotsman and former Roman Catholic priest who studied with Calvin in Geneva, Switzerland, and brought Reformed teachings back to Scotland. The Presbyterian church traces its ancestry back primarily to England and Scotland. In August 1560 the Parliament of Scotland adopted the "Scots Confession" as the creed of the Scottish Kingdom. In December 1560, the "First Book of Discipline" was published, outlining important doctrinal issues but also establishing regulations for church government, including the creation of ten ecclesiastical districts with appointed superintendents, which later became known as presbyteries.
In time, the Scots Confession would be supplanted by the Westminster Confession of Faith, and the Larger and Shorter Catechisms, which were formulated by the Westminster Assembly between 1643 and 1649.
Presbyterians distinguish themselves from other denominations by doctrine, institutional organization (or "church order") and worship; often using a "Book of Order" to regulate common practice and order. The origins of the Presbyterian churches are in Calvinism. Many branches of Presbyterianism are remnants of previous splits from larger groups. Some of the splits have been due to doctrinal controversy, while some have been caused by disagreement concerning the degree to which those ordained to church office should be required to agree with the Westminster Confession of Faith, which historically serves as an important confessional document – second only to the Bible, yet directing particularities in the standardization and translation of the Bible – in Presbyterian churches.
Presbyterians place great importance upon education and lifelong learning, tempered with the knowledge that no human action can affect salvation.
Continuous study of the scriptures, theological writings, and understanding and interpretation of church doctrine are embodied in several statements of faith and catechisms formally adopted by various branches of the church, often referred to as "subordinate standards".
Presbyterian government is by councils (properly known as "courts") of elders. Teaching and ruling elders are ordained and convene in the lowest council, known as a "session" or "consistory", responsible for the discipline, nurture, and mission of the local congregation. Teaching elders (pastors or ministers) have responsibility for teaching, worship, and performing sacraments. Pastors or ministers are called by individual congregations. A congregation issues a call for the pastor or minister's service, but this call must be ratified by the local presbytery. The pastor or minister is a teaching elder, and Moderator of the Session, but is not usually a member of the congregation.
Ruling elders are men and women who are elected by the congregation and ordained to serve with the teaching elders, assuming responsibility for nurture and leadership of the congregation. Often, especially in larger congregations, the elders delegate the practicalities of buildings, finance, and temporal ministry to the needy in the congregation to a distinct group of officers (sometimes called deacons, who are ordained in some denominations). This group may variously be known as a "Deacon Board", "Board of Deacons", "Diaconate", or "Deacons' Court". These officers are sometimes known as "presbyters" to the full congregation.
Above the sessions exist presbyteries, which have area responsibilities. These are composed of teaching elders and ruling elders from each of the constituent congregations. The presbytery sends representatives to a broader regional or national assembly, generally known as the General Assembly, although an intermediate level of a "synod" sometimes exists. This congregation / presbytery / synod / general assembly schema is based on the historical structure of the larger Presbyterian churches, such as the Church of Scotland or the Presbyterian Church (U.S.A.); some bodies, such as the Presbyterian Church in America and the Presbyterian Church in Ireland, skip one of the steps between congregation and General Assembly, and usually the step skipped is the Synod. The Church of Scotland abolished the Synod in 1993.
Presbyterian governance is practised by Presbyterian denominations and also by many other Reformed churches.
Presbyterianism is historically a confessional tradition. This has two implications. The obvious one is that confessional churches express their faith in the form of "confessions of faith," which have some level of authoritative status. However this is based on a more subtle point: In confessional churches, theology is not solely an individual matter. While individuals are encouraged to understand Scripture, and may challenge the current institutional understanding, theology is carried out by the community as a whole. It is this community understanding of theology that is expressed in confessions.
However, there has arisen a spectrum of approaches to confessionalism. The manner of subscription, or the degree to which the official standards establish the actual doctrine of the church, turns out to be a practical matter. That is, the decisions rendered in ordination and in the courts of the church largely determine what the church means, representing the whole, by its adherence to the doctrinal standard.
Some Presbyterian traditions adopt only the Westminster Confession of Faith as the doctrinal standard to which teaching elders are required to subscribe, in contrast to the Larger and Shorter catechisms, which are approved for use in instruction. Many Presbyterian denominations, especially in North America, have adopted all of the Westminster Standards as their standard of doctrine which is subordinate to the Bible. These documents are Calvinistic in their doctrinal orientation. The Presbyterian Church in Canada retains the Westminster Confession of Faith in its original form, while admitting the historical period in which it was written should be understood when it is read.
The Westminster Confession is "The principal subordinate standard of the Church of Scotland" but "with due regard to liberty of opinion in points which do not enter into the substance of the Faith" (V). This formulation represents many years of struggle over the extent to which the confession reflects the Word of God and the struggle of conscience of those who came to believe it did not fully do so (e.g. William Robertson Smith). Some Presbyterian Churches, such as the Free Church of Scotland, have no such "conscience clause".
The Presbyterian Church (U.S.A.) has adopted the "Book of Confessions", which reflects the inclusion of other Reformed confessions in addition to the Westminster Standards. These other documents include ancient creedal statements (the Nicene Creed, the Apostles' Creed), 16th-century Reformed confessions (the Scots Confession, the Heidelberg Catechism, the Second Helvetic Confession), and 20th century documents (The Theological Declaration of Barmen, Confession of 1967 and A Brief Statement of Faith).
The Presbyterian Church in Canada developed the confessional document "Living Faith" (1984) and retains it as a subordinate standard of the denomination. It is confessional in format, yet like the Westminster Confession, draws attention back to original Bible text.
Presbyterians in Ireland who rejected Calvinism and the Westminster Confessions formed the Non-subscribing Presbyterian Church of Ireland.
Presbyterian denominations that trace their heritage to the British Isles usually organise their church services inspired by the principles in the Directory of Public Worship, developed by the Westminster Assembly in the 1640s. This directory documented Reformed worship practices and theology adopted and developed over the preceding century by British Puritans, initially guided by John Calvin and John Knox. It was enacted as law by the Scottish Parliament, and became one of the foundational documents of Presbyterian church legislation elsewhere.
Historically, the driving principle in the development of the standards of Presbyterian worship is the Regulative principle of worship, which specifies that (in worship), what is not commanded is forbidden.
Over subsequent centuries, many Presbyterian churches modified these prescriptions by introducing hymnody, instrumental accompaniment, and ceremonial vestments into worship. However, there is not one fixed "Presbyterian" worship style. Although there are set services for the Lord's Day in keeping with first-day Sabbatarianism, one can find a service to be evangelical and even revivalist in tone (especially in some conservative denominations), or strongly liturgical, approximating the practices of Lutheranism or Anglicanism (especially where Scottish tradition is esteemed), or semi-formal, allowing for a balance of hymns, preaching, and congregational participation (favored by probably most American Presbyterians). Most Presbyterian churches follow the traditional liturgical year and observe the traditional holidays, holy seasons, such as Advent, Christmas, Ash Wednesday, Holy Week, Easter, Pentecost, etc. They also make use of the appropriate seasonal liturgical colors, etc. Many incorporate ancient liturgical prayers and responses into the communion services and follow a daily, seasonal, and festival lectionary. Other Presbyterians, however, such as the Reformed Presbyterians, would practice a cappella exclusive psalmody, as well as eschew the celebration of holy days.
Among the paleo-orthodox and emerging church movements in Protestant and evangelical churches, in which some Presbyterians are involved, clergy are moving away from the traditional black Geneva gown to such vestments as the alb and chasuble, but also cassock and surplice (typically a full length Old English style surplice which resembles the Celtic alb, an ungirdled liturgical tunic of the old Gallican Rite), which some, particularly those identifying with the Liturgical Renewal Movement, hold to be more ancient and representative of a more ecumenical past.
Presbyterians traditionally have held the position that there are only two sacraments: baptism and the Lord's Supper.
Early Presbyterians were careful to distinguish between the "church," which referred to the "members", and the "meeting house," which was the building in which the church met. Until the late 19th century, very few Presbyterians ever referred to their buildings as "churches." Presbyterians believed that meeting-houses (now called churches) are buildings to support the worship of God. The decor in some instances was austere so as not to detract from worship. Early Presbyterian meeting-houses were extremely plain. No stained glass, no elaborate furnishings, and no images were to be found in the meeting-house. The pulpit, often raised so as only to be accessible by a staircase, was the centerpiece of the building. These austere characteristics were not typical of mainline Presbyterians, however, but of the wave of Presbyterians influenced by the Puritans and their emphasis on simplicity.
In the late 19th century a gradual shift began to occur. Prosperous congregations built imposing churches, such as Fourth Presbyterian Church of Chicago, Brick Presbyterian Church in New York City, Shadyside Presbyterian Church in Pennsylvania, St Stephen Presbyterian in Fort Worth, Texas, and many others.
Usually a Presbyterian church will not have statues of saints, nor the ornate altar more typical of a Roman Catholic church. Instead, one will find a "communion table," usually on the same level as the congregation. There may be a rail between the communion table and the chancel behind it, which may contain a more decorative altar-type table, choir loft or choir stalls, lectern, and clergy area. Presbyterians call the altar the communion table and the altar area the chancel. In a Presbyterian (Reformed) church there may be an altar cross, either on the communion table or on a table in the chancel. By using the "empty" cross, or cross of the resurrection, Presbyterians emphasize the resurrection and that Christ is not continually dying, but died once and is alive for all eternity. Some Presbyterian church buildings are decorated with a cross that has a circle around the center, or Celtic cross; this not only emphasizes the resurrection but also acknowledges historical aspects of Presbyterianism. A baptismal font will be located either at the entrance or near the chancel area. Presbyterian architecture generally makes significant use of symbolism, and one may also find decorative and ornate stained glass windows depicting scenes from the Bible. Some Presbyterian churches also have ornate statues of Christ or graven scenes from the Last Supper located behind the chancel. St Giles' Cathedral (Church of Scotland, the mother church of Presbyterianism) has a crucifix hanging beside one of its pulpits; its image of Christ is faint and of a more modern design.
John Knox (1505–1572), a Scot who had spent time studying under Calvin in Geneva, returned to Scotland and urged his countrymen to reform the Church in line with Calvinist doctrines. After a period of religious convulsion and political conflict culminating in a victory for the Protestant party at the Siege of Leith, the authority of the Catholic Church was abolished in favour of Reformation by the legislation of the Scottish Reformation Parliament in 1560. The Church was eventually organised by Andrew Melville along Presbyterian lines to become the national Church of Scotland. King James VI and I moved the Church of Scotland towards an episcopal form of government, and in 1637, James's successor, Charles I, and William Laud, the Archbishop of Canterbury, attempted to force the Church of Scotland to use the Book of Common Prayer. What resulted was an armed insurrection, with many Scots signing the "Solemn League and Covenant". The Covenanters would serve as the government of Scotland for nearly a decade, and would also send military support to the Parliamentarians during the English Civil War. Following the restoration of the monarchy in 1660, Charles II, despite the initial support that he received from the Covenanters, reimposed an episcopal form of government on the church.
However, with the Glorious Revolution of 1688 the Church of Scotland was finally unequivocally recognised as a Presbyterian institution by the monarch, owing to Scottish Presbyterian support for that revolution, and the Acts of Union 1707 between Scotland and England guaranteed the Church of Scotland's form of government. Nevertheless, legislation by the United Kingdom parliament allowing patronage led to splits in the Church. In 1733, a group of ministers seceded from the Church of Scotland to form the Associate Presbytery, another group seceded in 1761 to form the Relief Church, and the Disruption of 1843 led to the formation of the Free Church of Scotland. Further splits took place, especially over theological issues, but most Presbyterians in Scotland were reunited by the 1929 union of the established Church of Scotland and the United Free Church of Scotland.
There are eight Presbyterian denominations in Scotland today. These are, in order of size: the Church of Scotland, the Free Church of Scotland, the United Free Church of Scotland, the Free Presbyterian Church of Scotland, the Free Church of Scotland (Continuing), the Associated Presbyterian Church, the Reformed Presbyterian Church of Scotland, and the International Presbyterian Church. Combined, they have over 1500 congregations in Scotland.
Within Scotland the term kirk is usually used to refer to a local Presbyterian church. Informally, the term 'The Kirk' refers to the Church of Scotland. Some of the values and ideals espoused by Scottish Presbyterian denominations are reflected in a book by Norman Drummond, chaplain to the Queen in Scotland.
In England, Presbyterianism was established in secret in 1592. Thomas Cartwright is thought to be the first Presbyterian in England. Cartwright's controversial lectures at Cambridge University condemning the episcopal hierarchy of the Elizabethan Church led to the deprivation of his post by Archbishop John Whitgift and his emigration abroad. Between 1645 and 1648, a series of ordinances of the Long Parliament established Presbyterianism as the polity of the Church of England. Presbyterian government was established in London and Lancashire and in a few other places in England, although Presbyterian hostility to the execution of Charles I and the establishment of the republican Commonwealth of England meant that Parliament never enforced the Presbyterian system in England. The re-establishment of the monarchy in 1660 brought the return of episcopal church government in England (and in Scotland for a short time); but the Presbyterian church in England continued in Nonconformity, outside of the established church. In 1719 a major split, the Salters' Hall controversy, occurred, with the majority siding with nontrinitarian views. Thomas Bradbury published several sermons bearing on the controversy, including, in 1719, "An answer to the reproaches cast on the dissenting ministers who subscribed their belief of the Eternal Trinity". By the 18th century many English Presbyterian congregations had become Unitarian in doctrine.
A number of new Presbyterian churches were founded by Scottish immigrants to England in the 19th century and later. Following the 'Disruption' of 1843, many of those linked to the Church of Scotland eventually joined what became the Presbyterian Church of England in 1876. Some, namely Crown Court (Covent Garden, London), St Andrew's (Stepney, London) and Swallow Street (London), did not join the English denomination, which is why there are Church of Scotland congregations in England such as those at Crown Court and St Columba's, Pont Street (Knightsbridge) in London. There is also a congregation in the heart of London's financial district, London City Presbyterian Church, which is affiliated with the Free Church of Scotland.
In 1972, the Presbyterian Church of England (PCofE) united with the Congregational Church in England and Wales to form the United Reformed Church (URC). Among the congregations the PCofE brought to the URC were Tunley (Lancashire), Aston Tirrold (Oxfordshire) and John Knox Presbyterian Church, Stepney, London (now part of Stepney Meeting House URC) – these are among the few survivors today of the English Presbyterian churches of the 17th century. The URC also has a presence in Scotland, mostly of former Congregationalist churches. Two former Presbyterian congregations, St Columba's, Cambridge (founded in 1879), and St Columba's, Oxford (founded as a chaplaincy by the PCofE and the Church of Scotland in 1908 and as a congregation of the PCofE in 1929), continue as congregations of the URC and university chaplaincies of the Church of Scotland.
In recent years a number of smaller denominations adopting Presbyterian forms of church government have organised in England, including the International Presbyterian Church planted by evangelical theologian Francis Schaeffer of L'Abri Fellowship in the 1970s, and the Evangelical Presbyterian Church in England and Wales founded in the North of England in the late 1980s.
In Wales, Presbyterianism is represented by the Presbyterian Church of Wales, which was originally composed largely of Calvinistic Methodists who accepted Calvinist theology rather than the Arminianism of the Wesleyan Methodists. They broke off from the Church of England in 1811, ordaining their own ministers. Originally known as the Calvinistic Methodist connexion, the church became alternatively known as the Presbyterian Church of Wales in the 1920s.
Presbyterianism is the largest Protestant denomination in Northern Ireland and the second largest on the island of Ireland (after the Anglican Church of Ireland), and was brought by Scottish plantation settlers to Ulster who had been strongly encouraged to emigrate by James VI of Scotland, also James I of Ireland and England. An estimated 100,000 Scottish Presbyterians moved to the northern counties of Ireland between 1607 and the Battle of the Boyne in 1690. The Presbytery of Ulster was formed in 1642 separately from the established Anglican Church. Presbyterians, along with Roman Catholics in Ulster and the rest of Ireland, suffered under the discriminatory Penal Laws until they were revoked in the early 19th century. Presbyterianism is represented in Ireland by the Presbyterian Church in Ireland, the Non-subscribing Presbyterian Church of Ireland, the Free Presbyterian Church of Ulster, the Reformed Presbyterian Church of Ireland and the Evangelical Presbyterian Church.
There is a Church of Scotland (Presbyterian) in central Paris: The Scots Kirk, which is English-speaking, and is attended by many nationalities. It maintains close links with the Church of Scotland in Scotland itself, as well as with the Reformed Church of France.
The Waldensian Evangelical Church (Chiesa Evangelica Valdese, CEV) is an Italian Protestant denomination.
The church was founded in the 12th century, and centuries later, after the Protestant Reformation, it adhered to Calvinist theology and became the Italian branch of the Presbyterian churches. As such, the church is a member of the World Communion of Reformed Churches.
Even before Presbyterianism spread with immigrants abroad from Scotland, there were divisions in the larger Presbyterian family. Some later rejoined only to separate again. In what some interpret as rueful self-reproach, some Presbyterians refer to the divided Presbyterian churches as the "Split Ps".
Presbyterianism first officially arrived in Colonial America in 1644 with the establishment of Christ's First Presbyterian Church in Hempstead, New York. The church was organized by the Rev. Richard Denton.
Another notable milestone was the establishment of the first presbytery, in Philadelphia, in 1703. In time, the presbytery would be joined by two more to form a synod (1717) and would eventually evolve into the Presbyterian Church in the United States of America in 1789. The nation's largest Presbyterian denomination, the Presbyterian Church (U.S.A.) – PC(USA) – can trace its heritage back to the original PCUSA, as can the Presbyterian Church in America (PCA), the Orthodox Presbyterian Church (OPC), the Bible Presbyterian Church (BPC), the Cumberland Presbyterian Church (CPC), the Cumberland Presbyterian Church in America, the Evangelical Presbyterian Church (EPC), and the Evangelical Covenant Order of Presbyterians (ECO).
Other Presbyterian bodies in the United States include the Reformed Presbyterian Church of North America (RPCNA), the Associate Reformed Presbyterian Church (ARP), the Reformed Presbyterian Church in the United States (RPCUS), the Reformed Presbyterian Church General Assembly, the Reformed Presbyterian Church – Hanover Presbytery, the Covenant Presbyterian Church, the Presbyterian Reformed Church, the Westminster Presbyterian Church in the United States, the Korean American Presbyterian Church, and the Free Presbyterian Church of North America.
The territory around Charlotte, North Carolina, historically holds the greatest concentration of Presbyterianism in the Southern United States, while an almost identical geographic area around Pittsburgh, Pennsylvania, contains probably the largest number of Presbyterians in the entire nation.
The PC(USA), beginning with its predecessor bodies, has, in common with other so-called "mainline" Protestant denominations, experienced a significant decline in members in recent years. Some estimates have placed that loss at nearly half in the last forty years.
Presbyterian influence, especially through Princeton theology, can be traced in modern Evangelicalism. Balmer says that:
In the late 1800s, Presbyterian missionaries established a presence in what is now northern New Mexico. This provided an alternative to Catholicism, which had been brought to the area by the Spanish conquistadors and had remained largely unchanged. The area experienced a "mini" reformation, in that many converts were made to Presbyterianism, prompting persecution. In some cases, the converts left towns and villages to establish their own neighboring villages. The arrival of the United States in the area prompted the Catholic church to modernize and make efforts at winning the converts back, many of whom did return. However, there are still stalwart Presbyterians and Presbyterian churches in the area.
In Canada, the largest Presbyterian denomination – and indeed the largest Protestant denomination – was the Presbyterian Church in Canada, formed in 1875 with the merger of four regional groups. In 1925, the United Church of Canada was formed by the majority of Presbyterians combining with the Methodist Church, Canada, and the Congregational Union of Canada. A sizable minority of Canadian Presbyterians, primarily in southern Ontario but also throughout the entire nation, withdrew, and reconstituted themselves as a non-concurring continuing Presbyterian body. They regained use of the original name in 1939.
Presbyterianism arrived in Latin America in the 19th century.
The biggest Presbyterian church is the National Presbyterian Church in Mexico ("Iglesia Nacional Presbiteriana de México"), which has around 2,500,000 members and associates and 3,000 congregations, but there are other smaller denominations, such as the Associate Reformed Presbyterian Church in Mexico, founded in 1875 by the Associate Reformed Church in North America. The Independent Presbyterian Church, the Presbyterian Reformed Church in Mexico, and the National Conservative Presbyterian Church in Mexico are other existing churches in the Reformed tradition.
In Brazil, the Presbyterian Church of Brazil ("Igreja Presbiteriana do Brasil") totals approximately 1,011,300 members; other Presbyterian churches (Independent, United, Conservative, Renewed, etc.) in this nation have around 350,000 members. The Renewed Presbyterian Church in Brazil was influenced by the charismatic movement and had about 131,000 members as of 2011. The Conservative Presbyterian Church was founded in 1940 and has eight presbyteries. The Fundamentalist Presbyterian Church in Brazil was influenced by Carl McIntire and the Bible Presbyterian Church (USA) and has around 1,800 members. The Independent Presbyterian Church in Brazil, founded in 1903 by pastor Pereira, has 500 congregations and 75,000 members. The United Presbyterian Church in Brazil has around 4,000 members. There are also ethnic Korean Presbyterian churches in the country. The Evangelical Reformed Church in Brazil has Dutch origins. The Reformed Churches in Brazil were recently founded by the Canadian Reformed Churches with the Reformed Church in the Netherlands (Liberated).
Congregational churches present in the country are also part of the Calvinistic tradition in Latin America.
There are probably more than four million members of Presbyterian churches in all of Latin America. Presbyterian churches are also present in Peru, Bolivia, Cuba, Trinidad and Tobago, Venezuela, Colombia, Chile, Paraguay, Costa Rica, Nicaragua, Argentina, Honduras and others, but with few members. The Presbyterian Church in Belize has 14 churches and church plants and there is a Reformed Seminary founded in 2004. Some Latin Americans in North America are active in the Presbyterian Cursillo Movement.
Presbyterianism arrived in Africa in the 19th century through the work of Scottish missionaries and founded churches such as St Michael and All Angels Church, Blantyre, Malawi. The church has grown extensively and now has a presence in at least 23 countries in the region.
African Presbyterian churches often incorporate diaconal ministries, including social services, emergency relief, and the operation of mission hospitals. A number of partnerships exist between presbyteries in Africa and the PC(USA), including specific connections with Lesotho, Cameroon, Malawi, South Africa, Ghana and Zambia. For example, the Lackawanna Presbytery, located in northeastern Pennsylvania, has a partnership with a presbytery in Ghana, and Southminster Presbyterian Church, located near Pittsburgh, has partnerships with churches in Malawi and Kenya. The Presbyterian Church of Nigeria is also healthy and strong, with its greatest density in the southern and south-eastern states of the country, from Cross River state and the nearby coastal states of Rivers and Lagos to Ebonyi and Abia. The missionary expeditions of Mary Slessor and Hope Waddell and their colleagues in the mid-19th century in the coastal regions of the then British colony brought about the beginning and flourishing of the church in these areas.
The Presbyterian Church of East Africa, based in Kenya, is particularly strong, with 500 clergy and 4 million members.
The Reformed Presbyterian Church in Malawi has 150 congregations and 17,000–20,000 members. It was a mission of the Free Presbyterian Church of Scotland. The Restored Reformed Church works with the RPCM. The Evangelical Presbyterian Church in Malawi is a small existing church. Part of the Presbyterian Church in Malawi and Zambia is known as the CCAP, the Church of Central Africa Presbyterian. Often the churches there have one main congregation from which a number of prayer houses develop. Education and health ministries, as well as worship and spiritual development, are important.
Southern Africa is a major base of Reformed and Presbyterian Churches.
In addition, there are a number of Presbyterian churches in north Africa, the best known being the Nile Synod in Egypt and a recently founded synod for Sudan.
Cumberland Presbyterian Church Yao Dao Secondary School is a Presbyterian school in Yuen Long, New Territories, Hong Kong. The Cumberland Presbyterian Church also has a church on the island of Cheung Chau. There are also Korean Christians resident in Hong Kong who are Presbyterians.
Presbyterian churches are the largest and by far the most influential Protestant denominations in South Korea, with close to 20,000 churches affiliated with the two largest Presbyterian denominations in the country. South Korea has 9 million Presbyterians, forming the majority of the country's 15 million Protestants, spread across 100 different Presbyterian denominations.
Most of the Korean Presbyterian denominations share the same name in Korean, 대한예수교장로회 (literally, the Presbyterian Church of Korea, or PCK), tracing their roots to the United Presbyterian Assembly before its long history of disputes and schisms. The Presbyterian schism began with the controversy over the Japanese shrine worship enforced during the Japanese colonial period and the establishment of a minor division (Koryu-pa, 고려파, later the Koshin Presbyterian Church in Korea, Koshin, 고신) in 1952. A second schism occurred in 1953, when the theological orientation of the Chosun Seminary (later Hanshin University), founded in 1947, could not be tolerated in the PCK and another minor group (the Presbyterian Church in the Republic of Korea, Kijang, 기장) separated. The last major schism had to do with the issue of whether the PCK should join the WCC. The controversy divided the PCK into two denominations, the Presbyterian Church of Korea (Tonghap, 통합) and the General Assembly of Presbyterian Church in Korea (Hapdong, 합동), in 1959. All major seminaries associated with each denomination claim heritage from the Pyung Yang Theological Seminary; therefore, not only Presbyterian University and Theological Seminary and Chongsin University, which are related to the PCK, but also Hanshin University of the PROK all celebrated their 100th graduating class in 2007, 100 years from the first graduates of Pyung Yang Theological Seminary.
Korean Presbyterian denominations are active in evangelism and many of its missionaries are being sent overseas, being the second biggest missionary sender in the world after the United States. GMS, the missionary body of the "Hapdong" General Assembly of Presbyterian Churches of Korea, is the single largest Presbyterian missionary organization in Korea.
In addition, there are many Korean-American Presbyterians in the United States, either with their own church sites or sharing space in pre-existing churches, as is the case in Australia, New Zealand and even Muslim countries such as Saudi Arabia that have received Korean immigration.
The Korean Presbyterian Church started through the missions of the Presbyterian Church (USA) and the Australian Presbyterian Church, so the American theological tradition was central. But after independence, the 'Presbyterian Church in Korea (KoRyuPa)' advocated a Dutch Reformed position. In 2012, a new General Assembly of the Orthodox Presbyterian Church of Korea (founded by Ha Seung-moo) declared itself the authentic historical successor of the Scottish Presbyterianism of John Knox.
The Presbyterian Church in Taiwan (PCT) is by far the largest Protestant denomination in Taiwan, with some 238,372 members as of 2009 (including a majority of the island's aborigines). English Presbyterian Missionary James Laidlaw Maxwell established the first Presbyterian church in Tainan in 1865. His colleague George Leslie Mackay, of the Canadian Presbyterian Mission, was active in Tamsui and north Taiwan from 1872 to 1901; he founded the island's first university and hospital, and created a written script for Taiwanese Minnan. The English and Canadian missions joined together as the PCT in 1912. One of the few churches permitted to operate in Taiwan through the era of Japanese rule (1895–1945), the PCT experienced rapid growth during the era of Kuomintang-imposed martial law (1949–1987), in part due to its support for democracy, human rights, and Taiwan independence. Former ROC president Lee Teng-hui (in office 1988–2000) is a Presbyterian.
In the mainly Christian Indian state of Mizoram, Presbyterianism is the largest of all Christian denominations. It was brought there by missionaries from Wales in 1894. Before Mizoram, Welsh Presbyterians had ventured into northeast India through the Khasi Hills (now in the state of Meghalaya) and established Presbyterian churches all over the Khasi Hills from the 1840s onwards. Hence, there is a strong presence of Presbyterians in Shillong (the present capital of Meghalaya) and the areas adjoining it. The Welsh missionaries built their first church in Sohra (also known as Cherrapunji) in 1846. The Presbyterian church in India was integrated in 1970 into the United Church of Northern India (originally formed in 1924), which is the largest Presbyterian denomination in India.
In Australia, Presbyterianism is the fourth largest denomination of Christianity, with nearly 600,000 Australians claiming to be Presbyterian in the 2006 Commonwealth Census. Presbyterian churches were founded in each colony, some with links to the Church of Scotland and others to the Free Church. There were also congregations originating from United Presbyterian Church of Scotland as well as a number founded by John Dunmore Lang. Most of these bodies merged between 1859 and 1870, and in 1901 formed a federal union called the Presbyterian Church of Australia but retaining their state assemblies. The Presbyterian Church of Eastern Australia representing the Free Church of Scotland tradition, and congregations in Victoria of the Reformed Presbyterian Church, originally from Ireland, are the other existing denominations dating from colonial times.
In 1977, two-thirds of the Presbyterian Church of Australia, along with most of the Congregational Union of Australia and all of the Methodist Church of Australasia, combined to form the Uniting Church in Australia. The third that did not unite had various reasons for so acting, often cultural attachment but often conservative theological or social views. The permission for the ordination of women given in 1974 was rescinded in 1991 without affecting the two or three existing women ministers. The approval of women elders given in the 1960s has been rescinded in all states except New South Wales, which has the largest membership. The theology of the church is now generally conservative and Reformed. A number of small Presbyterian denominations have arisen since the 1950s through migration or schism.
In New Zealand, Presbyterianism is the dominant denomination in Otago and Southland, due largely to the rich Scottish and, to a lesser extent, Ulster-Scots heritage of the region. The area around Christchurch, Canterbury, is dominated by the Anglican denomination.
Originally there were two branches of Presbyterianism in New Zealand, the northern Presbyterian church which existed in the North Island and the parts of the South Island north of the Waitaki River, and the Synod of Otago and Southland, founded by Free Church settlers in southern South Island. The two churches merged in 1901, forming what is now the Presbyterian Church of Aotearoa New Zealand.
In addition to the Presbyterian Church of Aotearoa New Zealand, there is also a more conservative Presbyterian church called Grace Presbyterian Church of New Zealand. Many of its members left the largely liberal PCANZ because they were seeking a more Biblical church. It has 17 churches throughout New Zealand.
The Presbyterian Church in Vanuatu is the largest denomination in the country, with approximately one-third of the population of Vanuatu members of the church. The PCV was taken to Vanuatu by missionaries from Scotland. The PCV (Presbyterian Church of Vanuatu) is headed by a moderator with offices in Port Vila. The PCV is particularly strong in the provinces of Tafea, Shefa, and Malampa. The Province of Sanma is mainly Presbyterian with a strong Roman Catholic minority in the Francophone areas of the province. There are some Presbyterian people, but no organised Presbyterian churches in Penama and Torba, both of which are traditionally Anglican. Vanuatu is the only country in the South Pacific with a significant Presbyterian heritage and membership. The PCV is a founding member of the Vanuatu Christian Council (VCC). The PCV runs many primary schools and Onesua secondary school. The church is strong in the rural villages.
Parliament
In modern politics and history, a parliament is a legislative body of government. Generally, a modern parliament has three functions: representing the electorate, making laws, and overseeing the government via hearings and inquiries. The term is similar to the idea of a senate, synod or congress, and is commonly used in countries that are current or former monarchies, a form of government with a monarch as the head. Some contexts restrict the use of the word "parliament" to parliamentary systems, although it is also used to describe the legislature in some presidential systems (e.g. the Parliament of Ghana), even where it is not in the official name.
Historically, parliaments included various kinds of deliberative, consultative, and judicial assemblies, e.g. medieval parliaments.
The English term is derived from Anglo-Norman and dates to the 14th century, coming from the 11th-century Old French "parlement", from "parler", meaning "to talk". The meaning evolved over time, originally referring to any discussion, conversation, or negotiation through various kinds of deliberative or judicial groups, often summoned by a monarch. By the 15th century, in Britain, it had come to specifically mean the legislature.
Since ancient times, when societies were tribal, there were councils or a headman whose decisions were assessed by village elders. This is called tribalism. Some scholars suggest that in ancient Mesopotamia there was a primitive democratic government where the kings were assessed by council. The same has been said about ancient India, where some form of deliberative assemblies existed, and therefore there was some form of democracy. However, these claims are not accepted by most scholars, who see these forms of government as oligarchies.
Ancient Athens was the cradle of democracy. The Athenian assembly (ἐκκλησία, "ekklesia") was the most important institution, and every free male citizen could take part in the discussions; slaves and women could not. However, Athenian democracy was not representative, but rather direct, and therefore the "ekklesia" was different from the parliamentary system.
The Roman Republic had legislative assemblies, which had the final say regarding the election of magistrates, the enactment of new statutes, the carrying out of capital punishment, the declaration of war and peace, and the creation (or dissolution) of alliances. The Roman Senate controlled money, administration, and the details of foreign policy.
Some Muslim scholars argue that the Islamic shura (a method of taking decisions in Islamic societies) is analogous to the parliament. However, others highlight what they consider fundamental differences between the shura system and the parliamentary system.
The first recorded signs of a council to decide on different issues in ancient Iran date back to 247 BC, when the Parthian empire was in power. The Parthians established the first Iranian empire since the conquest of Persia by Alexander. In the early years of their rule, an assembly of the nobles called "Mehestan" was formed that made the final decision on serious issues of state.
The word "Mehestan" consists of two parts: "meh", a word of Old Persian origin which literally means "the great", and "-stan", a suffix in the Persian language which denotes a particular place. Altogether, Mehestan means a place where the greats come together.
The Mehestan Assembly, which consisted of Zoroastrian religious leaders and clan elders, exerted great influence over the administration of the kingdom.
One of the most important decisions of the council took place in 208 AD, when a civil war broke out and the Mehestan decided that the empire would be ruled by two brothers simultaneously, Ardavan V and Blash V.
In 224 AD, with the dissolution of the Parthian empire after more than 470 years, the Mehestan council came to an end.
Although there are documented councils held in 873, 1020, 1050 and 1063, there was no representation of commoners. What is considered to be the first parliament with the presence of commoners, the Cortes of León, was held in the Kingdom of León in 1188. According to UNESCO, the Decreta of León of 1188 is the oldest documentary manifestation of the European parliamentary system. In addition, UNESCO granted the 1188 Cortes of Alfonso IX the title of "Memory of the World", and the city of León has been recognized as the "Cradle of Parliamentarism".
After coming to power, King Alfonso IX, facing an attack by his two neighbors, Castile and Portugal, decided to summon the "Royal Curia". This was a medieval organization composed of aristocrats and bishops, but because of the seriousness of the situation and the need to maximize political support, Alfonso IX decided to also call the representatives of the urban middle class from the most important cities of the kingdom to the assembly. León's Cortes dealt with matters such as the right to private property, the inviolability of the domicile, the right to appeal to justice against the King, and the obligation of the King to consult the Cortes before entering a war. Prelates, nobles and commoners met separately in the three estates of the Cortes. In this meeting, new laws were approved to protect commoners against arbitrary acts by nobles, prelates and the king. This important set of laws is known as the "Carta Magna Leonesa".
Following this event, new Cortes would appear in the other territories that would come to make up Spain: the Principality of Catalonia in 1192, the Kingdom of Castile in 1250, the Kingdom of Aragon in 1274, the Kingdom of Valencia in 1283 and the Kingdom of Navarre in 1300.
After the union of the Kingdoms of León and Castile under the Crown of Castile, their Cortes were united as well in 1258. The Castilian Cortes had representatives from Burgos, Toledo, León, Seville, Córdoba, Murcia, Jaén, Zamora, Segovia, Ávila, Salamanca, Cuenca, Toro, Valladolid, Soria, Madrid, Guadalajara and Granada (after 1492). The Cortes' assent was required to pass new taxes, and it could also advise the king on other matters. The comunero rebels intended a stronger role for the Cortes, but were defeated by the forces of Habsburg Emperor Charles V in 1521. The Cortes maintained some power, though it became more of a consultative entity. By the time of King Philip II, Charles's son, the Castilian Cortes had come under functionally complete royal control, with its delegates dependent on the Crown for their income.
The Cortes of the Crown of Aragon kingdoms retained their power to control the king's spending with regard to the finances of those kingdoms. But after the War of the Spanish Succession and the victory of another royal house – the Bourbons – and King Philip V, their Cortes were suppressed (those of Aragon and Valencia in 1707, and those of Catalonia and the Balearic islands in 1714).
The very first Cortes representing the whole of Spain (and the Spanish empire of the day) assembled in 1812 in Cádiz, where it operated as a government in exile, since at that time most of the rest of Spain was in the hands of Napoleon's army.
After its self-proclamation as an independent kingdom in 1139 by Afonso I of Portugal (followed by the recognition by the Kingdom of León in the Treaty of Zamora of 1143), the first historically established Cortes of the Kingdom of Portugal occurred in 1211 in Coimbra by initiative of Afonso II of Portugal. These established the first general laws of the kingdom ("Leis Gerais do Reino"): protection of the king's property, stipulation of measures for the administration of justice and the rights of his subjects to be protected from abuses by royal officials, and confirmation of the clerical donations of the previous king, Sancho I of Portugal. These Cortes also affirmed the validity of canon law for the Church in Portugal, while introducing the prohibition of the purchase of lands by churches or monasteries (although they could still be acquired by donation or legacy).
After the conquest of Algarve in 1249, the Kingdom of Portugal completed its Reconquista. In 1254 King Afonso III of Portugal summoned Portuguese Cortes in Leiria, with the inclusion of burghers from old and newly incorporated municipalities. This inclusion establishes the Cortes of Leiria of 1254 as the second example of modern parliamentarism in the history of Europe (after the Cortes of León in 1188). In these Cortes the monetagio was introduced: a fixed sum was to be paid by the burghers to the Crown as a substitute for the septennium (the traditional revision of the face value of coinage by the Crown every seven years). These Cortes also introduced staple laws on the Douro River, favoring the new royal city of Vila Nova de Gaia at the expense of the old episcopal city of Porto.
The Portuguese Cortes met again under King Afonso III of Portugal in 1256, 1261 and 1273, always by royal summons. Medieval Kings of Portugal continued to rely on small assemblies of notables, and only summoned the full Cortes on extraordinary occasions. A Cortes would be called if the king wanted to introduce new taxes, change some fundamental laws, announce significant shifts in foreign policy (e.g. ratify treaties), or settle matters of royal succession – issues where the cooperation and assent of the towns was thought necessary. Changing taxation (especially requesting war subsidies) was probably the most frequent reason for convening the Cortes. As the nobles and clergy were largely tax-exempt, setting taxation involved intensive negotiations between the royal council and the burgher delegates at the Cortes.
Delegates ("procuradores") not only considered the king's proposals, but, in turn, also used the Cortes to submit petitions of their own to the royal council on a myriad of matters, e.g. extending and confirming town privileges, punishing abuses of officials, introducing new price controls, constraints on Jews, pledges on coinage, etc. The royal response to these petitions became enshrined as ordinances and statutes, thus giving the Cortes the aspect of a legislature. These petitions were originally referred to as "aggravamentos" (grievances) then "artigos" (articles) and eventually "capitulos" (chapters). In a Cortes-Gerais, petitions were discussed and voted upon separately by each estate and required the approval of at least two of the three estates before being passed up to the royal council. The proposal was then subject to royal veto (either accepted or rejected by the king in its entirety) before becoming law.
Nonetheless, the exact extent of Cortes power was ambiguous. Kings insisted on their ancient prerogative to promulgate laws independently of the Cortes. The compromise, in theory, was that ordinances enacted in Cortes could only be modified or repealed by Cortes. But even that principle was often circumvented or ignored in practice.
The Cortes probably had their heyday in the 14th and 15th centuries, reaching their apex when John I of Portugal relied almost wholly upon the bourgeoisie for his power. For a period after the 1383–1385 Crisis, the Cortes were convened almost annually. But as time went on, they became less important. Portuguese monarchs, tapping into the riches of the Portuguese empire overseas, grew less dependent on Cortes subsidies and convened them less frequently. John II (r.1481-1495) used them to break the high nobility, but dispensed with them otherwise. Manuel I (r.1495-1521) convened them only four times in his long reign. By the time of Sebastian (r.1554–1578), the Cortes was practically an irrelevance.
Curiously, the Cortes gained a new importance with the Iberian Union of 1581, finding a role as the representative of Portuguese interests to the new Habsburg monarch. The Cortes played a critical role in the 1640 Restoration, and enjoyed a brief period of resurgence during the reign of John IV of Portugal (r.1640-1656). But by the end of the 17th century, it found itself sidelined once again. The last Cortes met in 1698, for the mere formality of confirming the appointment of Infante John (the future John V of Portugal) as the successor of Peter II of Portugal. Thereafter, Portuguese kings ruled as absolute monarchs and no Cortes were assembled for over a century. This state of affairs came to an end with the Liberal Revolution of 1820, which set in motion the introduction of a new constitution and a permanent and proper parliament, which nevertheless inherited the name of Cortes Gerais.
England has long had a tradition of a body of men who would assist and advise the king on important matters. Under the Anglo-Saxon kings, there was an advisory council, the Witenagemot. The name derives from the Old English ƿitena ȝemōt, or witena gemōt, for "meeting of wise men". The first recorded act of a witenagemot was the law code issued by King Æthelberht of Kent ca. 600, the earliest document which survives in sustained Old English prose; however, the witan was certainly in existence long before this time. The Witan, along with the folkmoots (local assemblies), is an important ancestor of the modern English parliament.
As part of the Norman Conquest of England, the new king, William I, did away with the Witenagemot, replacing it with a Curia Regis ("King's Council"). Membership of the Curia was largely restricted to the tenants in chief, the few nobles who "rented" great estates directly from the king, along with ecclesiastics. William brought to England the feudal system of his native Normandy, and sought the advice of the curia regis before making laws. This is the original body from which the Parliament, the higher courts of law, and the Privy Council and Cabinet descend. Of these, the legislature is formally the High Court of Parliament; judges sit in the Supreme Court of Judicature. Only the executive government is no longer conducted in a royal court.
Most historians date the emergence of a parliament with some degree of power to which the throne had to defer no later than the rule of Edward I. Like previous kings, Edward called leading nobles and church leaders to discuss government matters, especially finance and taxation. A meeting in 1295 became known as the Model Parliament because it set the pattern for later Parliaments. The significant difference between the Model Parliament and the earlier Curia Regis was the addition of the Commons; that is, the inclusion of elected representatives of rural landowners and of townsmen. In 1307, Edward I agreed not to collect certain taxes without the "consent of the realm" through parliament. He also enlarged the court system.
The tenants-in-chief often struggled with their spiritual counterparts and with the king for power. In 1215, they secured from King John of England "Magna Carta", which established that the king may not levy or collect any taxes (except the feudal taxes to which they were hitherto accustomed), save with the consent of a council. It was also established that the most important tenants-in-chief and ecclesiastics be summoned to the council by personal writs from the sovereign, and that all others be summoned to the council by general writs from the sheriffs of their counties. Modern government has its origins in the Curia Regis; parliament descends from the Great Council later known as the "parliamentum" established by "Magna Carta".
During the reign of King Henry III, 13th-century English Parliaments incorporated elected representatives from shires and towns. These parliaments are, as such, considered forerunners of the modern parliament.
In 1265, Simon de Montfort, then in rebellion against Henry III, summoned a parliament of his supporters without royal authorization. The archbishops, bishops, abbots, earls, and barons were summoned, as were two knights from each shire and two burgesses from each borough. Knights had been summoned to previous councils, but it was unprecedented for the boroughs to receive any representation. In 1295, Edward I adopted de Montfort's ideas for representation and election in the so-called "Model Parliament". At first, each estate debated independently; by the reign of Edward III, however, Parliament recognisably assumed its modern form, with authorities dividing the legislative body into two separate chambers.
The purpose and structure of Parliament in Tudor England underwent a significant transformation under the reign of Henry VIII. Originally its methods were primarily medieval, and the monarch still possessed a form of inarguable dominion over its decisions. According to Elton, it was Thomas Cromwell, 1st Earl of Essex, then chief minister to Henry VIII, who initiated further changes within Parliament.
The Reformation Acts supplied Parliament with unlimited power over the country. This included authority over virtually every matter, whether social, economic, political, or religious; it legalised the Reformation, officially and indisputably. The king had to rule through the council, not over it, and all sides needed to reach a mutual agreement when creating or passing laws, adjusting or implementing taxes, or changing religious doctrines. This was significant: the monarch no longer had sole control over the country. For instance, during the later years of Mary, Parliament exercised its authority in initially rejecting Mary's bid to revive Catholicism in the realm. Later on, the legislative body even denied Elizabeth her request to marry. If Parliament had possessed this power before Cromwell, such as when Wolsey served as secretary, the Reformation might never have happened, as the king would have had to gain the consent of all parliament members before so drastically changing the country's religious laws and fundamental identity.
The power of Parliament increased considerably after Cromwell's adjustments. It also provided the country with unprecedented stability. More stability, in turn, helped assure more effective management, organisation, and efficiency. Parliament printed statutes and devised a more coherent parliamentary procedure.
The rise of Parliament proved especially important in the sense that it limited the repercussions of dynastic complications that had so often plunged England into civil war. Parliament still ran the country even in the absence of suitable heirs to the throne, and its legitimacy as a decision-making body reduced the royal prerogatives of kings like Henry VIII and the importance of their whims. For example, Henry VIII could not simply establish supremacy by proclamation; he required Parliament to enforce statutes and add felonies and treasons. An important liberty for Parliament was its freedom of speech; Henry allowed anything to be spoken openly within Parliament and speakers could not face arrest – a fact which they exploited incessantly. Nevertheless, Parliament in Henry VIII's time offered up very little objection to the monarch's desires. Under his and Edward's reign, the legislative body complied willingly with the majority of the kings' decisions.
Much of this compliance stemmed from how the English viewed and traditionally understood authority. As Williams described it, "King and parliament were not separate entities, but a single body, of which the monarch was the senior partner and the Lords and the Commons the lesser, but still essential, members."
Although its role in government expanded significantly during the reigns of Henry VIII and Edward VI, the Parliament of England saw some of its most important gains in the 17th century. A series of conflicts between the Crown and Parliament culminated in the execution of King Charles I in 1649. Afterward, England became a commonwealth, with Oliver Cromwell, its lord protector, the de facto ruler. Frustrated with its decisions, Cromwell purged and suspended Parliament on several occasions.
A controversial figure accused of despotism, war crimes, and even genocide, Cromwell is nonetheless regarded as essential to the growth of democracy in England. The years of the Commonwealth, coupled with the restoration of the monarchy in 1660 and the subsequent Glorious Revolution of 1688, helped reinforce and strengthen Parliament as an institution separate from the Crown.
The Parliament of England met until it merged with the Parliament of Scotland under the Acts of Union. This union created the new Parliament of Great Britain in 1707.
From the 10th century the Kingdom of Alba was ruled by chiefs ("toisechs") and subkings ("mormaers") under the suzerainty, real or nominal, of a High King. Popular assemblies, as in Ireland, were involved in law-making, and sometimes in king-making, although the introduction of tanistry—naming a successor in the lifetime of a king—made the second less than common. These early assemblies cannot be considered "parliaments" in the later sense of the word, and were entirely separate from the later, Norman-influenced, institution.
The Parliament of Scotland evolved during the Middle Ages from the King's Council of Bishops and Earls. The unicameral parliament is first found on record, referred to as a "colloquium", in 1235 at Kirkliston (a village now in Edinburgh).
By the early fourteenth century the attendance of knights and freeholders had become important, and from 1326 burgh commissioners attended. Consisting of the Three Estates (clerics, lay tenants-in-chief and burgh commissioners) sitting in a single chamber, the Scottish parliament acquired significant powers over particular issues. Most obviously it was needed to consent to taxation (although taxation was raised only irregularly in Scotland in the medieval period), but it also had a strong influence over justice, foreign policy, war, and all manner of other legislation, whether political, ecclesiastical, social or economic. Parliamentary business was also carried out by "sister" institutions, before c. 1500 by the General Council and thereafter by the Convention of Estates. These could carry out much business also dealt with by Parliament – taxation, legislation and policy-making – but lacked the ultimate authority of a full parliament.
The parliament, which is also referred to as the Estates of Scotland, the Three Estates, the Scots Parliament or the auld Scots Parliament (Eng: "old"), met until the Acts of Union merged the Parliament of Scotland and the Parliament of England, creating the new Parliament of Great Britain in 1707.
Following the 1997 Scottish devolution referendum, and the passing of the Scotland Act 1998 by the Parliament of the United Kingdom, the Scottish Parliament was reconvened on 1 July 1999, although with much more limited powers than its 18th-century predecessor. The parliament has sat since 2004 at its newly constructed Scottish Parliament Building in Edinburgh, situated at the foot of the Royal Mile, next to the royal palace of Holyroodhouse.
A "thing" or "ting" (Old Norse "þing"; modern Scandinavian "ting"; Dutch "ding") was the governing assembly in Germanic societies, made up of the free men of the community and presided over by lawspeakers.
The thing was the assembly of the free men of a country, province or hundred ("hundare"/"härad"/"herred"). There were consequently hierarchies of things, so that the local things were represented at the thing for a larger area, for a province or land. At the thing, disputes were solved and political decisions were made. The place for the thing was often also the place for public religious rites and for commerce.
The thing met at regular intervals, legislated, elected chieftains and kings, and judged according to the law, which was memorised and recited by the "law speaker" (the judge).
The Icelandic, Faroese and Manx parliaments trace their origins back to the Viking expansion originating from the Petty kingdoms of Norway as well as Denmark, replicating Viking government systems in the conquered territories, such as those represented by the Gulating near Bergen in western Norway.
Later national diets with chambers for different estates developed, e.g. in Sweden and in Finland (which was part of Sweden until 1809), each with a House of Knights for the nobility. In both these countries, the national parliaments are now called riksdag (in Finland also "eduskunta"), a word used since the Middle Ages and equivalent of the German word Reichstag.
Today the term lives on in the official names of national legislatures, political and judicial institutions in the North-Germanic countries. In the Yorkshire and former Danelaw areas of England, which were subject to much Norse invasion and settlement, the wapentake was another name for the same institution.
The Sicilian Parliament, dating to 1097, evolved as the legislature of the Kingdom of Sicily.
The Federal Diet of Switzerland was one of the longest-lived representative bodies in history, continuing from the 13th century to 1848.
Originally, there was only the Parliament of Paris, born out of the Curia Regis in 1307, and located inside the medieval royal palace, now the Paris Hall of Justice. The jurisdiction of the "Parliament" of Paris covered the entire kingdom. In the thirteenth century, judicial functions were added. In 1443, following the turmoil of the Hundred Years' War, King Charles VII of France granted Languedoc its own "parliament" by establishing the "Parliament" of Toulouse, the first "parliament" outside of Paris, whose jurisdiction extended over the most part of southern France. From 1443 until the French Revolution several other "parliaments" were created in some provinces of France (Grenoble, Bordeaux).
All the "parliaments" could issue regulatory decrees for the application of royal edicts or of customary practices; they could also refuse to register laws that they judged contrary to fundamental law or simply as being untimely. Parliamentary power in France was suppressed more so than in England as a result of absolutism, and parliaments were eventually overshadowed by the larger Estates General, up until the French Revolution, when the National Assembly became the lower house of France's bicameral legislature.
According to the "Chronicles" of Gallus Anonymus, the first legendary Polish ruler, Siemowit, who began the Piast Dynasty, was chosen by a "wiec". The "veche" was a popular assembly in medieval Slavic countries and, in the late medieval period, a parliament. The idea of the "wiec" led in 1182 to the development of the Polish parliament, the "Sejm".
The term "sejm" comes from an old Polish expression denoting a meeting of the populace. The power of early sejms grew between 1146–1295, when the power of individual rulers waned and various councils and wiece grew stronger. The history of the national Sejm dates back to 1182. Since the 14th century irregular sejms (described in various Latin sources as "contentio generalis, conventio magna, conventio solemna, parlamentum, parlamentum generale, dieta" or Polish "sejm walny") have been called by Polish kings. From 1374, the king had to receive sejm permission to raise taxes. The General Sejm (Polish "Sejm Generalny" or "Sejm Walny"), first convoked by the king John I Olbracht in 1493 near Piotrków, evolved from earlier regional and provincial meetings ("sejmiks"). It followed most closely the "sejmik generalny", which arose from the 1454 Nieszawa Statutes, granted to the szlachta (nobles) by King Casimir IV the Jagiellonian. From 1493 forward, indirect elections were repeated every two years. With the development of the unique Polish Golden Liberty the Sejm's powers increased.
The Commonwealth's general parliament consisted of three estates: the King of Poland (who also acted as the Grand Duke of Lithuania, Russia/Ruthenia, Prussia, Mazovia, etc.), the Senat (consisting of Ministers, Palatines, Castellans and Bishops) and the Chamber of Envoys—circa 170 nobles (szlachta) acting on behalf of their Lands and sent by Land Parliaments. Representatives of selected cities also attended, but without any voting powers. From 1573, at a royal election, all peers of the Commonwealth could participate in the Parliament and become the King's electors.
Cossack Rada was the legislative body of a military republic of the Ukrainian Cossacks that grew rapidly in the 15th century from serfs fleeing the more controlled parts of the Polish-Lithuanian Commonwealth. The republic did not take account of social origin or nobility and accepted all people who declared themselves Orthodox Christians.
Originally established at the Zaporizhian Sich, the rada (council) was an institution of Cossack administration in Ukraine from the 16th to the 18th century. With the establishment of the Hetman state in 1648, it was officially known as the General Military Council until 1750.
The zemsky sobor (Russian: зе́мский собо́р) was the first Russian parliament of the feudal Estates type, in the 16th and 17th centuries. The term roughly means assembly of the land.
It could be summoned by the tsar, the patriarch, or the Boyar Duma. Three categories of the population, comparable to the Estates-General of France but with the numbering of the first two Estates reversed, participated in the assembly:
Nobility and high bureaucracy, including the Boyar Duma
The Holy Sobor of high Orthodox clergy
Representatives of merchants and townspeople (third estate)
The parliament of today's Russian Federation is the Federal Assembly of Russia. The term for its lower house, the State Duma (which is better known than the Federal Assembly itself, and is often mistaken for the entirety of the parliament), comes from the Russian word "думать" ("dumat"), "to think". The Boyar Duma was an advisory council to the grand princes and tsars of Muscovy. The Duma was discontinued by Peter the Great, who transferred its functions to the Governing Senate in 1711.
The "veche" was the highest legislature and judicial authority in the republic of Novgorod until 1478. In its sister state, Pskov, a separate veche operated until 1510.
Since the Novgorod revolution of 1137, which ousted the ruling grand prince, the veche was the supreme state authority. After the reforms of 1410, the veche was restructured on a model similar to that of Venice, becoming the Commons chamber of the parliament. An upper Senate-like Council of Lords was also created, with title membership for all former city magistrates. Some sources indicate that veche membership may have become full-time, and parliament deputies were now called "vechniks". It is recounted that the Novgorod assembly could be summoned by anyone who rang the veche bell, although it is more likely that the common procedure was more complex. This bell was a symbol of republican sovereignty and independence. The whole population of the city—boyars, merchants, and common citizens—then gathered at Yaroslav's Court. Separate assemblies could be held in the districts of Novgorod. In Pskov the veche assembled in the court of the Trinity cathedral.
"Conciliarism" or the "conciliar movement", was a reform movement in the 14th and 15th century Roman Catholic Church which held that final authority in spiritual matters resided with the Roman Church as corporation of Christians, embodied by a general church council, not with the pope. In effect, the movement sought – ultimately, in vain – to create an All-Catholic Parliament. Its struggle with the Papacy had many points in common with the struggle of parliaments in specific countries against the authority of Kings and other secular rulers.
The development of the modern concept of parliamentary government dates back to the Kingdom of Great Britain (1707–1800) and the parliamentary system in Sweden during the Age of Liberty (1718–1772).
The British Parliament is often referred to as the "Mother of Parliaments" (in fact a misquotation of John Bright, who remarked in 1865 that "England is the Mother of Parliaments") because the British Parliament has been the model for most other parliamentary systems, and its Acts have created many other parliaments. Many nations with parliaments have to some degree emulated the British "three-tier" model. Most countries in Europe and the Commonwealth have similarly organised parliaments with a largely ceremonial head of state who formally opens and closes parliament, a large elected lower house and a smaller, upper house.
The Parliament of Great Britain was formed in 1707 by the Acts of Union that replaced the former parliaments of England and Scotland. A further union in 1801 united the Parliament of Great Britain and the Parliament of Ireland into a Parliament of the United Kingdom.
In the United Kingdom, Parliament consists of the House of Commons, the House of Lords, and the Monarch. The House of Commons is composed of 650 (soon to be 600) members who are directly elected by British citizens to represent single-member constituencies. The leader of the party that wins more than half the seats, or that wins less than half but can gain the support of smaller parties to achieve a majority in the house, is invited by the Monarch to form a government. The House of Lords is a body of long-serving, unelected members: Lords Temporal – 92 of whom inherit their titles (and of whom 90 are elected internally by members of the House to lifetime seats), 588 of whom have been appointed to lifetime seats – and Lords Spiritual – 26 bishops, who are part of the house while they remain in office.
Legislation can originate in either the Lords or the Commons. It is voted on in several distinct stages, called readings, in each house. First reading is merely a formality. Second reading is where the bill as a whole is considered. Third reading is a final review of the bill in its amended form.
In addition to the three readings a bill also goes through a committee stage where it is considered in great detail. Once the bill has been passed by one house it goes to the other and essentially repeats the process. If after the two sets of readings there are disagreements between the versions that the two houses passed it is returned to the first house for consideration of the amendments made by the second. If it passes through the amendment stage Royal Assent is granted and the bill becomes law as an Act of Parliament.
The House of Lords is the less powerful of the two houses as a result of the Parliament Acts 1911 and 1949. These Acts removed the veto power of the Lords over a great deal of legislation. If a bill is certified by the Speaker of the House of Commons as a money bill (i.e. acts raising taxes and similar) then the Lords can only block it for a month. If an ordinary bill originates in the Commons the Lords can only block it for a maximum of one session of Parliament. The exceptions to this rule are things like bills to prolong the life of a Parliament beyond five years.
In addition to functioning as the second chamber of Parliament, the House of Lords was also the final court of appeal for much of the law of the United Kingdom—a combination of judicial and legislative function that recalls its origin in the Curia Regis. This changed in October 2009 when the Supreme Court of the United Kingdom opened and acquired the former jurisdiction of the House of Lords.
Since 1999, there has been a Scottish Parliament in Edinburgh, and, since 2020, a Welsh Parliament—or Senedd—in Cardiff. However, these national, unicameral legislatures do not have complete power over their respective countries of the United Kingdom, holding only those powers devolved to them by Westminster from 1997. They cannot legislate on defence issues, currency, or national taxation (e.g. VAT, or Income Tax). Additionally, the bodies can be dissolved, at any given time, by the British Parliament without the consent of the devolved government.
In Sweden, the half-century period of parliamentary government beginning with Charles XII's death in 1718 and ending with Gustav III's self-coup in 1772 is known as the Age of Liberty. During this period, civil rights were expanded and power shifted from the monarch to parliament.
While suffrage did not become universal, the taxed peasantry was represented in Parliament, although with little influence; commoners without taxed property had no suffrage at all.
Many parliaments are part of a parliamentary system of government, in which the executive is constitutionally answerable to the parliament. Some restrict the use of the word "parliament" to parliamentary systems, while others use the word for any elected legislative body. Parliaments usually consist of "chambers" or "houses", and are usually either bicameral or unicameral, although more complex models exist, or have existed (see Tricameralism).
In some parliamentary systems, the prime minister is a member of the parliament (e.g. in the United Kingdom), whereas in others they are not (e.g. in the Netherlands). They are commonly the leader of the majority party in the lower house of parliament, but only hold the office as long as the "confidence of the house" is maintained. If members of the lower house lose faith in the leader for whatever reason, they can call a vote of no confidence and force the prime minister to resign.
This can be particularly dangerous to a government when the distribution of seats among different parties is relatively even, in which case a new election is often called shortly thereafter. However, in case of general discontent with the head of government, their replacement can be made very smoothly without all the complications that it represents in the case of a presidential system.
The parliamentary system can be contrasted with a presidential system, such as the American congressional system, which operates under a stricter separation of powers, whereby the executive does not form part of, nor is it appointed by, the parliamentary or legislative body. In such a system, congresses do not select or dismiss heads of governments, and governments cannot request an early dissolution as may be the case for parliaments. Some states, such as France, have a semi-presidential system which falls between parliamentary and congressional systems, combining a powerful head of state (president) with a head of government, the prime minister, who is responsible to parliament.
Australia's States and territories:
In the federal (bicameral) kingdom of Belgium, there is a curious asymmetrical constellation of directly elected legislatures for three "territorial" "regions"—Flanders (Dutch), Brussels (bilingual, certain peculiarities of competence, also the only region not comprising any of the 10 provinces) and Wallonia (French)—and three cultural "communities"—Flemish (Dutch, competent in Flanders and for the Dutch-speaking inhabitants of Brussels), Francophone (French, for Wallonia and for Francophones in Brussels) and German (for speakers of that language in a few designated municipalities in the east of the Walloon Region, living alongside Francophones but under two different regimes):
Canada's provinces and territories:
Indian States and Territories Legislative assemblies:
Indian States Legislative councils:
Polar bear
The polar bear ("Ursus maritimus") is a hypercarnivorous bear whose native range lies largely within the Arctic Circle, encompassing the Arctic Ocean, its surrounding seas and surrounding land masses. It is a large bear, approximately the same size as the omnivorous Kodiak bear ("Ursus arctos middendorffi"). A boar (adult male) weighs around , while a sow (adult female) is about half that size. Polar bears are the largest land carnivores currently in existence, rivaled only by the Kodiak bear. Although it is the sister species of the brown bear, it has evolved to occupy a narrower ecological niche, with many body characteristics adapted for cold temperatures, for moving across snow, ice and open water, and for hunting seals, which make up most of its diet. Although most polar bears are born on land, they spend most of their time on the sea ice. Their scientific name means "maritime bear" and derives from this fact. Polar bears hunt their preferred food of seals from the edge of sea ice, often living off fat reserves when no sea ice is present. Because of their dependence on the sea ice, polar bears are classified as marine mammals.
Because of expected habitat loss caused by climate change, the polar bear is classified as a vulnerable species. For decades, large-scale hunting raised international concern for the future of the species, but populations rebounded after controls and quotas began to take effect. For thousands of years, the polar bear has been a key figure in the material, spiritual, and cultural life of circumpolar peoples, and polar bears remain important in their cultures. Historically, the polar bear has also been known as the "white bear". It is sometimes referred to as the "nanook", based on the Inuit term "nanuq".
Constantine John Phipps was the first to describe the polar bear as a distinct species in 1774. He chose the scientific name "Ursus maritimus", the Latin for 'maritime bear', due to the animal's native habitat. The Inuit refer to the animal as "nanook" (transliterated as "nanuq" in the Inupiat language). The Yupik also refer to the bear as "nanuuk" in Siberian Yupik. The bear is "umka" in the Chukchi language. In Russian, it is usually called бе́лый медве́дь ("bélyj medvédj", the white bear), though an older word still in use is ошку́й ("Oshkúj", which comes from the Komi "oski", "bear"). In Quebec, the polar bear is referred to as "ours blanc" ("white bear") or "ours polaire" ("polar bear"). In the Norwegian-administered Svalbard archipelago, the polar bear is referred to as "Isbjørn" ("ice bear").
The polar bear was previously considered to be in its own genus, "Thalarctos". However, evidence of hybrids between polar bears and brown bears, and of the recent evolutionary divergence of the two species, does not support the establishment of this separate genus, and the accepted scientific name is now therefore "Ursus maritimus", as Phipps originally proposed.
The bear family, Ursidae, is thought to have split from other carnivorans about 38 million years ago. The subfamily Ursinae originated approximately 4.2 million years ago. The oldest known polar bear fossil is a 130,000 to 110,000-year-old jaw bone, found on Prince Charles Foreland in 2004. Fossils show that between 10,000 and 20,000 years ago, the polar bear's molar teeth changed significantly from those of the brown bear. Polar bears are thought to have diverged from a population of brown bears that became isolated during a period of glaciation in the Pleistocene from the eastern part of Siberia (from Kamchatka and the Kolyma Peninsula).
The evidence from DNA analysis is more complex. The mitochondrial DNA (mtDNA) of the polar bear diverged from the brown bear, "Ursus arctos", roughly 150,000 years ago. Further, some clades of brown bear, as assessed by their mtDNA, were thought to be more closely related to polar bears than to other brown bears, meaning that the brown bear might not be considered a species under some species concepts, but paraphyletic. The mtDNA of extinct Irish brown bears is particularly close to polar bears. A comparison of the nuclear genome of polar bears with that of brown bears revealed a different pattern, the two forming genetically distinct clades that diverged approximately 603,000 years ago, although the latest research is based on analysis of the complete genomes (rather than just the mitochondria or partial nuclear genomes) of polar and brown bears, and establishes the divergence of polar and brown bears at 400,000 years ago.
However, the two species have mated intermittently for all that time, most likely coming into contact with each other during warming periods, when polar bears were driven onto land and brown bears migrated northward. Most brown bears have about 2 percent genetic material from polar bears, but one population, the ABC Islands bears, has between 5 percent and 10 percent polar bear genes, indicating more frequent and recent mating. Polar bears can breed with brown bears to produce fertile grizzly–polar bear hybrids; rather than indicating that they have only recently diverged, the new evidence suggests more frequent mating has continued over a longer period of time, and thus the two bears remain genetically similar. However, because neither species can survive long in the other's ecological niche, and because they have different morphology, metabolism, social and feeding behaviours, and other phenotypic characteristics, the two bears are generally classified as separate species.
When the polar bear was originally documented, two subspecies were identified: the American polar bear ("Ursus maritimus maritimus") by Constantine J. Phipps in 1774, and the Siberian polar bear ("Ursus maritimus marinus") by Peter Simon Pallas in 1776. This distinction has since been invalidated. One alleged fossil subspecies has been identified: "Ursus maritimus tyrannus", which became extinct during the Pleistocene. "U.m. tyrannus" was significantly larger than the living subspecies. However, recent reanalysis of the fossil suggests that it was actually a brown bear.
The polar bear is found in the Arctic Circle and adjacent land masses as far south as Newfoundland. Due to the absence of human development in its remote habitat, it retains more of its original range than any other extant carnivore. While they are rare north of 88°, there is evidence that they range all the way across the Arctic, and as far south as James Bay in Canada. Their southernmost range is near the boundary between the subarctic and humid continental climate zones. They can occasionally drift widely with the sea ice, and there have been anecdotal sightings as far south as Berlevåg on the Norwegian mainland and the Kuril Islands in the Sea of Okhotsk. It is difficult to estimate a global population of polar bears as much of the range has been poorly studied; however, biologists use a working estimate of about 20,000–25,000 or 22,000–31,000 polar bears worldwide.
There are 19 generally recognized, discrete subpopulations, though polar bears are thought to exist only in low densities in the area of the Arctic Basin. The subpopulations display seasonal fidelity to particular areas, but DNA studies show that they are not reproductively isolated. The 13 North American subpopulations range from the Beaufort Sea south to Hudson Bay and east to Baffin Bay in western Greenland and account for about 54% of the global population.
The range includes the territory of five nations: Denmark (Greenland), Norway (Svalbard), Russia, the United States (Alaska) and Canada. These five nations are the signatories of the International Agreement on the Conservation of Polar Bears, which mandates cooperation on research and conservation efforts throughout the polar bear's range. Bears sometimes swim to Iceland from Greenland—about 600 sightings since the country's settlement in the 9th century AD, and five in the 21st century—and are always killed because of their danger, and the cost and difficulty of repatriation.
Modern methods of tracking polar bear populations have been implemented only since the mid-1980s, and are expensive to perform consistently over a large area. The most accurate counts require flying a helicopter in the Arctic climate to find polar bears, shooting a tranquilizer dart at the bear to sedate it, and then tagging the bear. In Nunavut, some Inuit have reported increases in bear sightings around human settlements in recent years, leading to a belief that populations are increasing. Scientists have responded by noting that hungry bears may be congregating around human settlements, leading to the illusion that populations are higher than they actually are. The Polar Bear Specialist Group of the IUCN Species Survival Commission takes the position that "estimates of subpopulation size or sustainable harvest levels should not be made solely on the basis of traditional ecological knowledge without supporting scientific studies."
Of the 19 recognized polar bear subpopulations, one is in decline, two are increasing, seven are stable, and nine have insufficient data, as of 2017.
The polar bear is a marine mammal because it spends many months of the year at sea. However, it is the only living marine mammal with powerful, large limbs and feet that allow it to cover kilometres on foot and run on land. Its preferred habitat is the annual sea ice covering the waters over the continental shelf and the Arctic inter-island archipelagos. These areas, known as the "Arctic ring of life", have high biological productivity in comparison to the deep waters of the high Arctic. The polar bear tends to frequent areas where sea ice meets water, such as polynyas and leads (temporary stretches of open water in Arctic ice), to hunt the seals that make up most of its diet. Freshwater is limited in these environments, because available water is either locked up in snow or is saline. Polar bears are able to produce water through the metabolism of fats found in seal blubber, and are therefore found primarily along the perimeter of the polar ice pack, rather than in the Polar Basin close to the North Pole where the density of seals is low.
Annual ice contains areas of water that appear and disappear throughout the year as the weather changes. Seals migrate in response to these changes, and polar bears must follow their prey. In Hudson Bay, James Bay, and some other areas, the ice melts completely each summer (an event often referred to as "ice-floe breakup"), forcing polar bears to go onto land and wait through the months until the next freeze-up. In the Chukchi and Beaufort seas, polar bears retreat each summer to the ice further north that remains frozen year-round.
The only other bear similar in size to the polar bear is the Kodiak bear, which is a subspecies of brown bear. Adult male polar bears weigh and measure in total length. Around the Beaufort Sea, however, mature males reportedly average . Adult females are roughly half the size of males and normally weigh , measuring in length. Elsewhere, a slightly larger estimated average weight of was claimed for adult females. When pregnant, however, females can weigh as much as . The polar bear is among the most sexually dimorphic of mammals, surpassed only by the pinnipeds such as elephant seals. The largest polar bear on record, reportedly weighing , was a male shot at Kotzebue Sound in northwestern Alaska in 1960. This specimen, when mounted, stood tall on its hindlegs. The shoulder height of an adult polar bear is . While all bears are short-tailed, the polar bear's tail is relatively the shortest amongst living bears, ranging from in length.
Compared with its closest relative, the brown bear, the polar bear has a more elongated body build and a longer skull and nose. As predicted by Allen's rule for a northerly animal, the legs are stocky and the ears and tail are small. However, the feet are very large to distribute load when walking on snow or thin ice and to provide propulsion when swimming; they may measure across in an adult. The pads of the paws are covered with small, soft papillae (dermal bumps), which provide traction on the ice. The polar bear's claws are short and stocky compared to those of the brown bear, perhaps to serve the former's need to grip heavy prey and ice. The claws are deeply scooped on the underside to assist in digging in the ice of the natural habitat. Research of injury patterns in polar bear forelimbs found injuries to the right forelimb to be more frequent than those to the left, suggesting, perhaps, right-handedness. Unlike the brown bear, polar bears in captivity are rarely overweight or particularly large, possibly as a reaction to the warm conditions of most zoos.
The 42 teeth of a polar bear reflect its highly carnivorous diet. The cheek teeth are smaller and more jagged than in the brown bear, and the canines are larger and sharper. The dental formula is .
Polar bears are superbly insulated by up to of adipose tissue, their hide and their fur; they overheat at temperatures above , and are nearly invisible under infrared photography. Polar bear fur consists of a layer of dense underfur and an outer layer of guard hairs, which appear white to tan but are actually transparent. Two genes that are known to influence melanin production, LYST and AIM1, are both mutated in polar bears, possibly leading to the absence of this pigment in their fur. The guard hair is over most of the body. Polar bears gradually moult from May to August, but, unlike other Arctic mammals, they do not shed their coat for a darker shade to provide camouflage in summer conditions. The hollow guard hairs of a polar bear coat were once thought to act as fiber-optic tubes to conduct light to its black skin, where it could be absorbed; however, this hypothesis was disproved by a study in 1998.
The white coat usually yellows with age. When kept in captivity in warm, humid conditions, the fur may turn a pale shade of green due to algae growing inside the guard hairs. Males have significantly longer hairs on their forelegs, which increase in length until the bear reaches 14 years of age. The male's ornamental foreleg hair is thought to attract females, serving a similar function to the lion's mane.
The polar bear has an extremely well developed sense of smell, being able to detect seals nearly away and buried under of snow. Its hearing is about as acute as that of a human, and its vision is also good at long distances.
The polar bear is an excellent swimmer and often will swim for days. One bear swam continuously for 9 days in the frigid Bering Sea for to reach ice far from land. She then travelled another . During the swim, the bear lost 22% of her body mass and her yearling cub died. With its body fat providing buoyancy, the bear swims in a dog paddle fashion using its large forepaws for propulsion. Polar bears can swim . When walking, the polar bear tends to have a lumbering gait and maintains an average speed of around . When sprinting, they can reach up to .
Unlike brown bears, polar bears are not territorial. Although stereotyped as being voraciously aggressive, they are normally cautious in confrontations, and often choose to escape rather than fight. Satiated polar bears rarely attack humans unless severely provoked. However, due to their lack of prior human interaction, hungry polar bears are extremely unpredictable, fearless towards people and are known to kill and sometimes eat humans. Many attacks by brown bears are the result of surprising the animal, which is not the case with the polar bear. Polar bears are stealth hunters, and the victim is often unaware of the bear's presence until the attack is underway. Whereas brown bears often maul a person and then leave, polar bear attacks are more likely to be predatory and are almost always fatal. However, due to the very small human population around the Arctic, such attacks are rare. Michio Hoshino, a Japanese wildlife photographer, was once pursued briefly by a hungry male polar bear in northern Alaska. According to Hoshino, the bear started chasing him, but he made it to his truck; the bear reached the truck and tore off one of its doors before Hoshino was able to drive off.
In general, adult polar bears live solitary lives. Yet, they have often been seen playing together for hours at a time and even sleeping in an embrace, and polar bear zoologist Nikita Ovsianikov has described adult males as having "well-developed friendships." Cubs are especially playful as well. Among young males in particular, play-fighting may be a means of practicing for serious competition during mating seasons later in life. Polar bears are usually quiet but do communicate with various sounds and vocalizations. Females communicate with their young with moans and chuffs, and the distress calls of both cubs and subadults consist of bleats. Cubs may hum while nursing. When nervous, bears produce huffs, chuffs and snorts, while hisses, growls and roars are signs of aggression. Chemical communication can also be important: bears leave behind their scent in their tracks, which allows individuals to keep track of one another in the vast Arctic wilderness.
In 1992, a photographer near Churchill took a now widely circulated set of photographs of a polar bear playing with a Canadian Eskimo Dog ("Canis lupus familiaris") a tenth of its size. The pair wrestled harmlessly together each afternoon for 10 days in a row for no apparent reason, although the bear may have been trying to demonstrate its friendliness in the hope of sharing the kennel's food. This kind of social interaction is uncommon; it is far more typical for polar bears to behave aggressively towards dogs.
The polar bear is the most carnivorous member of the bear family, and throughout most of its range, its diet primarily consists of ringed ("Pusa hispida") and bearded seals ("Erignathus barbatus"). The Arctic is home to millions of seals, which become prey when they surface in holes in the ice in order to breathe, or when they haul out on the ice to rest. Polar bears hunt primarily at the interface between ice, water, and air; they only rarely catch seals on land or in open water.
The polar bear's most common hunting method is called "still-hunting": the bear uses its excellent sense of smell to locate a seal breathing hole, and crouches nearby in silence for a seal to appear. The bear may lie in wait for several hours. When the seal exhales, the bear smells its breath, reaches into the hole with a forepaw, and drags it out onto the ice. The polar bear kills the seal by biting its head to crush its skull. The polar bear also hunts by stalking seals resting on the ice: upon spotting a seal, it walks to within , and then crouches. If the seal does not notice, the bear creeps to within of the seal and then suddenly rushes forth to attack. A third hunting method is to raid the birth lairs that female seals create in the snow.
A widespread legend tells that polar bears cover their black noses with their paws when hunting. This behaviour, if it happens, is rare – although the story exists in the oral history of northern peoples and in accounts by early Arctic explorers, there is no record of an eyewitness account of the behaviour in recent decades.
Mature bears tend to eat only the calorie-rich skin and blubber of the seal, which are highly digestible, whereas younger bears consume the protein-rich red meat. Studies have also photographed polar bears scaling near-vertical cliffs, to eat birds' chicks and eggs. For subadult bears, which are independent of their mother but have not yet gained enough experience and body size to successfully hunt seals, scavenging the carcasses from other bears' kills is an important source of nutrition. Subadults may also be forced to accept a half-eaten carcass if they kill a seal but cannot defend it from larger polar bears. After feeding, polar bears wash themselves with water or snow.
Although polar bears are extraordinarily powerful, their primary prey species, the ringed seal, is much smaller than they are, and many of the seals hunted are pups rather than adults. Ringed seals are born weighing and grow to an estimated average weight of only . In places, they also prey heavily upon the harp seal ("Pagophilus groenlandicus") or the harbour seal. The bearded seal, on the other hand, can be nearly the same size as the bear itself, averaging . Adult male bearded seals, at , are too large for a female bear to overtake, and so are potential prey only for mature male bears. Large males also occasionally attempt to hunt and kill even larger prey. A polar bear can kill an adult walrus ("Odobenus rosmarus"), although this is rarely attempted. At up to and a typical adult mass range of , a walrus can be more than twice the bear's weight, has extremely thick skin and has up to -long ivory tusks that can be used as formidable weapons. A polar bear may charge a group of walruses, with the goal of separating a young, infirm, or injured walrus from the pod. They will even attack adult walruses when their diving holes have frozen over, or intercept them before they can get back to the diving hole in the ice. Yet, polar bears will very seldom attack full-grown adult walruses, with the largest male walrus probably invulnerable unless otherwise injured or incapacitated. Since an attack on a walrus tends to be an extremely protracted and exhausting venture, bears have been known to back down from the attack after making the initial injury to the walrus. Polar bears have also been seen to prey on beluga whales ("Delphinapterus leucas") and narwhals ("Monodon monoceros"), by swiping at them at breathing holes. The whales are of similar size to the walrus and nearly as difficult for the bear to subdue. Most terrestrial animals in the Arctic can outrun the polar bear on land as polar bears overheat quickly, and most marine animals the bear encounters can outswim it.
In some areas, the polar bear's diet is supplemented by walrus calves and by the carcasses of dead adult walruses or whales, whose blubber is readily devoured even when rotten. Polar bears sometimes swim underwater to catch fish like the Arctic charr or the fourhorn sculpin.
With the exception of pregnant females, polar bears are active year-round, although they have a vestigial hibernation induction trigger in their blood. Unlike brown and black bears, polar bears are capable of fasting for up to several months during late summer and early fall, when they cannot hunt for seals because the sea is unfrozen. When sea ice is unavailable during summer and early autumn, some populations live off fat reserves for months at a time, as polar bears do not 'hibernate' any time of the year.
Being both curious animals and scavengers, polar bears investigate and consume garbage where they come into contact with humans. Polar bears may attempt to consume almost anything they can find, including hazardous substances such as styrofoam, plastic, car batteries, ethylene glycol, hydraulic fluid, and motor oil. The dump in Churchill, Manitoba was closed in 2006 to protect bears, and waste is now recycled or transported to Thompson, Manitoba.
Although seal predation is the primary and an indispensable way of life for most polar bears, when alternatives are present they are quite flexible. Polar bears consume a wide variety of other wild foods, including muskox ("Ovibos moschatus"), reindeer ("Rangifer tarandus"), birds, eggs, rodents, crabs, other crustaceans and other polar bears. They may also eat plants, including berries, roots, and kelp; however, none of these have been a significant part of their diet, except for beachcast marine mammal carcasses. Given the change in climate, with ice breaking up in areas such as the Hudson Bay earlier than it used to, polar bears are exploiting food resources such as snow geese and eggs, and plants such as lyme grass in increased quantities.
When stalking land animals, such as muskox, reindeer, and even willow ptarmigan ("Lagopus lagopus"), polar bears appear to make use of vegetative cover and wind direction to bring them as close to their prey as possible before attacking. Polar bears have been observed to hunt the small Svalbard reindeer ("R. t. platyrhynchus"), which weigh only as adults, as well as the barren-ground caribou ("R. t. groenlandicus"), which is about twice that weight. Adult muskox, which can weigh or more, are a more formidable quarry. Although ungulates are not typical prey, the killing of one during the summer months can greatly increase the odds of survival during that lean period. As with the brown bear, most ungulate prey of polar bears is likely to be young, sickly or injured specimens rather than healthy adults. The polar bear's metabolism is specialized to require large amounts of fat from marine mammals, and it cannot derive sufficient caloric intake from terrestrial food.
In their southern range, especially near Hudson Bay and James Bay, Canadian polar bears endure all summer without sea ice to hunt from. Here, their food ecology shows their dietary flexibility. They still manage to consume some seals, but they are food-deprived in summer, as marine mammal carcasses, especially those of the beluga whale, are the only important alternative food source without sea ice. These alternatives may reduce the rate of weight loss of bears when on land. One scientist found that 71% of the Hudson Bay bears had fed on seaweed (marine algae) and that about half were feeding on birds such as the dovekie and sea ducks, especially the long-tailed duck (53%) and common eider, by swimming underwater to catch them. They were also diving to feed on blue mussels and other underwater food sources like the green sea urchin. 24% had eaten moss recently, 19% had consumed grass, 34% had eaten black crowberry and about half had consumed willows. This study illustrates the polar bear's dietary flexibility but it does not represent its life history elsewhere. Most polar bears elsewhere will never have access to these alternatives, except for the marine mammal carcasses that are important wherever they occur.
In Svalbard, polar bears were observed to kill white-beaked dolphins during spring, when the dolphins were trapped in the sea ice. The bears then proceeded to cache the carcasses, which remained and were eaten during the ice-free summer and autumn.
Courtship and mating take place on the sea ice in April and May, when polar bears congregate in the best seal hunting areas. A male may follow the tracks of a breeding female for or more, and after finding her engage in intense fighting with other males over mating rights, fights that often result in scars and broken teeth. Polar bears have a generally polygynous mating system; recent genetic testing of mothers and cubs, however, has uncovered cases of litters in which cubs have different fathers. Partners stay together and mate repeatedly for an entire week; the mating ritual induces ovulation in the female.
After mating, the fertilized egg remains in a suspended state until August or September. During these four months, the pregnant female eats prodigious amounts of food, gaining at least and often more than doubling her body weight.
When the ice floes are at their minimum in the fall, ending the possibility of hunting, each pregnant female digs a "maternity den" consisting of a narrow entrance tunnel leading to one to three chambers. Most maternity dens are in snowdrifts, but may also be made underground in permafrost if it is not sufficiently cold yet for snow. In most subpopulations, maternity dens are situated on land a few kilometres from the coast, and the individuals in a subpopulation tend to reuse the same denning areas each year. The polar bears that do not den on land make their dens on the sea ice. In the den, she enters a dormant state similar to hibernation. This hibernation-like state does not consist of continuous sleeping; however, the bear's heart rate slows from 46 to 27 beats per minute. Her body temperature does not decrease during this period as it would for a typical mammal in hibernation.
Between November and February, cubs are born blind, covered with a light down fur, and weighing less than , though in captivity cubs may be born earlier in the season. The earliest recorded birth of polar bears in captivity was on 11 October 2011 in the Toronto Zoo. On average, each litter has two cubs. The family remains in the den until mid-February to mid-April, with the mother maintaining her fast while nursing her cubs on a fat-rich milk. By the time the mother breaks open the entrance to the den, her cubs weigh about . For about 12 to 15 days, the family spends time outside the den while remaining in its vicinity, the mother grazing on vegetation while the cubs become used to walking and playing. Then they begin the long walk from the denning area to the sea ice, where the mother can once again catch seals. Depending on the timing of ice-floe breakup in the fall, she may have fasted for up to eight months. During this time, cubs playfully imitate the mother's hunting methods in preparation for later life.
Female polar bears are noted for both their affection towards their offspring and their valor in protecting them. Multiple cases of adoption of wild cubs have been confirmed by genetic testing. Adult bears of either sex occasionally kill and eat polar bear cubs. As of 2006, in Alaska, 42% of cubs were reaching 12 months of age, down from 65% in 1991. In most areas, cubs are weaned at two and a half years of age, when the mother chases them away or abandons them. The Western Hudson Bay subpopulation is unusual in that its female polar bears sometimes wean their cubs at only one and a half years. This was the case for 40% of cubs there in the early 1980s; however, by the 1990s, fewer than 20% of cubs were weaned this young. After the mother leaves, sibling cubs sometimes travel and share food together for weeks or months.
Females begin to breed at the age of four years in most areas, and five years in the area of the Beaufort Sea. Males usually reach sexual maturity at six years; however, as competition for females is fierce, many do not breed until the age of eight or ten. A study in Hudson Bay indicated that both the reproductive success and the maternal weight of females peaked in their mid-teens. Maternal success appeared to decline after this point, possibly because of an age-related impairment in the ability to store the fat necessary to rear cubs.
Polar bears appear to be less affected by infectious diseases and parasites than most terrestrial mammals. Polar bears are especially susceptible to "Trichinella", a parasitic roundworm they contract through cannibalism, although infections are usually not fatal. Only one case of a polar bear with rabies has been documented, even though polar bears frequently interact with Arctic foxes, which often carry rabies. Bacterial leptospirosis and "Morbillivirus" have been recorded. Polar bears sometimes have problems with various skin diseases that may be caused by mites or other parasites.
Polar bears rarely live beyond 25 years. The oldest wild bears on record died at age 32, whereas the oldest captive was a female who died in 1991, age 43. The causes of death in wild adult polar bears are poorly understood, as carcasses are rarely found in the species's frigid habitat. In the wild, old polar bears eventually become too weak to catch food, and gradually starve to death. Polar bears injured in fights or accidents may either die from their injuries, or become unable to hunt effectively, leading to starvation.
The polar bear is the apex predator within its range, and is a keystone species for the Arctic. Several animal species, particularly Arctic foxes ("Vulpes lagopus") and glaucous gulls ("Larus hyperboreus"), routinely scavenge polar bear kills.
The relationship between ringed seals and polar bears is so close that the abundance of ringed seals in some areas appears to regulate the density of polar bears, while polar bear predation in turn regulates density and reproductive success of ringed seals. The evolutionary pressure of polar bear predation on seals probably accounts for some significant differences between Arctic and Antarctic seals. Compared to the Antarctic, where there is no major surface predator, Arctic seals use more breathing holes per individual, appear more restless when hauled out on the ice, and rarely defecate on the ice. The baby fur of most Arctic seal species is white, presumably to provide camouflage from predators, whereas Antarctic seals all have dark fur at birth.
Brown bears tend to dominate polar bears in disputes over carcasses, and dead polar bear cubs have been found in brown bear dens. Wolves are rarely encountered by polar bears, though there are two records of Arctic wolf ("Canis lupus arctos") packs killing polar bear cubs. Adult polar bears are occasionally vulnerable to predation by orcas ("Orcinus orca") while swimming, but they are rarely reported as taken, and bears are likely to avoid entering the water if they detect an orca pod in the area. The melting of Arctic sea ice may be causing an increase of orcas in Arctic waters, which may increase the risk of predation on polar bears but may also benefit the bears by providing more whale carcasses that they can scavenge. The remains of polar bears have been found in the stomachs of large Greenland sharks ("Somniosus microcephalus"), although it cannot be ruled out that the bears were merely scavenged by this slow-moving, unusual shark. An unlikely reported killer of a grown polar bear is the wolverine ("Gulo gulo"), anecdotally said to have suffocated a bear in a zoo with a bite to the throat during a conflict; this report may well be dubious, however. Polar bears are sometimes host to Arctic mites such as "Alaskozetes antarcticus".
Researchers tracked 52 sows in the southern Beaufort Sea off Alaska with GPS system collars; no boars were involved in the study because males' necks are too thick for the GPS-equipped collars. Fifty long-distance swims were recorded, ranging in duration from most of a day to ten days. Ten of the sows had a cub swim with them; after a year, six of those cubs had survived. The study did not determine whether the others lost their cubs before, during, or some time after their long swims. Researchers do not know whether this is a new behaviour; before polar ice shrinkage, they opined that there was probably neither the need nor the opportunity to swim such long distances.
The polar bear may swim underwater for up to three minutes to approach seals on shore or on ice floes.
Polar bears have long provided important raw materials for Arctic peoples, including the Inuit, Yupik, Chukchi, Nenets, Russian Pomors and others. Hunters commonly used teams of dogs to distract the bear, allowing the hunter to spear it or shoot it with arrows at closer range. Almost all parts of captured animals had a use. The fur was used in particular to make trousers and, by the Nenets, to make galoshes-like outer footwear called "tobok"; the meat is edible, despite some risk of trichinosis; the fat was used in food and as a fuel for lighting homes, alongside seal and whale blubber; sinews were used as thread for sewing clothes; the gallbladder and sometimes the heart were dried and powdered for medicinal purposes; the large canine teeth were highly valued as talismans. Only the liver was not used, as its high concentration of vitamin A is poisonous. As a carnivore that feeds largely upon fish-eating carnivores, the polar bear ingests large amounts of vitamin A, which is stored in its liver; the resulting high concentrations cause hypervitaminosis A. Hunters make sure to either toss the liver into the sea or bury it in order to spare their dogs from potential poisoning. Traditional subsistence hunting was on a small enough scale not to significantly affect polar bear populations, mostly because of the sparseness of the human population in polar bear habitat.
In Russia, polar bear furs were already being commercially traded in the 14th century, though they were of low value compared to Arctic fox or even reindeer fur. The growth of the human population in the Eurasian Arctic in the 16th and 17th centuries, together with the advent of firearms and increasing trade, dramatically increased the harvest of polar bears. However, since polar bear fur has always played a marginal commercial role, data on the historical harvest is fragmentary. It is known, for example, that as early as the winter of 1784–1785, Russian Pomors on Spitsbergen harvested 150 polar bears in Magdalenefjorden. In the early 20th century, Norwegian hunters were harvesting 300 bears per year at the same location. Estimates of the total historical harvest suggest that from the beginning of the 18th century, roughly 400 to 500 animals were being harvested annually in northern Eurasia, reaching a peak of 1,300 to 1,500 animals in the early 20th century, and falling off as the numbers began dwindling.
In the first half of the 20th century, mechanized and overpoweringly efficient methods of hunting and trapping came into use in North America as well. Polar bears were chased from snowmobiles, icebreakers, and airplanes, the latter practice described in a 1965 "New York Times" editorial as being "about as sporting as machine gunning a cow." Norwegians used "self-killing guns", comprising a loaded rifle in a baited box that was placed at the level of a bear's head, and which fired when the string attached to the bait was pulled. The numbers taken grew rapidly in the 1960s, peaking around 1968 with a global total of 1,250 bears that year.
Concerns over the future survival of the species led to the development of national regulations on polar bear hunting, beginning in the mid-1950s. The Soviet Union banned all hunting in 1956. Canada began imposing hunting quotas in 1968. Norway passed a series of increasingly strict regulations from 1965 to 1973, and has completely banned hunting since then. The United States began regulating hunting in 1971 and adopted the Marine Mammal Protection Act in 1972. In 1973, the International Agreement on the Conservation of Polar Bears was signed by all five nations whose territory is inhabited by polar bears: Canada, Denmark, Norway, the Soviet Union, and the United States. Member countries agreed to place restrictions on recreational and commercial hunting, ban hunting from aircraft and icebreakers, and conduct further research. The treaty allows hunting "by local people using traditional methods". Norway is the only country of the five in which all harvest of polar bears is banned. The agreement was a rare case of international cooperation during the Cold War. Biologist Ian Stirling commented, "For many years, the conservation of polar bears was the only subject in the entire Arctic that nations from both sides of the Iron Curtain could agree upon sufficiently to sign an agreement. Such was the intensity of human fascination with this magnificent predator, the only marine bear."
Agreements have been made between countries to co-manage their shared polar bear subpopulations. After several years of negotiations, Russia and the United States signed an agreement in October 2000 to jointly set quotas for indigenous subsistence hunting in Alaska and Chukotka. The treaty was ratified in October 2007. In September 2015, the polar bear range states agreed upon a "circumpolar action plan" describing their conservation strategy for polar bears.
Although the United States government has proposed that polar bears be transferred to Appendix I of CITES, which would ban all international trade in polar bear parts, polar bears currently remain listed under Appendix II. This decision was approved by members of the IUCN and TRAFFIC, who determined that such an uplisting was unlikely to confer a conservation benefit.
In Canada, polar bears were designated "Not at Risk" in April 1986 and uplisted to "Special Concern" in April 1991. This status was re-evaluated and confirmed in April 1999, November 2002, and April 2008. Polar bears continue to be listed as a species of special concern in Canada because of their sensitivity to overharvest and because of an expected range contraction caused by loss of Arctic sea ice.
More than 600 bears are killed per year by humans across Canada, a rate calculated by scientists to be unsustainable for some areas, notably Baffin Bay. Canada has allowed sport hunters accompanied by local guides and dog-sled teams since 1970, but the practice was not common until the 1980s. The guiding of sport hunters provides meaningful employment and an important source of income for northern communities in which economic opportunities are few. Sport hunting can bring CDN$20,000 to $35,000 per bear into northern communities, which until recently has been mostly from American hunters.
The territory of Nunavut accounts for 80% of annual kills in Canada. In 2005, the government of Nunavut increased the quota from 400 to 518 bears, despite protests from the IUCN Polar Bear Specialist Group. In two areas where harvest levels have been increased based on increased sightings, science-based studies have indicated declining populations, and a third area is considered data-deficient. While most of that quota is hunted by the indigenous Inuit people, a growing share is sold to recreational hunters (0.8% in the 1970s, 7.1% in the 1980s, and 14.6% in the 1990s). Nunavut polar bear biologist Mitchell Taylor, who was formerly responsible for polar bear conservation in the territory, has insisted that bear numbers are being sustained under current hunting limits. In 2010, the 2005 increase was partially reversed: Government of Nunavut officials announced that the polar bear quota for the Baffin Bay region would be gradually reduced from 105 per year to 65 by the year 2013. The Government of the Northwest Territories maintains its own quota of 72 to 103 bears within the Inuvialuit communities, of which some are set aside for sports hunters. Environment Canada also banned the export from Canada of fur, claws, skulls and other products from polar bears harvested in Baffin Bay as of 1 January 2010.
Because of the way polar bear hunting quotas are managed in Canada, attempts to discourage sport hunting would actually increase the number of bears killed in the short term. Canada allocates a certain number of permits each year to sport and subsistence hunting, and those that are not used for sport hunting are re-allocated to indigenous subsistence hunting. Whereas northern communities kill all the polar bears they are permitted to take each year, only half of sport hunters with permits actually manage to kill a polar bear. If a sport hunter does not kill a polar bear before his or her permit expires, the permit cannot be transferred to another hunter.
In August 2011, Environment Canada published a national polar bear conservation strategy.
In Greenland, hunting restrictions were first introduced in 1994 and expanded by executive order in 2005. Until 2005, Greenland placed no limit on hunting by indigenous people. In 2006, however, it imposed a limit of 150, while also allowing recreational hunting for the first time. Other provisions included year-round protection of cubs and mothers, restrictions on weapons used, and various administrative requirements to catalogue kills.
Polar bears were hunted heavily in Svalbard, Norway, throughout the 19th century and until as recently as 1973, when the conservation treaty was signed. Around 900 bears a year were harvested in the 1920s, and after World War II there were as many as 400–500 harvested annually. Some regulation of hunting did exist: in 1927, poisoning was outlawed, and in 1939, certain denning sites were declared off limits. The killing of females and cubs was made illegal in 1965. Killing of polar bears decreased somewhat in the 25–30 years before the treaty. Despite this, the polar bear population continued to decline, and by 1973 only around 1,000 bears were left in Svalbard. Only with the passage of the treaty did they begin to recover.
The Soviet Union banned the harvest of polar bears in 1956; however, poaching continued and is estimated to pose a serious threat to the polar bear population. In recent years, polar bears have approached coastal villages in Chukotka more frequently due to the shrinking of the sea ice, endangering humans and raising concerns that illegal hunting would become even more prevalent. In 2007, the Russian government made subsistence hunting legal for indigenous Chukotkan peoples only, a move supported by Russia's most prominent bear researchers and the World Wide Fund for Nature as a means to curb poaching.
Polar bears are currently listed as "Rare", of "Uncertain Status", or "Rehabilitated and rehabilitating" in the Red Data Book of Russia, depending on population. In 2010, the Ministry of Natural Resources and Environment published a strategy for polar bear conservation in Russia.
The Marine Mammal Protection Act of 1972 afforded polar bears some protection in the United States. It banned hunting (except by indigenous subsistence hunters), banned importing of polar bear parts (except polar bear pelts taken legally in Canada), and banned the harassment of polar bears. On 15 May 2008, the United States Department of the Interior listed the polar bear as a threatened species under the Endangered Species Act, citing the melting of Arctic sea ice as the primary threat to the polar bear. It banned all importing of polar bear trophies. Importing products made from polar bears had been prohibited from 1972 to 1994 under the Marine Mammal Protection Act, and restricted between 1994 and 2008. Under those restrictions, permits from the United States Fish and Wildlife Service were required to import sport-hunted polar bear trophies taken in hunting expeditions in Canada. The permit process required that the bear be taken from an area with quotas based on sound management principles. Since 1994, hundreds of sport-hunted polar bear trophies have been imported into the U.S. In 2015, the U.S. Fish and Wildlife Service published a draft conservation management plan for polar bears to improve their status under the Endangered Species Act and the Marine Mammal Protection Act.
Polar bear population sizes and trends are difficult to estimate accurately because they occupy remote home ranges and exist at low population densities. Polar bear fieldwork can also be hazardous to researchers. As of 2015, the International Union for Conservation of Nature (IUCN) reports that the global population of polar bears is 22,000 to 31,000, and the current population trend is unknown. Nevertheless, polar bears are listed as "Vulnerable" under criterion A3c, which indicates an expected population decrease of ≥30% over the next three generations (~34.5 years) due to "decline in area of occupancy, extent of occurrence and/or quality of habitat". Risks to the polar bear include climate change, pollution in the form of toxic contaminants, conflicts with shipping, oil and gas exploration and development, and human-bear interactions including harvesting and possible stresses from recreational polar-bear watching.
According to the World Wildlife Fund, the polar bear is important as an indicator of Arctic ecosystem health. Polar bears are studied to gain understanding of what is happening throughout the Arctic, because at-risk polar bears are often a sign of something wrong with the Arctic marine ecosystem.
The International Union for Conservation of Nature, Arctic Climate Impact Assessment, United States Geological Survey and many leading polar bear biologists have expressed grave concerns about the impact of climate change, including the belief that the current warming trend imperils the survival of the polar bear.
The key danger posed by climate change is malnutrition or starvation due to habitat loss. Polar bears hunt seals from a platform of sea ice. Rising temperatures cause the sea ice to melt earlier in the year, driving the bears to shore before they have built sufficient fat reserves to survive the period of scarce food in the late summer and early fall. Reduction in sea-ice cover also forces bears to swim longer distances, which further depletes their energy stores and occasionally leads to drowning. Thinner sea ice tends to deform more easily, which appears to make it more difficult for polar bears to access seals. Insufficient nourishment leads to lower reproductive rates in adult females and lower survival rates in cubs and juvenile bears, in addition to poorer body condition in bears of all ages.
In addition to creating nutritional stress, a warming climate is expected to affect various other aspects of polar bear life: Changes in sea ice affect the ability of pregnant females to build suitable maternity dens. As the distance increases between the pack ice and the coast, females must swim longer distances to reach favoured denning areas on land. Thawing of permafrost would affect the bears who traditionally den underground, and warm winters could result in den roofs collapsing or having reduced insulative value. For the polar bears that currently den on multi-year ice, increased ice mobility may result in longer distances for mothers and young cubs to walk when they return to seal-hunting areas in the spring. Disease-causing bacteria and parasites would flourish more readily in a warmer climate.
Problematic interactions between polar bears and humans, such as foraging by bears in garbage dumps, have historically been more prevalent in years when ice-floe breakup occurred early and local polar bears were relatively thin. Increased human-bear interactions, including fatal attacks on humans, are likely to increase as the sea ice shrinks and hungry bears try to find food on land.
The effects of climate change are most profound in the southern part of the polar bear's range, and this is indeed where significant degradation of local populations has been observed. The Western Hudson Bay subpopulation, in a southern part of the range, also happens to be one of the best-studied polar bear subpopulations. This subpopulation feeds heavily on ringed seals in late spring, when newly weaned and easily hunted seal pups are abundant. The late spring hunting season ends for polar bears when the ice begins to melt and break up, and they fast or eat little during the summer until the sea freezes again.
Due to warming air temperatures, ice-floe breakup in western Hudson Bay is currently occurring three weeks earlier than it did 30 years ago, reducing the duration of the polar bear feeding season. The body condition of polar bears has declined during this period; the average weight of lone (and likely pregnant) female polar bears fell substantially between 1980 and 2004. Between 1987 and 2004, the Western Hudson Bay population declined by 22%, although the population was listed as "stable" as of 2017. As climate change melts sea ice, the U.S. Geological Survey projects that two-thirds of polar bears will disappear by 2050.
In Alaska, the effects of sea ice shrinkage have contributed to higher mortality rates in polar bear cubs, and have led to changes in the denning locations of pregnant females. The proportion of maternity dens on sea ice changed from 62% between 1985 and 1994 to 37% between 1998 and 2004; the Alaskan population thus now more closely resembles the world population in that it is more likely to den on land. In recent years, polar bears in the Arctic have undertaken longer than usual swims to find prey, possibly resulting in four recorded drownings in the unusually large ice pack regression of 2005.
Polar bears have also begun ranging into new territory. While still uncommon, polar bears are increasingly sighted ashore in larger numbers, staying on the mainland for longer periods during the summer months, particularly in northern Canada, and traveling farther inland. This may cause an increased reliance on terrestrial diets, such as goose eggs, waterfowl and caribou, as well as increased human–bear conflict.
Polar bears accumulate high levels of persistent organic pollutants such as polychlorinated biphenyls (PCBs) and chlorinated pesticides. Due to their position at the top of the ecological pyramid, with a diet heavy in blubber in which halocarbons concentrate, their bodies are among the most contaminated of Arctic mammals. Halocarbons (also known as organohalogens) are known to be toxic to other animals, because they mimic hormone chemistry, and biomarkers such as immunoglobulin G and retinol suggest similar effects on polar bears. PCBs have received the most study, and they have been associated with birth defects and immune system deficiency.
Many chemicals, such as PCBs and DDT, have been internationally banned due to the recognition of their harm to the environment. Their concentrations in polar bear tissues continued to rise for decades after being banned, as these chemicals spread through the food chain. Since then, the trend seems to have abated, with tissue concentrations of PCBs declining between studies performed from 1989 to 1993 and studies performed from 1996 to 2002. During the same time periods, DDT was found to be notably lower in the Western Hudson Bay population only.
Oil and gas development in polar bear habitat can affect the bears in a variety of ways. An oil spill in the Arctic would most likely concentrate in the areas where polar bears and their prey are also concentrated, such as sea ice leads. Because polar bears rely partly on their fur for insulation and soiling of the fur by oil reduces its insulative value, oil spills put bears at risk of dying from hypothermia. Polar bears exposed to oil spill conditions have been observed to lick the oil from their fur, leading to fatal kidney failure. Maternity dens, used by pregnant females and by females with infants, can also be disturbed by nearby oil exploration and development. Disturbance of these sensitive sites may trigger the mother to abandon her den prematurely, or abandon her litter altogether.
Steven Amstrup and other U.S. Geological Survey scientists have predicted two-thirds of the world's polar bears may disappear by 2050, based on moderate projections for the shrinking of summer sea ice caused by climate change, though the validity of this study has been debated. The bears could disappear from Europe, Asia, and Alaska, and be depleted from the Canadian Arctic Archipelago and areas off the northern Greenland coast. By 2080, they could disappear from Greenland entirely and from the northern Canadian coast, leaving only dwindling numbers in the interior Arctic Archipelago. However, in the short term, some polar bear populations in historically colder regions of the Arctic may temporarily benefit from a milder climate, as multiyear ice that is too thick for seals to create breathing holes is replaced by thinner annual ice.
Polar bears diverged from brown bears 400,000–600,000 years ago and have survived past periods of climate fluctuation. It has been claimed that polar bears will be able to adapt to terrestrial food sources as the sea ice they use to hunt seals disappears. However, most polar bear biologists think that polar bears will be unable to completely offset the loss of calorie-rich seal blubber with terrestrial foods, and that they will be outcompeted by brown bears in this terrestrial niche, ultimately leading to a population decline.
Warnings about the future of the polar bear are often contrasted with the fact that worldwide population estimates have increased over the past 50 years and are relatively stable today. Some estimates of the global population were around 5,000 to 10,000 in the early 1970s; other estimates were 20,000 to 40,000 during the 1980s. Current estimates put the global population at between 20,000 and 25,000 or 22,000 and 31,000.
There are several reasons for the apparent discordance between past and projected population trends. First, estimates from the 1950s and 1960s were based on stories from explorers and hunters rather than on scientific surveys. Second, controls of harvesting were introduced that allowed this previously overhunted species to recover. Third, the recent effects of climate change have affected sea ice abundance in different areas to varying degrees.
Debate over the listing of the polar bear under endangered species legislation has put conservation groups and Canada's Inuit at opposing positions; the Nunavut government and many northern residents have condemned the U.S. initiative to list the polar bear under the Endangered Species Act. Many Inuit believe the polar bear population is increasing, and restrictions on commercial sport-hunting are likely to lead to a loss of income to their communities.
For the indigenous peoples of the Arctic, polar bears have long played an important cultural and material role. Polar bear remains have been found at hunting sites dating to 2,500 to 3,000 years ago, and 1,500-year-old cave paintings of polar bears have been found in the Chukchi Peninsula. Indeed, it has been suggested that Arctic peoples' skills in seal hunting and igloo construction have been in part acquired from the polar bears themselves.
The Inuit and Alaska Natives have many folk tales featuring the bears including legends in which bears are humans when inside their own houses and put on bear hides when going outside, and stories of how the constellation that is said to resemble a great bear surrounded by dogs came into being. These legends reveal a deep respect for the polar bear, which is portrayed as both spiritually powerful and closely akin to humans. The human-like posture of bears when standing and sitting, and the resemblance of a skinned bear carcass to the human body, have probably contributed to the belief that the spirits of humans and bears were interchangeable.
Among the Chukchi and Yupik of eastern Siberia, there was a longstanding shamanistic ritual of "thanksgiving" to the hunted polar bear. After killing the animal, its head and skin were removed and cleaned and brought into the home, and a feast was held in the hunting camp in its honor. To appease the spirit of the bear, traditional song and drum music was played, and the skull was ceremonially fed and offered a pipe. Only once the spirit was appeased was the skull separated from the skin, taken beyond the bounds of the homestead, and placed in the ground, facing north.
The Nenets of north-central Siberia placed particular value on the talismanic power of the prominent canine teeth. These were traded in the villages of the lower Yenisei and Khatanga rivers to the forest-dwelling peoples further south, who would sew them into their hats as protection against brown bears. It was believed that the "little nephew" (the brown bear) would not dare to attack a man wearing the tooth of its powerful "big uncle", the polar bear. The skulls of killed polar bears were buried at sacred sites, and altars, called "sedyangi", were constructed out of the skulls. Several such sites have been preserved on the Yamal Peninsula.
Their distinctive appearance and their association with the Arctic have made polar bears popular icons, especially in those areas where they are native. The Canadian two-dollar coin carries an image of a lone polar bear on its reverse side, while a special millennium edition featured three. Vehicle licence plates in the Northwest Territories in Canada are in the shape of a polar bear, as was the case in Nunavut until 2012; these now display polar bear artwork instead. The polar bear is the mascot of Bowdoin College, Maine; the University of Alaska Fairbanks; and the 1988 Winter Olympics held in Calgary. The Eisbären Berlin hockey team uses a roaring polar bear as their logo, and the Charlotte, North Carolina hockey team the Charlotte Checkers uses a polar bear named Chubby Checker as their mascot.
Coca-Cola has used images of the polar bear in its advertising, and Polar Beverages, Nelvana, Bundaberg Rum, Klondike bars, and Fox's Glacier Mints all feature polar bears in their logos.
Polar bears are popular in fiction, particularly in books for children or teenagers. For example, "The Polar Bear Son" is adapted from a traditional Inuit tale. The animated television series "Noah's Island" features a polar bear named Noah as the protagonist. Polar bears feature prominently in "East" ("North Child" in the UK) by Edith Pattou, "The Bear" by Raymond Briggs (adapted into an animated short in 1998), and Chris d'Lacey's "The Fire Within" series. The "panserbjørne" of Philip Pullman's fantasy trilogy "His Dark Materials" are sapient, dignified polar bears who exhibit anthropomorphic qualities, and feature prominently in the 2007 film adaptation of "The Golden Compass". The television series "Lost" features polar bears living on the tropical island setting.
Punic Wars
The Punic Wars were a series of three wars fought between Rome and Carthage from 264 BC to 146 BC. At the time, they were some of the largest wars that had ever taken place. The term "Punic" comes from the Latin word "Punicus" (or "Poenicus"), meaning "Carthaginian", with reference to the Carthaginians' Phoenician ancestry.
The main cause of the Punic Wars was the conflict of interests between the existing Carthaginian Empire and the expanding Roman Republic. The Romans were initially interested in expansion via Sicily (which at that time was a cultural melting pot), part of which lay under Carthaginian control. At the start of the First Punic War (264–241 BC), Carthage was the dominant power of the Western Mediterranean, with an extensive maritime empire. Rome was a rapidly ascending power in Italy, but it lacked the naval strength of Carthage. The Second Punic War (218–201 BC) opened with Hannibal's crossing of the Alps in 218 BC, followed by his prolonged but ultimately failed campaign in mainland Italy. By the end of the Third Punic War (149–146 BC), after more than a hundred years and the loss of many hundreds of thousands of soldiers on both sides, Rome had conquered Carthage's empire, completely destroyed the city, and become the most powerful state of the Western Mediterranean.
With the end of the Macedonian Wars – which ran concurrently with the Punic Wars – and the defeat of the Seleucid King Antiochus III the Great in the Roman–Seleucid War (Treaty of Apamea, 188 BC) in the eastern Mediterranean, Rome emerged as the dominant Mediterranean power and one of the most powerful cities in classical antiquity. The Roman victories over Carthage in these wars gave Rome a preeminent status it would retain until the 5th century AD.
During the mid-3rd century BC, Carthage was a large city located on the coast of modern Tunisia. Founded by the Phoenicians in the mid-9th century BC, it was a powerful thalassocratic city-state with a vast commercial network. Of the great city-states in the western Mediterranean, only Rome rivaled it in power, wealth, and population. While Carthage's navy was the largest in the ancient world at the time, it did not maintain a large, permanent, standing army. Instead, Carthage relied mostly on mercenaries, especially the indigenous Numidians, to fight its wars. These mercenaries were primarily led by officers who were Carthaginian citizens. The Carthaginians were famed for their abilities as sailors, and many Carthaginians from the lower classes served in their navy, which provided them with a stable income and career.
In 200 BC, the Roman Republic had gained control of the Italian peninsula south of the Po River. Unlike Carthage, Rome had a large and disciplined army, but lacked a navy at the start of the First Punic War. This left the Romans at a disadvantage until the construction of large fleets during the war.
The First Punic War (264–241 BC) was fought partly on land in Sicily and Africa, but was largely a naval war. It began as a local conflict in Sicily between Hiero II of Syracuse and the Mamertines of Messina. The Mamertines enlisted the aid of the Carthaginian navy, and subsequently betrayed them by entreating the Roman Senate for aid against Carthage. The Romans sent a garrison to secure Messina, so the outraged Carthaginians then lent aid to Syracuse. Tensions quickly escalated into a full-scale war between Carthage and Rome for the control of Sicily.
After a harsh defeat at the Battle of Agrigentum in 262 BC, the Carthaginian leadership resolved to avoid further direct land-based engagements with the powerful Roman legions, and concentrate on the sea where they believed Carthage's large navy had the advantage. Initially the Carthaginian navy prevailed. In 260 BC, they defeated the fledgling Roman navy at the Battle of the Lipari Islands. Rome responded by drastically expanding its navy in a very short time. Within two months, the Romans had a fleet of over one hundred warships.
Aware that they could not defeat the Carthaginians in traditional ramming combat, the Romans used the "corvus", an assault bridge, to leverage their superior infantry. The hinged bridge would be swung down onto enemy vessels with a sharp spike to secure the two ships together. Roman legionaries could then board and capture Carthaginian ships. This innovative Roman tactic reduced the Carthaginian navy's advantage in ship-to-ship engagements.
However, the "corvus" was also cumbersome and dangerous, and was eventually phased out as the Roman navy became more experienced and tactically proficient. Save for the disastrous defeat at the Battle of Tunis in Africa, and the early naval defeats, the First Punic War was a nearly unbroken string of Roman victories. In 241 BC, Carthage signed a peace treaty under the terms of which they evacuated Sicily and paid Rome a large war indemnity. The long war was costly to both powers, but Carthage was more seriously destabilized.
According to Polybius, there had been several trade agreements between Rome and Carthage, even a mutual alliance against king Pyrrhus of Epirus. When Rome and Carthage made peace in 241 BC, Rome secured the release of all 8,000 prisoners of war without ransom and, furthermore, received a considerable amount of silver as a war indemnity. However, Carthage refused to deliver to Rome the Roman deserters serving among their troops. A first issue for dispute was that the initial treaty, agreed upon by Hamilcar Barca and the Roman commander in Sicily, had a clause stipulating that the Roman popular assembly had to accept the treaty in order for it to be valid. The assembly not only rejected the treaty but increased the indemnity Carthage had to pay.
Carthage had a liquidity problem and attempted to gain financial help from Egypt, a mutual ally of Rome and Carthage, but failed. This resulted in delay of payments owed to the mercenary troops that had served Carthage in Sicily, leading to a climate of mutual mistrust and, finally, a revolt supported by the Libyan natives, known as the Mercenary War (240–238 BC). During this war, Rome and Syracuse both aided Carthage, although traders from Italy seem to have done business with the insurgents. Some of them were caught and punished by Carthage, aggravating the political climate, which had started to improve in recognition of the old alliance and treaties.
During the uprising in the Punic mainland, the mercenary troops in Corsica and Sardinia toppled Punic rule and briefly established their own, but were expelled by a native uprising. After securing aid from Rome, the exiled mercenaries then regained authority on the island of Sardinia. For several years, a brutal campaign was fought to quell the insurgent natives. Like many Sicilians, they would ultimately rise again in support of Carthage during the Second Punic War.
Eventually, Rome annexed Corsica and Sardinia by revisiting the terms of the treaty that ended the first Punic War. As Carthage was under siege and engaged in a difficult civil war, they grudgingly accepted the loss of these islands and the subsequent Roman conditions for ongoing peace, which also increased the war indemnity levied against Carthage after the first Punic War. This eventually plunged relations between the two powers to a new low point.
After Carthage emerged victorious from the Mercenary War there were two opposing factions: the reformist party was led by Hamilcar Barca while the other, more conservative, faction was represented by Hanno the Great and the old Carthaginian aristocracy. Hamilcar had led the initial Carthaginian peace negotiations and was blamed for the clause that allowed the Roman popular assembly to increase the war indemnity and annex Corsica and Sardinia, but his superlative generalship was instrumental in enabling Carthage to ultimately quell the mercenary uprising, ironically fought against many of the same mercenary troops he had trained. Hamilcar ultimately left Carthage for the Iberian peninsula where he captured rich silver mines and subdued many tribes who fortified his army with levies of native troops.
Hanno had lost many elephants and soldiers when he became complacent after a victory in the Mercenary War. Further, when he and Hamilcar were joint supreme commanders of Carthage's field armies, the soldiers had supported Hamilcar when the two men's personalities clashed. On the other hand, Hanno was responsible for the greatest territorial expansion of Carthage's hinterland during his rule as "strategus" and wanted to continue such expansion. However, the Numidian king of the relevant area was now a son-in-law of Hamilcar and had supported Carthage during a crucial moment in the Mercenary War. While Hamilcar was able to obtain the resources for his aim, the Numidians in the Atlas Mountains were not conquered, as Hanno had suggested, but became vassals of Carthage.
The Iberian conquest was begun by Hamilcar Barca and his other son-in-law, Hasdrubal the Fair, who ruled relatively independently of Carthage and signed the Ebro Treaty with Rome. Hamilcar died in battle in 228 BC. Around this time, Hasdrubal became Carthaginian commander in Iberia (229 BC). He maintained this post for some eight years until 221 BC. Soon the Romans became aware of a burgeoning alliance between Carthage and the Celts of the Po river valley in northern Italy. The latter were amassing forces to invade Italy, presumably with Carthaginian backing. Thus, the Romans preemptively invaded the Po region in 225 BC. By 220 BC, the Romans had annexed the area as Gallia Cisalpina. Hasdrubal was assassinated around the same time (221 BC), bringing Hannibal to the fore. It seems that, having apparently dealt with the threat of a Gallo-Carthaginian invasion of Italy (and perhaps with the original Carthaginian commander killed), the Romans lulled themselves into a false sense of security. Thus, Hannibal took the Romans by surprise a mere two years later (218 BC) by merely reviving and adapting the original Gallo-Carthaginian invasion plan of his brother-in-law Hasdrubal.
After Hasdrubal's assassination by a Celtic assassin, Hamilcar's young sons took over, with Hannibal becoming the "strategus" of Iberia, although this decision was not undisputed in Carthage. The output of the Iberian silver mines allowed for the financing of a standing army and the payment of the war indemnity to Rome. The mines also served as a tool for political influence, creating a faction in Carthage's magistrate that was called the "Barcino".
In 219 BC, Hannibal attacked the town of Saguntum, which stood under the special protection of Rome. According to Roman tradition, Hannibal had been made to swear by his father never to be a friend of Rome, and he certainly did not take a conciliatory attitude when the Romans berated him for crossing the river Iberus (Ebro), which Carthage was bound by treaty not to cross. Hannibal did not cross the Ebro River (Saguntum was near modern Valencia – well south of the river) in arms, and the Saguntines provoked his attack by attacking their neighboring tribes who were Carthaginian protectorates and by massacring pro-Punic factions in their city. Rome had no legal protection pact with any tribe south of the Ebro River. Nonetheless, they asked Carthage to hand Hannibal over, and when the Carthaginian oligarchy refused, Rome declared war on Carthage.
The 'Barcid Empire' consisted of the Punic territories in Iberia. According to the historian Pedro Barceló, it can be described as a private military-economic hegemony backed by the two independent powers, Carthage and Gades (modern Cádiz). These shared the profits of the silver mines in southern Iberia with the Barcas family and closely followed Hellenistic diplomatic customs. Gades played a supporting role in this field, but Hannibal visited the local temple to conduct ceremonies before launching his campaign against Rome. The Barcid Empire was strongly influenced by the Hellenistic kingdoms of the time and, unlike Carthage itself, minted silver coins during its short existence.
The Second Punic War (218–201 BC) is most remembered for the Carthaginian general Hannibal's crossing of the Alps. His army invaded Italy from the north and resoundingly defeated the Roman army in several battles, but never achieved the ultimate goal of causing a political break between Rome and its allies.
While fighting Hannibal in Italy, Hispania, and Sicily, Rome simultaneously fought against Macedon in the First Macedonian War. Eventually, the war was taken to Africa, where Carthage was defeated at the Battle of Zama (201 BC) by Scipio Africanus. The end of the war saw Carthage's control reduced to only the city itself.
There were three military theaters in this war: Italy, where Hannibal defeated the Roman legions repeatedly; Hispania, where Hasdrubal, a younger brother of Hannibal, defended the Carthaginian colonial cities with mixed success until eventually retreating into Italy; and Sicily, where the Romans held military supremacy.
After assaulting Saguntum in Hispania (219 BC), Hannibal attacked Italy in 218 BC by leading the Iberians and three dozen elephants through the Alps. Although Hannibal surprised the Romans and thoroughly beat them on the battlefields of Italy, he lost his only siege engines and most of his elephants to the cold temperatures and icy mountain paths. In the end he could defeat the Romans in the field, but could not take the strategically crucial city of Rome itself, leaving him unable to win the war.
Hannibal defeated the Roman legions in several major engagements, including the Battle of the Trebia (December 218 BC), the Battle of Lake Trasimene (217 BC) and most famously the Battle of Cannae (216 BC), but his long-term strategy failed. Lacking siege engines and sufficient manpower to take the city of Rome itself, he had planned to turn the Italian allies against Rome and to starve the city out through a siege. However, with the exception of a few of the southern city-states, the majority of the Roman allies remained loyal and continued to fight alongside Rome, despite Hannibal's seemingly invincible army devastating the Italian countryside. Rome also exhibited an impressive ability to draft army after army of conscripts after each crushing defeat by Hannibal, allowing them to recover from the defeats at Cannae and elsewhere and to keep Hannibal cut off from aid.
Hannibal never successfully received any significant reinforcements from Carthage. Despite his many pleas, Carthage only ever sent reinforcements successfully to Hispania. This lack of reinforcements prevented Hannibal from decisively ending the conflict by conquering Rome through force of arms.
The Roman army under Quintus Fabius Maximus Verrucosus intentionally deprived Hannibal of open battle in Italy for the rest of the war, while making it difficult for Hannibal to forage for supplies. Nevertheless, Rome was also incapable of bringing the conflict in the Italian theatre to a decisive close. Not only did Roman legions contend with Hannibal in Italy and with Hannibal's brother Hasdrubal in Hispania, but Rome had embroiled itself in yet another foreign war, the first of its Macedonian wars against Carthage's ally Philip V, at the same time.
Through Hannibal's inability to take strategically important Italian cities, through the general loyalty Italian allies showed to Rome, and through Rome's own inability to match Hannibal's generalship in the field, Hannibal's campaign in Italy continued inconclusively for sixteen years. Though he managed to sustain his forces for those years, he did so only by ravaging farmland to keep his army supplied, which stirred anger among Rome's subject states. Realizing that Hannibal's army was quickly outrunning its supply lines, Rome used its command of the sea to take countermeasures against Hannibal's home base in Africa and stopped the flow of supplies. Hannibal quickly turned back and rushed to the home defense, but suffered defeat in the Battle of Zama (202 BC).
In Hispania, a young Roman commander, Publius Cornelius Scipio (later to be given the agnomen "Africanus" because of his feats during this war), eventually defeated the larger but divided Carthaginian forces under Hasdrubal and two other Carthaginian generals. Abandoning Hispania, Hasdrubal moved to bring his mercenary army into Italy to reinforce Hannibal, but never made it and was defeated by Roman forces near the Alps.
The Third Punic War (149–146 BC) involved an extended siege of Carthage, ending in the city's thorough destruction. The resurgence of the struggle can be explained by growing anti-Roman agitations in Hispania and Greece, and the visible improvement of Carthaginian wealth and martial power in the fifty years since the Second War.
With no military, Carthage suffered raids from its neighbor Numidia. Under the terms of the treaty with Rome, such disputes were arbitrated by the Roman Senate. Because Numidia was a favored client state of Rome, Roman rulings were slanted heavily in favor of the Numidians. After some fifty years of this condition, Carthage had managed to discharge its war indemnity to Rome, and considered itself no longer bound by the restrictions of the treaty, although Rome believed otherwise. Carthage mustered an army to repel Numidian forces. It immediately lost the war with Numidia, placing itself in debt yet again, this time to Numidia.
This new-found Punic militarism alarmed many Romans, including Cato the Elder who, after a voyage to Carthage, ended all his speeches, no matter what the topic, by saying: "Ceterum censeo Carthaginem esse delendam" – "And I also think that Carthage must be destroyed".
In 149 BC, in an attempt to draw Carthage into open conflict, Rome made a series of escalating demands, one being the surrender of three hundred children of the nobility as hostages, and finally ending with the near-impossible demand that the city be demolished and rebuilt away from the coast, deeper into Africa. When the Carthaginians refused this last demand, Rome declared the Third Punic War. Having previously relied on mercenaries to fight their wars for them, the Carthaginians were now forced into a more active role in the defense of their city. They made thousands of makeshift weapons in a short time, even using women's hair for catapult strings, and were able to hold off the initial Roman attack. A second offensive under the command of Scipio Aemilianus resulted in a three-year siege before he breached the walls, sacked the city, and systematically burned Carthage to the ground in 146 BC. When the war ended, the remaining 50,000 Carthaginians, a small part of the original pre-war population, were sold into slavery by the victors – the normal fate in antiquity of inhabitants of sacked cities. Carthage was systematically burned for 17 days; the city's walls and buildings were utterly destroyed. The remaining Carthaginian territories were annexed by Rome and reconstituted to become the Roman province of Africa.
After Rome emerged victorious, significant Carthaginian settlements, such as those in Mauretania, were taken over and aggrandized by the Romans. Volubilis, for example, was an important Roman town situated near the westernmost border of the Roman conquests. It was built on the site of a previous Carthaginian settlement that overlies an earlier neolithic habitation.
Peter Carey (novelist)
Peter Philip Carey AO (born 7 May 1943) is an Australian novelist. Carey has won the Miles Franklin Award three times and is frequently named as Australia's next contender for the Nobel Prize in Literature. Carey is one of only five writers to have won the Booker Prize twice—the others being J. G. Farrell, J. M. Coetzee, Hilary Mantel and Margaret Atwood. Carey won his first Booker Prize in 1988 for "Oscar and Lucinda", and won for the second time in 2001 with "True History of the Kelly Gang". In May 2008 he was nominated for the Best of the Booker Prize.
In addition to writing fiction, he collaborated on the screenplay of the film "Until the End of the World" with Wim Wenders and is executive director of the Master of Fine Arts in Creative Writing program at Hunter College, part of the City University of New York.
Peter Carey was born in Bacchus Marsh, Victoria, in 1943. His parents ran a General Motors dealership, Carey Motors. He attended Bacchus Marsh State School from 1948 to 1953, then boarded at Geelong Grammar School between 1954 and 1960. In 1961, Carey enrolled in a science degree at the new Monash University in Melbourne, majoring in chemistry and zoology, but cut his studies short because of a car accident and a lack of interest. It was at university that he met his first wife, Leigh Weetman, who was studying German and philosophy, and who also dropped out.
In 1962, he began to work in advertising. He was employed by various Melbourne agencies between 1962 and 1967, including on campaigns for Volkswagen and Lindeman's Wine. His advertising work brought him into contact with older writers who introduced him to recent European and American fiction: "I didn't really start getting an education until I worked in advertising with people like Barry Oakley and Morris Lurie—and Bruce Petty had an office next door."
During this time, he read widely, particularly the works of Samuel Beckett, William Faulkner, James Joyce, Franz Kafka, and Gabriel García Márquez, and began writing on his own, receiving his first rejection slip in 1964, the same year he married Weetman. Over the next few years he wrote five novels—"Contacts" (1964–1965), "Starts Here, Ends Here" (1965–1967), "The Futility Machine" (1966–1967), "Wog" (1969), and "Adventures on Board the Marie" [sic] "Celeste" (1971). None of them were published. Sun Books accepted "The Futility Machine" but did not proceed with publication, and "Adventures on Board the Marie Celeste" was accepted by Outback Press before being withdrawn by Carey himself. These and other unpublished manuscripts from the period—including twenty-one short stories—are now held by the Fryer Library at the University of Queensland.
Carey's only publications during the 1960s were "Contacts" (a short extract from the unpublished novel of the same name, in "Under Twenty-Five: An Anthology", 1966) and "She Wakes" (a short story, in "Australian Letters", 1967). Towards the end of the decade, Carey and Weetman abandoned Australia with "a certain degree of self-hatred", travelling through Europe and Iran before settling in London in 1968, where Carey continued to write highly regarded advertising copy and unpublished fiction.
Returning to Australia in 1970, Carey once again did advertising work in Melbourne and Sydney. He also kept writing, and gradually broke through with editors, publishing short stories in magazines and newspapers such as "Meanjin" and "Nation Review". Most of these were collected in his first book, "The Fat Man In History", which appeared in 1974. In the same year Carey moved to Balmain in Sydney to work for Grey Advertising.
In 1976, Carey moved to Queensland and joined an alternative community named Starlight in Yandina, north of Brisbane, with his new partner, the painter Margot Hutcheson, with whom he lived in the 1970s and 1980s. He remained with Grey, writing in Yandina for three weeks, then spending the fourth week at the agency in Sydney. It was during this time that he produced most of the stories collected in "War Crimes" (1979), as well as "Bliss" (1981), his first published novel.
Carey started his own advertising agency in 1980, the Sydney-based McSpedden Carey Advertising Consultants, in partnership with Bani McSpedden. After many years of separation, Leigh Weetman asked for a divorce in 1980 so that she could remarry, and Carey agreed. In 1981, he moved to Bellingen in northern New South Wales. There he wrote "Illywhacker", published in 1985. In the same year he married theatre director Alison Summers. "Illusion", a stage musical Carey wrote with Mike Mullins and composer Martin Armiger, was performed at the 1986 Adelaide Festival of the Arts, and a studio cast recording of the musical was nominated for a 1987 ARIA Award, with Carey nominated as lyricist.
The decade—and the Australian phase of Carey's career—culminated with the publication of "Oscar and Lucinda" (1988), which won the Booker McConnell Prize (as it was then known) and brought the author international recognition. Carey explained that the novel was inspired, in part, by his time in Bellingen:
Carey sold his share of McSpedden Carey and in 1990 moved with Alison Summers and their son to New York, where he took a job teaching creative writing at New York University. He later said that New York would not have been his first choice of place to live, and that moving there was his wife's idea. Carey and Summers divorced in 2005 after a four-year separation. Carey is now married to the British-born publisher Frances Coady.
"The Tax Inspector" (1991), begun in Australia, was the first book he completed in the United States. It was followed by "The Unusual Life of Tristan Smith" (1994), a fable in which he explored the relationship between Australia and America, disguised in the novel as "Efica" and "Voorstand". This is a relationship that has preoccupied him throughout his career, going back to "Bliss" (1981), "Illywhacker" (1985), and the early short stories. Nevertheless, Carey continued to set his fiction primarily in Australia and remained diffident about writing explicitly on American themes. In a piece on "True History of the Kelly Gang" (2001), Mel Gussow reported that:
It was only after nearly two decades in the United States that he embarked on "Parrot and Olivier in America" (2010), loosely based on events in the life of Alexis de Tocqueville. Carey says "Tocqueville opened a door I could enter. I saw the present in the past. It was accessible, imaginable." Carey continues to extend his canvas; in his novel, "The Chemistry of Tears" (2012), "contemporary London is brought intimately in touch with ... a 19th-century Germany redolent of the Brothers Grimm".
In 1998, Carey was accused of snubbing Queen Elizabeth II by declining an invitation to meet her after winning the Commonwealth Writers Prize for "Jack Maggs" (1997). While Carey is a republican, in the Australian sense, he insists that no offence was intended:
The meeting did eventually take place, with the Queen remarking, according to Carey, "I believe you had a little trouble getting here."
The unhappy circumstances of Carey's break-up with Alison Summers received publicity (largely in Australia) in 2006 when "Theft: A Love Story" appeared, depicting the toxic relationship between its protagonist, Butcher Bones, and his ex-wife, known only as "the Plaintiff".
In April 2015 he, alongside Michael Ondaatje, Francine Prose, Teju Cole, Rachel Kushner and Taiye Selasi, withdrew from the PEN American Center gala honouring the French satirical magazine "Charlie Hebdo" with its "Freedom of Expression Courage" award. He stated that one of his reasons for doing so was "PEN's seeming blindness to the cultural arrogance of the French nation, which does not recognise its moral obligation to a large and disempowered segment of their population." In addition, 204 PEN members, including Teju Cole and Deborah Eisenberg, wrote to PEN, objecting to its decision to give the award to Charlie Hebdo.
Carey has been awarded three honorary degrees. He has been elected a Fellow of the Royal Society of Literature (1989), an Honorary Fellow of the Australian Academy of the Humanities (2001), a Member of the American Academy of Arts and Sciences (2003), and a Member of the American Academy of Arts and Letters (2016), which has also awarded him its Harold D Vursell Memorial Award (2012). In 2010, he appeared on two Australian postage stamps in a series dedicated to "Australian Legends". On 11 June 2012, Carey was named an Officer of the Order of Australia for "distinguished service to literature as a novelist, through international promotion of the Australian identity, as a mentor to emerging writers." And in 2014, Carey was awarded an honorary Doctor of Letters (honoris causa) by Sydney University.
Carey has won numerous literary awards, including:
Stories from Carey's first two collections have been repackaged in "The Fat Man in History and Other Stories" (1980), "Exotic Pleasures" (1990), and "Collected Stories" (1994); the last also includes three previously uncollected stories: "Joe" ("Australian New Writing", 1973), "A Million Dollars Worth of Amphetamines" ("Nation Review", 1975), and "Concerning the Greek Tyrant" ("The Tabloid Story Pocket Book", 1978).
Punched card
A punched card or punch card is a piece of stiff paper that can be used to contain digital data represented by the presence or absence of holes in predefined positions. Digital data can be used for data processing applications or used to directly control automated machinery.
Punched cards were widely used through much of the 20th century in the data processing industry, where specialized and increasingly complex unit record machines, organized into semiautomatic data processing systems, used punched cards for data input, output, and storage. The IBM 12-row/80-column punched card format came to dominate the industry. Many early digital computers used punched cards as the primary medium for input of both computer programs and data.
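The 12-row format mentioned above encoded one character per column by combining "zone" punches (rows 12, 11, and 0) with "digit" punches (rows 0–9). As a hedged illustration only, the Python sketch below models the digit-and-letter portion of this scheme; the full card code also assigned multi-punch combinations to special characters, which this simplified model omits, and the function name and row labels are choices made for the example, not standard API names.

```python
# Simplified model of the IBM 12-row card code: digits and letters only.
# Rows are named "12", "11", "0", "1" ... "9" from top to bottom.
# A–I = zone 12 + digits 1–9; J–R = zone 11 + 1–9; S–Z = zone 0 + 2–9.
ZONES = {"12": "ABCDEFGHI", "11": "JKLMNOPQR", "0": "STUVWXYZ"}

def decode_column(punches):
    """Decode one card column given the set of rows punched in it."""
    punches = set(punches)
    # A single punch in rows 0-9 encodes that digit directly.
    if len(punches) == 1:
        (row,) = punches
        if row.isdigit() and row not in ("12", "11"):
            return row
    # A zone punch plus one digit punch encodes a letter.
    for zone, letters in ZONES.items():
        if zone in punches:
            digits = punches - {zone}
            if len(digits) == 1:
                (d,) = digits
                idx = int(d) - (2 if zone == "0" else 1)
                if 0 <= idx < len(letters):
                    return letters[idx]
    raise ValueError(f"unrecognized punch combination: {punches}")
```

For example, "A" was punched as the 12-zone plus a 1, while "S" combined the 0-zone with a 2; reading an 80-column card amounted to applying such a decoding to each column in turn.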
While punched cards are now obsolete as a storage medium, as of 2012, some voting machines still use punched cards to record votes.
The idea of control and data storage via punched holes was developed over a long period of time. In most cases there is no evidence that each of the inventors was aware of the earlier work.
Basile Bouchon developed the control of a loom by punched holes in paper tape in 1725. The design was improved by his assistant Jean-Baptiste Falcon and by Jacques Vaucanson. Although these improvements controlled the patterns woven, they still required an assistant to operate the mechanism.
In 1804 Joseph Marie Jacquard demonstrated a mechanism to automate loom operation. A number of punched cards were linked into a chain of any length. Each card held the instructions for shedding (raising and lowering the warp) and selecting the shuttle for a single pass.
Semyon Korsakov was reputedly the first to propose punched cards in informatics for information store and search. Korsakov announced his new method and machines in September 1832.
Charles Babbage proposed the use of "Number Cards", "pierced with certain holes and stand[ing] opposite levers connected with a set of figure wheels ... advanced they push in those levers opposite to which there are no holes on the cards and thus transfer that number together with its sign" in his description of the Calculating Engine's Store. There is no evidence that he built a practical example.
In 1881 Jules Carpentier developed a method of recording and playing back performances on a harmonium using punched cards. The system was called the "Mélographe Répétiteur" and "writes down ordinary music played on the keyboard dans la langage de Jacquard" ("in the language of Jacquard"), that is, as holes punched in a series of cards. By 1887 Carpentier had separated the mechanism into the "Melograph", which recorded the player's key presses, and the "Melotrope", which played the music.
At the end of the 1800s Herman Hollerith invented the recording of data on a medium that could then be read by a machine. After some initial trials with paper tape, he settled on punched cards, developing punched card data processing technology for the 1890 U.S. census. His tabulating machines read and summarized data stored on punched cards, and they began to be used for government and commercial data processing.
Initially, these electromechanical machines only counted holes, but by the 1920s they had units for carrying out basic arithmetic operations.
Hollerith founded the "Tabulating Machine Company" (1896) which was one of four companies that were amalgamated via stock acquisition to form a fifth company, Computing-Tabulating-Recording Company (CTR) (1911), later renamed International Business Machines Corporation (IBM) (1924). Other companies entering the punched card business included The Tabulator Limited (1902), Deutsche Hollerith-Maschinen Gesellschaft mbH (Dehomag) (1911), Powers Accounting Machine Company (1911), Remington Rand (1927), and H.W. Egli Bull (1931). These companies, and others, manufactured and marketed a variety of punched cards and unit record machines for creating, sorting, and tabulating punched cards, even after the development of electronic computers in the 1950s.
Both IBM and Remington Rand tied punched card purchases to machine leases, a violation of the 1914 Clayton Antitrust Act. In 1932, the US government took both to court on this issue. Remington Rand settled quickly. IBM, viewing its business as providing a service with the cards as part of the machine, fought all the way to the Supreme Court and lost in 1936; the court ruled that IBM could only set card specifications.
"By 1937... IBM had 32 presses at work in Endicott, N.Y., printing, cutting and stacking five to 10 million punched cards every day." Punched cards were even used as legal documents, such as U.S. Government checks and savings bonds.
During World War II punched card equipment was used by the Allies in some of their efforts to decrypt Axis communications. See, for example, Central Bureau in Australia. At Bletchley Park in England, "some 2 million punched cards a week were being produced, indicating the sheer scale of this part of the operation".
Punched card technology developed into a powerful tool for business data-processing. By 1950 punched cards had become ubiquitous in industry and government. "Do not fold, spindle or mutilate," a warning that appeared on some punched cards distributed as documents such as checks and utility bills to be returned for processing, became a motto for the post-World War II era.
In 1955 IBM signed a consent decree requiring, amongst other things, that IBM would by 1962 have no more than one-half of the punched card manufacturing capacity in the United States. Tom Watson Jr.'s decision to sign this decree, where IBM saw the punched card provisions as the most significant point, completed the transfer of power to him from Thomas Watson, Sr.
The UNITYPER introduced magnetic tape for data entry in the 1950s. During the 1960s, the punched card was gradually replaced as the primary means for data storage by magnetic tape, as better, more capable computers became available. Mohawk Data Sciences introduced a magnetic tape encoder in 1965, a system marketed as a keypunch replacement which was somewhat successful. Punched cards were still commonly used for entering both data and computer programs until the mid-1980s when the combination of lower cost magnetic disk storage, and affordable interactive terminals on less expensive minicomputers made punched cards obsolete for these roles as well. However, their influence lives on through many standard conventions and file formats. The terminals that replaced the punched cards, the IBM 3270 for example, displayed 80 columns of text in text mode, for compatibility with existing software. Some programs still operate on the convention of 80 text columns, although fewer and fewer do as newer systems employ graphical user interfaces with variable-width type fonts.
The terms "punched card", "punch card", and "punchcard" were all commonly used, as were "IBM card" and "Hollerith card" (after Herman Hollerith). IBM used "IBM card" or, later, "punched card" at first mention in its documentation and thereafter simply "card" or "cards". Specific formats were often indicated by the number of character positions available, e.g. "80-column card". A sequence of cards that is input to or output from some step in an application's processing is called a "card deck" or simply "deck". The rectangular, round, or oval bits of paper punched out were called chad ("chads") or "chips" (in IBM usage). Sequential card columns allocated for a specific use, such as names, addresses, multi-digit numbers, etc., are known as a "field". The first card of a group of cards, containing fixed or indicative information for that group, is known as a "master card". Cards that are not master cards are "detail cards".
The Hollerith punched cards used for the 1890 U.S. census were blank. Following that, cards commonly had printing such that the row and column position of a hole could be easily seen. Printing could include having fields named and marked by vertical lines, logos, and more. "General purpose" layouts (see, for example, the IBM 5081 below) were also available. For applications requiring master cards to be separated from following detail cards, the respective cards had different upper corner diagonal cuts and thus could be separated by a sorter. Other cards typically had one upper corner diagonal cut so that cards not oriented correctly, or cards with different corner cuts, could be identified.
Herman Hollerith was awarded three patents in 1889 for electromechanical tabulating machines. These patents described both paper tape and rectangular cards as possible recording media. The card shown in the January 8 patent was printed with a template and had hole positions arranged close to the edges so they could be reached by a railroad conductor's ticket punch, with the center reserved for written descriptions. Hollerith was originally inspired by railroad tickets that let the conductor encode a rough description of the passenger:
When use of the ticket punch proved tiring and error prone Hollerith developed the pantograph "keyboard punch". It featured an enlarged diagram of the card, indicating the positions of the holes to be punched. A printed reading board could be placed under a card that was to be read manually.
Hollerith envisioned a number of card sizes. In an article he wrote describing his proposed system for tabulating the 1890 U.S. census, Hollerith suggested a card of 3 inches by 5½ inches of Manila stock "would be sufficient to answer all ordinary purposes." The cards used in the 1890 census had round holes, 12 rows and 24 columns. A reading board for these cards can be seen at the Columbia University Computing History site. At some point, 3¼ by 7⅜ inches became the standard card size. These are the dimensions of the then current paper currency of 1862–1923.
Hollerith's original system used an ad-hoc coding system for each application, with groups of holes assigned specific meanings, e.g. sex or marital status. His tabulating machine had up to 40 counters, each with a dial divided into 100 divisions, with two indicator hands; one which stepped one unit with each counting pulse, the other which advanced one unit every time the other dial made a complete revolution. This arrangement allowed a count up to 9,999. During a given tabulating run counters were assigned specific holes or, using relay logic, combinations of holes.
Later designs led to a card with ten rows, each row assigned a digit value, 0 through 9, and 45 columns.
This card provided for fields to record multi-digit numbers that tabulators could sum, instead of their simply counting cards. Hollerith's 45 column punched cards are illustrated in Comrie's "The application of the Hollerith Tabulating Machine to Brown's Tables of the Moon".
By the late 1920s customers wanted to store more data on each punched card. Thomas J. Watson Sr., IBM’s head, asked two of his top inventors, Clair D. Lake and J. Royden Pierce, to independently develop ways to increase data capacity without increasing the size of the punched card. Pierce wanted to keep round holes and 45 columns, but allow each column to store more data. Lake suggested rectangular holes, which could be spaced more tightly, allowing 80 columns per punched card, thereby nearly doubling the capacity of the older format. Watson picked the latter solution, introduced as "The IBM Card", in part because it was compatible with existing tabulator designs and in part because it could be protected by patents and give the company a distinctive advantage.
This IBM card format, introduced in 1928,
has rectangular holes, 80 columns, and 12 rows. Card size is exactly 7⅜ by 3¼ inches (187.325 mm × 82.55 mm). The cards are made of smooth stock, 0.007 inches (0.18 mm) thick. There are about 143 cards to the inch (56 per centimetre). In 1964, IBM changed from square to round corners. They come typically in boxes of 2000 cards or as continuous form cards. Continuous form cards could be both pre-numbered and pre-punched for document control (checks, for example).
Initially designed to record responses to Yes–no questions, support for numeric, alphabetic and special characters was added through the use of columns and zones. The top three positions of a column are called zone punching positions, 12 (top), 11, and 0 (0 may be either a zone punch or a digit punch). For decimal data the lower ten positions are called digit punching positions, 0 (top) through 9. An arithmetic sign can be specified for a decimal field by overpunching the field's rightmost column with a zone punch: 12 for plus, 11 for minus (CR). For Pound sterling pre-decimalization currency a penny column represents the values zero through eleven; 10 (top), 11, then 0 through 9 as above. An arithmetic sign can be punched in the adjacent shilling column. Zone punches had other uses in processing, such as indicating a master card.
[Table of character punch combinations not reproduced here.] Note: The 11 and 12 zones were also called the X and Y zones, respectively.
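The signed-decimal overpunch convention described above (a 12 zone punch for plus, an 11 zone punch for minus, punched over the field's rightmost digit column) can be sketched as follows. This is an illustrative model only: the punch positions are represented as strings, and the helper name is invented for this sketch rather than taken from any machine's documentation.

```python
# Sketch of the signed-decimal "overpunch" convention: the rightmost digit
# column carries an extra zone punch, 12 for plus and 11 for minus.
# Punch positions are modeled as string labels, not machine codes.

def punch_signed_number(value):
    """Return per-column punch lists for a signed decimal field."""
    digits = str(abs(value))
    cols = [[d] for d in digits]            # one digit punch per column
    zone = "12" if value >= 0 else "11"     # sign overpunch on last column
    cols[-1] = [zone, digits[-1]]
    return cols

print(punch_signed_number(-47))   # [['4'], ['11', '7']]
print(punch_signed_number(123))   # [['1'], ['2'], ['12', '3']]
```

A card reader processing such a field would interpret the zone punch in the last column as the sign and the digit punches as the magnitude.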
In 1931 IBM began introducing upper-case letters and special characters (Powers-Samas had developed the first commercial alphabetic punched card representation in 1921). The 26 letters have two punches (zone [12,11,0] + digit [1–9]). The languages of Germany, Sweden, Denmark, Norway, Spain, Portugal and Finland require up to three additional letters; their punching is not shown here. Most special characters have two or three punches (zone [12,11,0, or none] + digit [2–7] + 8); a few special characters were exceptions: "&" is 12 only, "-" is 11 only, and "/" is 0 + 1. The Space character has no punches. The information represented in a column by a combination of zones [12, 11, 0] and digits [0–9] is dependent on the use of that column. For example, the combination "12-1" is the letter "A" in an alphabetic column, a plus signed digit "1" in a signed numeric column, or an unsigned digit "1" in a column where the "12" has some other use. The introduction of EBCDIC in 1964 defined columns with as many as six punches (zones [12,11,0,8,9] + digit [1–7]). IBM and other manufacturers used many different 80-column card character encodings. A 1969 American National Standard defined the punches for 128 characters and was named the "Hollerith Punched Card Code" (often referred to simply as "Hollerith Card Code"), honoring Hollerith.
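As an illustration of the two-punch letter scheme just described (one zone punch of 12, 11, or 0 plus one digit punch), the following sketch maps the 26 upper-case letters to their commonly documented punch pairs. The function name and data layout are invented for this example; note that the 0-zone letters S–Z start at digit 2, since 0 + 1 is "/".

```python
# Sketch of the classic alphabetic punch scheme: each letter is one zone
# punch (12, 11, or 0) plus one digit punch (1-9). Illustrative only; it is
# not a complete reproduction of any specific machine's character set.

ZONES = {"12": "ABCDEFGHI",   # 12 zone + digits 1-9 -> A..I
         "11": "JKLMNOPQR",   # 11 zone + digits 1-9 -> J..R
         "0":  "STUVWXYZ"}    # 0 zone + digits 2-9 -> S..Z (0 + 1 is "/")

def punches_for_letter(ch):
    """Return the (zone, digit) punch pair for an upper-case letter."""
    for zone, letters in ZONES.items():
        if ch in letters:
            digit = letters.index(ch) + (2 if zone == "0" else 1)
            return (zone, str(digit))
    raise ValueError(f"not a letter: {ch!r}")

print(punches_for_letter("A"))  # ('12', '1')
print(punches_for_letter("S"))  # ('0', '2')
```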
For some computer applications, binary formats were used, where each hole represented a single binary digit (or "bit"), every column (or row) is treated as a simple bit field, and every combination of holes is permitted.
For example, on the IBM 701 and IBM 704, card data was read, using an IBM 711, into memory in row binary format. For each of the twelve rows of the card, 72 of the 80 columns would be read into two 36-bit words; a control panel was used to select the 72 columns to be read. Software would translate this data into the desired form. One convention was to use columns 1 through 72 for data, and columns 73 through 80 to sequentially number the cards, as shown in the picture above of a punched card for FORTRAN. Such numbered cards could be sorted by machine so that if a deck was dropped the sorting machine could be used to arrange it back in order. This convention continued to be used in FORTRAN, even in later systems where the data in all 80 columns could be read.
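The row-binary layout described above can be modeled with a short sketch. This is not IBM 711 code, just an illustration under the assumption that a card is represented as a set of punched (row, column) holes; for each row, 72 selected column bits are packed into two 36-bit words.

```python
# Illustrative "row binary" packing: each card row contributes 72 selected
# column bits, stored as two 36-bit words (most significant bit first).
# The card is modeled as a set of punched (row, column) holes.

def pack_row(holes, row, columns=range(1, 73)):
    """Pack one card row's 72 column bits into two 36-bit words."""
    bits = [1 if (row, col) in holes else 0 for col in columns]
    word1 = word2 = 0
    for b in bits[:36]:
        word1 = (word1 << 1) | b
    for b in bits[36:]:
        word2 = (word2 << 1) | b
    return word1, word2

# A card with a single hole at row 12, column 1 sets the top bit of word 1:
w1, w2 = pack_row({(12, 1)}, 12)
print(hex(w1), hex(w2))  # 0x800000000 0x0
```

The `columns` parameter stands in for the control panel that selected which 72 of the 80 columns were read.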
As a prank punched cards could be made where every possible punch position had a hole. Such "lace cards" lacked structural strength, and would frequently buckle and jam inside the machine.
The IBM 80-column punched card format dominated the industry, becoming known as just IBM cards, even though other companies made cards and equipment to process them.
One of the most common punched card formats is the IBM 5081 card format, a general purpose layout with no field divisions. This format has digits printed on it corresponding to the punch positions of the digits in each of the 80 columns. Other punched card vendors manufactured cards with this same layout and number.
Long cards were available with a scored stub on either end which, when torn off, left an 80 column card. The torn off card is called a "stub card".
80-column cards were available scored, on either end, creating both a "short card" and a "stub card" when torn apart. Short cards can be processed by other IBM machines. A common length for stub cards was 51 columns. Stub cards were used in applications requiring tags, labels, or carbon copies.
According to the IBM Archive: "IBM's Supplies Division introduced the Port-A-Punch in 1958 as a fast, accurate means of manually punching holes in specially scored IBM punched cards. Designed to fit in the pocket, Port-A-Punch made it possible to create punched card documents anywhere. The product was intended for "on-the-spot" recording operations—such as physical inventories, job tickets and statistical surveys—because it eliminated the need for preliminary writing or typing of source documents."
In 1969 IBM introduced a new, smaller, round-hole, 96-column card format along with the IBM System/3 low-end business computer. These cards have tiny (1 mm), circular holes, smaller than those in paper tape. Data is stored in 6-bit BCD, with three rows of 32 characters each, or in 8-bit EBCDIC. In the latter format, each column of the top tiers is combined with two punch rows from the bottom tier to form an 8-bit byte, and the middle tier is combined with two more punch rows, so that each card contains 64 bytes of 8-bit-per-byte binary coded data.
This format was never very widely used; it was IBM-only, and IBM did not support it on any equipment beyond the System/3, where it was quickly superseded by the 1973 IBM 3740 Data Entry System using 8-inch floppy disks.
The Powers/Remington Rand card format was initially the same as Hollerith's; 45 columns and round holes. In 1930, Remington Rand leap-frogged IBM's 80 column format from 1928 by coding two characters in each of the 45 columns – producing what is now commonly called the 90-column card. There are two sets of six rows across each card. The rows in each set are labeled 0, 1/2, 3/4, 5/6, 7/8 and 9. The even numbers in a pair are formed by combining that punch with a 9 punch. Alphabetic and special characters use 3 or more punches.
The British Powers-Samas company used a variety of card formats for their unit record equipment. They began with 45 columns and round holes. Later 36, 40 and 65 column cards were provided. A 130 column card was also available - formed by dividing the card into two rows, each row with 65 columns and each character space with 5 punch positions. A 21 column card was comparable to the IBM Stub card.
IBM's Fred M. Carroll developed a series of rotary presses that were used to produce punched cards, including a 1921 model that operated at 460 cards per minute (cpm). In 1936 he introduced a completely different press that operated at 850 cpm. Carroll's high-speed press, containing a printing cylinder, revolutionized the company's manufacturing of punched cards. It is estimated that between 1930 and 1950, the Carroll press accounted for as much as 25 percent of the company's profits.
Discarded printing plates from these card presses, each printing plate the size of an IBM card and formed into a cylinder, often found use as desk pen/pencil holders, and even today are collectible IBM artifacts (every card layout had its own printing plate).
In the mid-1930s a box of 1,000 cards cost $1.05.
While punched cards have not been widely used for a generation, the impact was so great for most of the 20th century that they still appear from time to time in popular culture. For example:
metaphor... symbol of the "system"—first the registration system and then bureaucratic systems more generally ... a symbol of alienation ... Punched cards were the symbol of information machines, and so they became the symbolic point of attack. Punched cards, used for class registration, were first and foremost a symbol of uniformity. ... A student might feel "he is one of out of 27,500 IBM cards" ... The president of the Undergraduate Association criticized the University as "a machine ... IBM pattern of education."... Robert Blaumer explicated the symbolism: he referred to the "sense of impersonality... symbolized by the IBM technology."...
––Steven Lubar
A common example of the requests often printed on punched cards which were to be individually handled, especially those intended for the public to use and return is "Do Not Fold, Spindle or Mutilate" (in the UK - "Do not bend, spike, fold or mutilate"). Coined by Charles A. Phillips, it became a motto for the post-World War II era (even though many people had no idea what spindle meant), and was widely mocked and satirized. Some 1960s students at Berkeley wore buttons saying: "Do not fold, spindle or mutilate. I am a student". The motto was also used for a 1970 book by Doris Miles Disney with a plot based around an early computer dating service and a 1971 made-for-TV movie based on that book, and a similarly titled 1967 Canadian short film, "Do Not Fold, Staple, Spindle or Mutilate".
Processing of punched cards was handled by a variety of machines, including keypunches, sorters, and tabulating machines.
Philippi
Philippi (Greek: "Philippoi") was a major Greek city northwest of the nearby island of Thasos. Its original name was Crenides (Greek: "Krenides", "Fountains") after its establishment by Thasian colonists in 360/359 BC. The city was renamed by Philip II of Macedon in 356 BC and abandoned in the 14th century after the Ottoman conquest. The present municipality, Filippoi, is located near the ruins of the ancient city and is part of the region of East Macedonia and Thrace in Kavala, Greece. It was classified as a UNESCO World Heritage Site in 2016.
Thasian colonists established a settlement at Krenides (meaning "springs") in Thrace in 360/359 BC near the head of the Aegean Sea at the foot of Mt. Orbelos (now called Mt. Lekani), to the north-west of Kavalla, on the northern border of the marsh that, in antiquity, covered the entire plain separating it from the Pangaion Hills to the south. In 356 BC, King Philip II of Macedon conquered the city and renamed it Philippi.
The Macedonian conquerors of the town aimed to take control of the neighbouring gold mines and to establish a garrison at a strategic passage: the site controlled the route between Amphipolis and Neapolis, part of the great royal route which runs east-west across Macedonia and which the Roman Republic reconstructed in the 2nd century BC as part of the "Via Egnatia". Philip II endowed the city with important fortifications, which partially blocked the passage between the swamp and Mt. Orbelos, and sent colonists to occupy it. Philip also had the marsh partially drained, as the writer Theophrastus ( 371 – 287 BC) attests. Philippi preserved its autonomy within the kingdom of Macedon and had its own political institutions (the "Assembly" of the "demos"). The discovery of new gold mines near the city, at Asyla, contributed to the wealth of the kingdom and Philip established a mint there. The city became fully integrated into the kingdom during the reign (221 to 179 BC) of Philip V of Macedon.
The city contained 2,000 people.
When the Romans destroyed the Antigonid dynasty of Macedon in the Third Macedonian War (168 BC), they divided the kingdom into four separate states ("merides"). Amphipolis (rather than Philippi) became the capital of the eastern Macedonian state.
Almost nothing is known about the city in this period, but archeological remains include walls, the Greek theatre, the foundations of a house under the Roman forum and a little temple dedicated to a hero cult. This monument covers the tomb of a certain Exekestos; it is possibly situated on the agora and is dedicated to the κτίστης ("ktistēs"), the foundation hero of the city.
The city reappears in the sources during the Liberators' civil war (43–42 BC) that followed the assassination of Julius Caesar in 44 BC. Caesar's heirs Mark Antony and Octavian confronted the forces of the assassins Marcus Junius Brutus and Gaius Cassius Longinus at the Battle of Philippi on the plain to the west of the city during October in 42 BC. Antony and Octavian won this final battle against the partisans of the Republic. They released some of their veteran soldiers, probably from Legion XXVIII, to colonize the city, which was refounded as "Colonia Victrix Philippensium". From 30 BC Octavian established his control of the Roman state, becoming Roman emperor from 27 BC. He reorganized the colony and established more settlers there, veterans (possibly from the Praetorian Guard) and other Italians. The city was renamed "Colonia Iulia Philippensis", and then "Colonia Augusta Iulia Philippensis" after January, 27 BC, when Octavian received the title Augustus from the Roman Senate.
Following this second renaming, and perhaps after the first, the territory of Philippi was centuriated (divided into squares of land) and distributed to the colonists. The city kept its Macedonian walls, and its general plan was modified only partially by the construction of a forum, a little to the east of the site of the Greek agora. It was a "miniature Rome", under the municipal law of Rome and governed by two military officers, the "duumviri", who were appointed directly from Rome, as in other Roman colonies.
The colony recognized its dependence on the mines that brought it its privileged position on the "Via Egnatia". Many monuments evidence its wealth - particularly imposing considering the relatively small size of the urban area: the forum, laid out in two terraces on both sides of the main road, was constructed in several phases between the reigns of the Emperors Claudius (41–54 AD) and Antoninus Pius (138–161), and the theatre was enlarged and expanded in order to hold Roman games. An abundance of Latin inscriptions also testifies to the prosperity of the city.
The New Testament records a visit to the city by the apostle Paul during his second missionary journey (likely in AD 49 or 50). On the basis of the Acts of the Apostles and the letter to the Philippians, early Christians concluded that Paul had founded their community. Accompanied by Silas, by Timothy and possibly by Luke (the author of the Acts of the Apostles), Paul is believed to have preached for the first time on European soil in Philippi. According to the New Testament, Paul visited the city on two other occasions, in 56 and 57. The Epistle to the Philippians dates from around 61–62 and is believed to show the immediate effects of Paul's instruction.
The development of Christianity in Philippi is indicated by a letter from Polycarp of Smyrna addressed to the community in Philippi around AD 160 and by funerary inscriptions.
The first church described in the city is a small building that was probably originally a small prayer-house. This "Basilica of Paul", identified by a mosaic inscription on the pavement, is dated around 343 from a mention by the bishop Porphyrios, who attended the Council of Serdica that year.
Despite Philippi having one of the oldest congregations in Europe, attestation of a bishopric dates only from the 4th century.
The prosperity of the city in the 5th and 6th centuries was attributed to Paul and to his ministry. As in other cities, many new ecclesiastical buildings were constructed at this time. Seven different churches were built in Philippi between the mid-4th century and the end of the 6th, some of which competed in size and decoration with the most beautiful buildings in Thessalonica, or with those of Constantinople. The relationship of the plan and of the architectural decoration of Basilica B with Hagia Sophia and with Saint Irene in Constantinople accorded a privileged place to this church in the history of early Christian art. The complex cathedral which took the place of the Basilica of Paul at the end of the 5th century, constructed around an octagonal church, also rivaled the churches of Constantinople.
In the same age, the Empire rebuilt the fortifications of the city in order to better defend against growing instability in the Balkans. In 473 Ostrogothic troops of Theodoric Strabo besieged the city; they failed to take it but burned down the surrounding villages.
Already weakened by the Slavic invasions at the end of the 6th century - which ruined the agrarian economy of Macedonia - and probably also by the Plague of Justinian in 547, the city was almost totally destroyed by an earthquake around 619, from which it never recovered. There was a small amount of activity there in the 7th century, but the city was now hardly more than a village.
The Byzantine Empire possibly maintained a garrison there, but in 838 the Bulgarians under "kavhan" Isbul took the city and celebrated their victory with a monumental inscription on the stylobate in Basilica B, now partially in ruins. The site of Philippi was so strategically sound that the Byzantines attempted to recapture it around 850. Several seals of civil servants and other Byzantine officials, dated to the first half of the 9th century, prove the presence of Byzantine armies in the city.
Around 969, Emperor Nicephorus II Phocas rebuilt the fortifications on the acropolis and in part of the city. These gradually helped to weaken Bulgar power and to strengthen the Byzantine presence in the area. In 1077 Bishop Basil Kartzimopoulos rebuilt part of the defenses inside the city. The city began to prosper once more, as witnessed by the Arab geographer Al Idrisi, who mentions it as a centre of business and wine production around 1150.
After a brief occupation by the Franks after the Fourth Crusade and the capture of Constantinople in 1204, the city was captured by the Serbs. Still, it remained a notable fortification on the route of the ancient "Via Egnatia"; in 1354, the pretender to the Byzantine throne, Matthew Cantacuzenus, was captured there by the Serbs.
The city was abandoned at an unknown date. When the French traveller Pierre Belon visited the area in the 1540s there remained nothing but ruins, used by the Turks as a quarry. The name of the city survived - at first in a Turkish village on the nearby plain, Philibedjik (Filibecik, "Little Filibe" in Turkish), which has since disappeared, and then in a Greek village in the mountains.
Noted or briefly described by 16th century travellers, the first archaeological description of the city was made in 1856 by Perrot, then in 1861 by Léon Heuzey and Henri Daumet in their famous "Mission archéologique de Macédoine". The first excavations did not begin until the summer of 1914, and were soon interrupted by the First World War. The excavations, carried out by the École française d'Athènes, were renewed in 1920 and continued until 1937. During this time the Greek theatre, the forum, Basilicas A and B, the baths and the walls were excavated. After the Second World War, Greek archaeologists returned to the site. From 1958 to 1978, the Société Archéologique, then the Service archéologique and the University of Thessalonica uncovered the bishop's quarter and the octagonal church, large private residences, a new basilica near the Museum and two others in the necropolis to the east of the city.
Translated from the original article, retrieved February 11, 2005.
Victoria, Crown Princess of Sweden
Victoria, Crown Princess of Sweden, Duchess of Västergötland (Victoria Ingrid Alice Désirée, born 14 July 1977) is the heir apparent to the Swedish throne, as the eldest child of King Carl XVI Gustaf. If she ascends to the throne as expected, she would be Sweden's fourth queen regnant (after Margaret, Christina and Ulrika Eleonora) and the first since 1720.
Victoria was born on 14 July 1977 at 21:45 CET at the Karolinska Hospital in Solna, Stockholm County, Sweden, and is the oldest child of King Carl XVI Gustaf and Queen Silvia. She is a member of the House of Bernadotte. Born as a princess of Sweden, she was designated crown princess in 1979 (SFS 1979:932) ahead of her younger brother. Her place as first in the line of succession formally went into effect on 1 January 1980 with the parliamentary change to the Act of Succession that introduced absolute primogeniture.
Her given names honour various relatives. Her first name comes primarily from her great-great-grandmother Victoria of Baden, queen consort of Sweden. Her other names honour her great-aunt Ingrid of Sweden; her maternal grandmother, Alice Soares de Toledo; her ancestor Désirée Clary, queen consort of Sweden; and her paternal aunt and godmother, Princess Désirée.
She was baptised at The Royal Palace Church on 27 September 1977. Her godparents were Crown Prince Harald of Norway (later king of Norway), her maternal uncle, Ralf Sommerlath, Princess Beatrix of the Netherlands (later queen of the Netherlands, 1980–2013), and her aunt Princess Désirée, Baroness Silfverschiöld. The Crown Princess was confirmed in the summer of 1992 at Räpplinge church on the island of Öland.
Victoria studied for a year (1996–97) at the Catholic University of the West at Angers in France, and in the fall term of 1997 participated in a special program following the work of the "Riksdag". From 1998 to 2000, Victoria resided in the United States, where she studied various subjects at Yale University, New Haven, Connecticut.
In May 1999, she was an intern at the Swedish Embassy in Washington, D.C. Victoria completed a study program at the Government Offices in 2001. In 2003, Victoria's education continued with visits to Swedish businesses, a study and intern program in agriculture and forestry, as well as completion of the basic soldier training at SWEDINT (the Swedish Armed Forces International Centre).
In 2006, Victoria enrolled in the Ministry for Foreign Affairs' Diplomat Program, running from September 2006 to June 2007. The program is a training program for young future diplomats and gives an insight to the ministry's work, Swedish foreign and security policies and Sweden's relations with the rest of the world. In June 2009, she graduated with a Bachelor of Arts degree from Uppsala University.
She speaks Swedish, English, French and German.
Victoria was made crown princess on 1 January 1980 by the 1979 change to the Act of Succession of 1810 ("Successionsordningen"). This constitutional reform introduced absolute primogeniture, meaning that the throne would be inherited by the monarch's eldest child without regard to gender. King Carl XVI Gustaf objected to the reform after it occurred—not because he objected to women entering the line of succession, but because he was upset about his son being stripped of the crown prince status he had held since birth.
When she became heir, she also was made Duchess of Västergötland, one of the historical provinces of Sweden. Prior to this constitutional change, the heir apparent to the throne was her younger brother, Carl Philip. He is now fourth in line to the throne, behind Victoria and her children.
Victoria's declaration of majority took place in the Hall of State at the Royal Palace of Stockholm on 14 July 1995. From the day she turned 18, she became eligible to act as Head of State when the King is out of the country. Victoria made her first public speech on this occasion. On the dais in the background was the same silver throne on which her father was seated at his enthronement, in actual use from 1650 up until this ceremony.
As heir apparent to the throne, Victoria is a working member of the Swedish Royal Family with her own agenda of official engagements. Victoria attends the regular Advisory Council on Foreign Affairs and the information councils with Government ministers headed by the King, and steps in as a temporary regent (Riksföreståndare) when needed.
Victoria has made many official trips abroad as a representative of Sweden. Her first major official visit on her own was to Japan in 2001, where she promoted Swedish tourism, design, music, gastronomy and environmental sustainability during the "Swedish Style" event. That same year, Victoria also travelled to the West Coast of the United States, where she participated in the celebrations of the Nobel centenary.
In 2002, she paid official visits to the United States, Spain, Uganda, Ethiopia, and Kosovo, where she visited Camp Victoria. In 2003, she made official visits to Egypt and the United States. In early 2004, she paid an official visit to Saudi Arabia, as a part of a large official business delegation from Sweden, and in October 2004, she travelled to Hungary.
Crown Princess Victoria was given her own household in October 2004. It is headed by the Marshal of the Court, and serves to coordinate the official engagements of The Crown Princess.
In January 2005, Victoria made a long official visit to Australia, promoting Swedish style and businesses, and in April she visited Bangladesh and Sri Lanka to follow aid work and become informed about the work in the aftermath of the tsunami. In April 2005, Victoria made an official visit to Japan where she visited the Expo 2005 in Aichi, laid the foundation for a new IKEA store in Yokohama together with Princess Takamado and met with Emperor Akihito, Empress Michiko, Crown Prince Naruhito and Sayako Kuroda. In June 2005, Victoria travelled to Turkey on an official visit where she participated in the Swedish Business Seminar and Sweden Day celebrations in Ankara, organised by the Swedish Embassy in Ankara and the Swedish Trade Council in Istanbul. Victoria also visited historic sights such as the Blue Mosque, Topkapı Palace and Hagia Sophia. This was the first official Royal visit from Sweden to Turkey since 1934. In September 2005, she made an official visit to China.
In March 2006, Victoria made an official visit to Brazil where she followed the Volvo Ocean Race and visited projects supported by the World Childhood Foundation, such as the Abrigo Rainha Sílvia. In December, she paid a four-day official visit to Paris where she attended a French-Swedish soirée arranged by the Swedish Chamber of Commerce, the Swedish Trade Council and the Swedish Embassy, during which she also awarded the Prix d’Excellence 2006. The visit to Paris also included events with the Swedish Club in Paris, attendance at a church service in the Sofia Church (the Swedish church in Paris), a study visit to the OECD headquarters and meetings with the Secretary-General José Ángel Gurría, the Swedish Ambassador to the OECD, Gun-Britt Andersson, and other senior officials. She also attended a gala dinner hosted by La Fondation Pour L’Enfance at Versailles.
She is a member of the Honorary Board of the International Paralympic Committee.
In 2011, it was announced that Victoria would continue working throughout her pregnancy. In 2012, she took her maternity leave one day prior to the birth of her daughter Estelle and her husband Daniel revealed that he would take his paternity leave and switch parental roles with Victoria when Estelle began preschool.
In January 2016, UN Secretary General Ban Ki-moon appointed The Crown Princess as a member of Sustainable Development Goals Advocates for Agenda 2030. The Crown Princess is therefore one of 16 ambassadors in the Sustainable Development Goals (SDG) Advocacy Group. The group's task is to promote the UN's Sustainable Development Goals – Agenda 2030 – in various ways. The Crown Princess primarily works with issues concerning water and health.
The Crown Princess Victoria's Fund was set up in 1997 and is run as a part of Radiohjälpen, the fundraising branch of Sveriges Television and Sveriges Radio. The fund's aim is to provide support for leisure and recreational activities for children and young people with functional disabilities or chronic illnesses.
The Crown Princess Victoria Fund's means mainly derive from donations by the public, but large companies such as Arla Foods, Swedbank and AB Svenska Returpack are constant sponsor partners. Additional support comes from The Association of Swedish Bakers & Confectioners who every year arrange a national “princess cake week” during which the participating cafés and bakeries give 2.50 SEK per sold princess pastry and 10 SEK per sold princess cake to the fund. The result of this fund-raising drive is usually presented to Victoria herself on her name day on 12 March every year; in 2007, the total amount was 200,000 SEK. Congratulatory and memorial cards are also issued by Radiohjälpen benefitting the fund, a simple way to pay respects and do a good deed in one act. In 2006, The Crown Princess Victoria Fund raised a total of 5.5 million SEK.
Every year Victoria visits one or several clubs or projects that have been granted money. These visits are not announced via the official royal diary but kept private; however, Sveriges Television often accompanies her and airs short programs from these visits at some time during the year.
Victoria's first boyfriend was Daniel Collert. They socialised in the same circles, went to the same school and were already friends when their romance developed in the mid-1990s. When Victoria moved to the United States in 1998 to study and recover from her eating disorders, Collert moved with her across the Atlantic and settled in New York. In September 2000, Victoria's relationship with Collert was confirmed in an interview with her at Expo 2000. The relationship ended in 2001.
In May 2002, Swedish newspaper "Expressen" reported that Victoria had a new boyfriend, her personal trainer at Master Training, Daniel Westling. When the news broke and the media turned its attention on him, it was obvious that he did not like being in the public eye. Once Westling was photographed crossing a street against a red light in order to avoid a camera. In July 2002, Victoria and Daniel Westling were pictured kissing for the first time at a birthday party for Caroline Kreuger, a close friend of Victoria.
In a popular personal report called "Tre dagar med Victoria", which profiled her work during a three-day period that aired on TV4 in December 2004, Victoria commented on criticism directed at Westling, “Many unfair things are written. I understand that there is speculation, but some day justice will be done there, too.” Victoria also gave her opinion that happiness is important, and that these days it is not so much about background and pedigree but about two people who have to live with each other. She said that if they are not happy and comfortable with each other, it is impossible to do a good job.
Swedish media often speculated about upcoming engagements and marriages for Victoria. On 24 February 2009, rumours that wedding plans were imminent became particularly intense preceding an information council between the King and Prime Minister Fredrik Reinfeldt. Under the terms of the Swedish Act of Succession, the Government, upon the request of the King, gives the final consent for a dynastic marriage of a prince or princess of Sweden. The prince or princess otherwise loses their right to the throne. Later that day, it was confirmed that permission had been granted and that Victoria would marry Daniel Westling in the summer of 2010. The wedding date was set in Stockholm Cathedral for 19 June 2010, the 34th anniversary of her parents' marriage. Her engagement ring features a solitaire round brilliant-cut diamond mounted on white gold.
The wedding took place on 19 June 2010. Guests including royalty and ambassadors from various countries were invited to the wedding ceremony which took place at Stockholm Cathedral. After the wedding the newlyweds were driven through Stockholm in a coach and then rowed in the antique royal barge "Vasaorden" to the royal palace where the wedding banquet was held. On the evening before the wedding, there was a gala concert dedicated to the couple in the Stockholm Concert Hall.
On 17 August 2011, the Swedish royal court announced that Crown Princess Victoria was pregnant and expecting the couple's first child in March 2012. On 23 February 2012, Victoria gave birth to Princess Estelle, Duchess of Östergötland, in the Karolinska University Hospital. Their second child, Prince Oscar, Duke of Skåne, was born on 2 March 2016 in the same hospital.
Victoria has dyslexia, as do her father King Carl XVI Gustaf and her brother Prince Carl Philip.
In 1996, it became evident that Victoria suffered from anorexia, though this was not publicly confirmed until the following year. Already at that time she was getting professional help, but given her public position in Sweden it was becoming increasingly difficult to handle the situation. Victoria had planned to study at Uppsala University, but after intense media speculation and public discussion when pictures of an evidently emaciated Victoria in sleeveless dresses at the Order of the Innocence's ball and the gala dinner for the incoming state visit from Austria surfaced in April 1997, the Royal Court decided to confirm what was feared.
After a press release from the Royal Court in November 1997 announced that Victoria had eating disorders, plans changed for her and she moved to the United States where she received professional help and studied at Yale University. By making this drastic decision, Victoria was able to live an anonymous life while getting professional help and recovering, without having to worry about media speculation or being recognized on the street.
In June 1999, Victoria said, "It was a really hard time. This kind of illness is hard, not only for the individual but also for the people close to him or her. Today I'm fine."
In November 2002, the book "Victoria, Victoria!" came out, speaking further about her eating disorder. Victoria said: "I felt like an accelerating train, going right down... during the whole period. I had eating disorders and was aware of it, my anguish was enormous. I really hated how I looked like, how I was... I, Victoria, didn’t exist. It felt like everything in my life and around me was controlled by others. The one thing I could control was the food I put in me". She further said that "What happened cost and I was the one who stood for the payments. Now I’m feeling well and with the insights I’ve acquired through this I can hopefully help someone else".
Victoria suffers from prosopagnosia, which makes it difficult for her to recognize familiar faces. In an interview in 2008, she called it a "big drawback" in her role because she finds it very hard to remember names and faces.
Pope Zosimus
Pope Zosimus was the bishop of Rome from 18 March 417 to his death on 26 December 418. He was born in Mesoraca, Calabria. Zosimus took a decided part in the protracted dispute in Gaul as to the jurisdiction of the See of Arles over that of Vienne, giving energetic decisions in favour of the former, but without settling the controversy. His fractious temper coloured all the controversies in which he took part, in Gaul, Africa and Italy, including Rome, where at his death the clergy were very much divided.
According to the "Liber Pontificalis", Zosimus was a Greek and his father's name was Abramius. Historian Adolf von Harnack deduced from this that the family was of Jewish origin, but this has been rejected by Louis Duchesne.
The consecration of Zosimus as bishop of Rome took place on 18 March 417. The ceremony was attended by Bishop Patroclus of Arles, who had been raised to that see in place of Bishop Heros of Arles, who had been deposed by Constantius III. Patroclus gained the confidence of the new pope at once; as early as 22 March he received a papal letter which conferred upon him the rights of a metropolitan over all the bishops of the Gallic provinces of Viennensis and Narbonensis I and II. In addition, he was made a kind of papal vicar for the whole of Gaul, with no Gallic ecclesiastic being permitted to journey to Rome without bringing with him a certificate of identity from Patroclus.
In the year 400, Arles had been substituted for Trier as the residence of the chief government official of the civil Diocese of Gaul, the "Prefectus Praetorio Galliarum". Patroclus, who enjoyed the support of the commander Constantius, used this opportunity to procure for himself the position of supremacy above mentioned, by winning over Zosimus to his ideas. The bishops of Vienne, Narbonne, and Marseille regarded this elevation of the See of Arles as an infringement of their rights, and raised objections which occasioned several letters from Zosimus. The dispute, however, was not settled until the pontificate of Pope Leo I.
Caelestius, a proponent of Pelagianism who had been condemned by the preceding pope, Innocent I, came to Rome to appeal to the new pope, having been expelled from Constantinople. In the summer of 417, Zosimus held a meeting of the Roman clergy in the Basilica of St. Clement before which Caelestius appeared. The propositions drawn up by the deacon Paulinus of Milan, on account of which Caelestius had been condemned at Carthage in 411, were laid before him. Caelestius refused to condemn these propositions, at the same time declaring in general that he accepted the doctrine expounded in the letters of Pope Innocent and making a confession of faith which was approved. The pope was won over by the conduct of Caelestius, and said that it was not certain whether he had really maintained the false doctrine rejected by Innocent, and therefore Zosimus considered the action of the African bishops against Caelestius too hasty. He wrote at once in this sense to the bishops of the African province, and called upon those who had anything to bring against Caelestius to appear at Rome within two months.
After he received from Pelagius a confession of faith, together with a new treatise on free will, Zosimus held a new synod of the Roman clergy, before which both these writings were read. The assembly held the statements to be orthodox, and Zosimus again wrote to the African bishops defending Pelagius and reproving his accusers, among whom were the Gallic bishops Heros and Lazarus. Archbishop Aurelius of Carthage quickly called a synod, which sent a reply to Zosimus in which it was argued that the pope had been deceived by heretics. In his answer Zosimus declared that he had settled nothing definitely, and wished to settle nothing without consulting the African bishops. After the new synodal letter of the African council of 1 May 418 to the pope, and after the steps taken by the emperor Honorius against the Pelagians, Zosimus issued his "Tractoria", in which Pelagianism and its authors were finally condemned.
Shortly after this, Zosimus became involved in a dispute with the African bishops in regard to the right of clerics who had been condemned by their bishops to appeal to the Roman See. When the priest Apiarius of Sicca had been excommunicated by his bishop on account of his crimes, he appealed directly to the pope, without regard to the regular course of appeal in Africa, which was exactly prescribed. The pope at once accepted the appeal, and sent legates with credentials to Africa to investigate the matter. Another, potentially wiser, course would have been to have first referred the case of Apiarius to the ordinary course of appeal in Africa itself. Zosimus next made the further mistake of basing his action on a reputed canon of the First Council of Nicaea, which was in reality a canon of the Council of Sardica. In the Roman manuscripts the canons of Sardica followed those of Nicaea immediately, without an independent title, while the African manuscripts contained only the genuine canons of Nicaea, so that the canon appealed to by Zosimus was not contained in the African copies of the Nicene canons. This mistake ignited a serious disagreement over the appeal, which continued after the death of Zosimus.
Besides the writings of the pope already mentioned, there are extant other letters to the bishops of the Byzantine province in Africa, in regard to a deposed bishop, and to the bishops of Gaul and Spain in respect to Priscillianism and ordination to the different grades of the clergy. The "Liber Pontificalis" attributes to Zosimus a decree on the wearing of the maniple by deacons, and on the dedication of Easter candles in the country parishes; also a decree forbidding clerics to visit taverns. Zosimus was buried in the sepulchral Basilica of Saint Lawrence outside the Walls.
Pope Innocent IV
Pope Innocent IV (c. 1195 – 7 December 1254), born Sinibaldo Fieschi, was the head of the Catholic Church from 25 June 1243 to his death in 1254.
Fieschi was born in Genoa and studied at the universities of Parma and Bologna. Considered a fine canonist, he served in the Curia for Pope Honorius III. Pope Gregory IX made Fieschi a cardinal and appointed him governor of the March of Ancona in 1235. He was elected pope in 1243 and took the name Innocent IV. He inherited an ongoing dispute over lands seized by the Holy Roman Emperor, and the following year relocated to France to escape imperial plots against him in Rome. He returned to Rome after the death of the Emperor in 1250.
Born in Genoa (although some sources say Manarola) in an unknown year, Sinibaldo was the son of Beatrice Grillo and Ugo Fieschi, Count of Lavagna. The Fieschi were a noble merchant family of Liguria. Sinibaldo received his education at the universities of Parma and Bologna and, for a time, taught canon law at Bologna. It is pointed out by Agostino Paravicini-Bagliani, however, that there is no "documentary" evidence of such a professorship. From 1216 to 1227 he was Canon of the Cathedral of Parma. He was considered one of the best canonists of his time, and was called to serve Pope Honorius III in the Roman Curia as "Auditor causarum", from 11 November 1226 to 30 May 1227. He was then promoted to the office of Vice-Chancellor of the Holy Roman Church (from 31 May to 23 September 1227), though he retained the office and the title for a time after he was named Cardinal.
Vice-Chancellor Sinibaldo Fieschi was created Cardinal-Priest of San Lorenzo in Lucina on 18 September 1227 by Pope Gregory IX (1227-1241). He later served as papal governor of the March of Ancona, from 17 October 1235 until 1240.
It is widely repeated, from the 17th century on, that he became bishop of Albenga in 1235, but there is no foundation to this claim.
Innocent's immediate predecessor was Pope Celestine IV, elected 25 October 1241, whose reign lasted a mere fifteen days. The events of Innocent IV's pontificate are therefore inextricably linked to the policies dominating the reigns of popes Innocent III, Honorius III and Gregory IX.
Gregory had been demanding the return of portions of the Papal States taken over by Holy Roman Emperor Frederick II when he died. The Pope had called a general council so he could depose the emperor with the support of Europe's spiritual leaders, but Frederick had seized two cardinals traveling to the council in hopes of intimidating the curia. The two prelates remained incarcerated and missed the conclave that immediately elected Celestine. The conclave that reconvened after his death fell into camps supporting contradictory policies about how to treat with the emperor.
After a year and a half of contentious debate and coercion, the papal election finally reached a unanimous decision. Cardinal de' Fieschi very reluctantly accepted election as Pope on 25 June 1243, taking the name Innocent IV. As Cardinal de' Fieschi, Sinibaldo had been on friendly terms with Frederick, even after his excommunication. The Emperor also greatly admired the cardinal's wisdom, having enjoyed discussions with him from time to time.
Following the election the witty Frederick remarked that he had lost the friendship of a cardinal but made up for it by gaining the enmity of a pope.
His jest notwithstanding, Frederick's letter to the new pontiff was couched in respectful terms, offering Innocent congratulations and success, also expressing hope for an amicable settlement of the differences between the empire and the papacy. Negotiations leading to this objective began shortly afterwards, but proved abortive. Innocent refused to back down from his demands, Frederick II refused to acquiesce, and the dispute continued, its major point of contention being the reinstatement of Lombardy to the Patrimony of St Peter.
The Emperor's machinations caused a good deal of anti-papal feeling to rise in Italy, particularly in the Papal States, and imperial agents encouraged plots against papal rule. Realizing how untenable his position in Rome was growing, Innocent IV secretly and hurriedly withdrew, fleeing Rome on 7 June 1244. Traveling in disguise, Innocent made his way to Sutri and Civitavecchia, to Genoa, his birthplace, where he arrived on 7 July. From there, on 5 October, he fled to France, where he was joyously welcomed. Making his way to Lyon, where he arrived on 29 November 1244, Innocent was happily greeted by the magistrates of the city.
Finding himself now in secure surroundings and out of the reach of Frederick II, Innocent summoned, in a sermon preached on 27 December 1244, as many bishops as could get to Lyon (140 bishops were present), to attend what became the 13th General (Ecumenical) Council of the Church, the first to be held in Lyon. The bishops met for three public sessions: 28 June, 5 July, and 17 July 1245. Their principal business was to subjugate the Emperor Frederick II.
An earlier pope, Gregory IX (1227-1241), had issued letters on 9 June 1239, ordering all the bishops of France to confiscate all Talmuds in the possession of the Jews. Agents were to raid each synagogue on the first Saturday of Lent of 1240, and seize the books, placing them in the custody of the Dominicans or the Franciscans. The Bishop of Paris was ordered to see to it that copies of the Pope's mandate reached all the bishops of France, England, Aragon, Navarre, Castile and León, and Portugal. On 20 June 1239, there was another letter, addressed to the Bishop of Paris, the Prior of the Dominicans and the Minister of the Franciscans, calling for the burning of all copies of the Talmud, and any obstructionists to be visited with ecclesiastical censures. On the same day he wrote to the King of Portugal ordering him to see to it that all copies of the Talmud be seized and turned over to the Dominicans or Franciscans. On account of these letters, King Louis IX of France held a trial in Paris in 1240, which ultimately found the Talmud guilty of 35 alleged charges; 24 cartloads of the Talmud were burned.
Initially, Innocent IV continued Gregory IX's policy. In a letter of 9 May 1244, he wrote to King Louis IX, ordering the Talmud and any books with Talmudic glosses to be examined by the Regent Doctors of the University of Paris, and if condemned by them, to be burned. However, an argument was presented that this policy was a negation of the Church's traditional stance of tolerance toward Judaism. On 5 July 1247, Pope Innocent wrote to the bishops of Germany and the Bishops of Gallia (France) that, because both ecclesiastical and lay persons were lawlessly plundering the property of the Jews, and falsely stating that at Eastertime they sacrificed and ate the heart of a little child, the bishops should see to it that the Jews not be attacked or molested because of these or other reasons. In the year 1247, in a letter of 2 August to King Louis of France, he reversed his stance on the Talmud, and wrote letters to the effect that the Talmud should be censored rather than burned. Innocent IV's words were met with the disapproval of Odo of Châteauroux, Cardinal Bishop of Tusculum and former Chancellor of the University of Paris. Nonetheless, Pope Innocent IV's policy was continued by subsequent popes.
The First Council of Lyon of 1245 had the fewest participants of any General Council before it. However three patriarchs and the Latin emperor of Constantinople attended, along with about 150 bishops, most of them prelates from France and Spain. They were able to come quickly, and Innocent could rely on their help. Bishops from the rest of Europe outside Spain and France feared retribution from Frederick, while many other bishops were prevented from attending either by the invasions of the Mongols (Tartars) in the Far East or Muslim incursions in the Middle East.
In session, Frederick II's position was defended by Taddeo of Suessa, who renewed in his master's name all the promises made before, but refused to give the guarantees the pope demanded. Unable to end the impasse Taddeo was horrified to hear the fathers of the Council solemnly depose and excommunicate the Emperor on 17 July, while absolving all his subjects from allegiance.
The political agitation over these acts convulsed Europe. The turmoil relaxed only with Frederick's death in December 1250, which removed the proximate threat to Innocent's life and permitted his return to Italy. He departed Lyon on 19 April 1251, and arrived in Genoa on 18 May. On 1 July, he was at Milan, accompanied by only three cardinals and the Latin Patriarch of Constantinople. He stayed there until mid-September, when he began an inspection tour of Lombardy, heading for Bologna. On 5 November, he reached Perugia. From 1251 to 1253 the Pope stayed at Perugia until it was safe for him to bring the papal court back to Rome. He finally saw Rome again in the first week of October 1253. He left Rome on 27 April 1254, for Assisi and then Anagni. He immediately threw himself into the problems surrounding the succession to the possessions of Frederick II, both as German Emperor and as King of Sicily. In both cases, Innocent continued Pope Gregory IX's policy of opposition to the Hohenstaufen, supporting whatever opposition there could be found to that House. This papal stance embroiled Italy in one conflict after another for the next three decades. Innocent IV himself, following after the papal army which was seeking to destroy Frederick's son Manfred, died in Naples on 7 December 1254.
While in Perugia, on 15 May 1252, Innocent IV issued the papal bull "Ad extirpanda", composed of thirty-eight 'laws', which advised civil authorities in Italy to treat heretics as criminals and prescribed parameters limiting the use of torture to compel disclosures "as thieves and robbers of material goods are made to accuse their accomplices and confess the crimes they have committed."
As Innocent III had before him, Innocent IV saw himself as the Vicar of Christ, whose power was above earthly kings. Innocent, therefore, had no objection to intervening in purely secular matters. He appointed Afonso III administrator of Portugal, and lent his protection to Ottokar, the son of the King of Bohemia. The Pope even sided with King Henry III against both nobles and bishops of England, despite the king's harassment of Edmund Rich, the Archbishop of Canterbury and Primate of All England, and the royal policy of having the income of a vacant bishopric or benefice delivered to the royal coffers, rather than handed over to a papal Administrator (usually a member of the Curia) or a Papal collector of revenue, or delivered directly to the Pope.
The warlike tendencies of the Mongols also concerned the Pope, and he sent a papal nuncio to the Mongol Empire in an attempt to strike an agreement. Innocent decreed that he, as Vicar of Christ, could make non-Christians accept his dominion and even exact punishment should they violate the non-God centered commands of the Ten Commandments. This policy was held more in theory than in practice and was eventually repudiated centuries later.
The papal preoccupation with imperial matters and secular princes caused the spirituality of the Church to suffer. Taxation increased in the Papal States and the complaints of the inhabitants grew considerably.
In August 1253, after much worry about the order's insistence on absolute poverty, Innocent finally approved the rule of the 2nd Order of the Franciscans, the Poor Clares, founded by St. Clare of Assisi, the friend of St Francis.
In 1246 Edmund Rich, former archbishop of Canterbury (died 1240), was named a saint. In 1250 Innocent proclaimed the pious Queen Margaret (died 1093), wife of King Malcolm III of Scotland, a saint. The Dominican priest Peter of Verona, martyred by Albigensian heretics in 1252, was canonized, as was Stanislaus of Szczepanów, the Polish Bishop of Cracow, both in 1253.
Innocent IV is often credited as helping to create the idea of legal personality, "persona ficta" as it was originally written, which has led to the idea of corporate personhood. This allowed monasteries and universities to act as single legal entities, allowing for their existence to be more continuous and for monks pledged to poverty to nonetheless be part of an organization that could own infrastructure; but as "fictional people" they could not be excommunicated or considered guilty of delict, that is, negligence to action that is not contractually required. This meant that punishment of individuals within an organization would reflect less on the organization itself than it would if the person running such an organization was said to own it rather than be a constituent of it, and was meant to provide stability.
Innocent IV was responsible for the eventual deposition of King Sancho II of Portugal at the request of his brother Afonso (later King Afonso III of Portugal). One of the arguments he used against Sancho II in his "Grandi non immerito" text was his status as a minor upon inheriting the throne from his father Afonso II.
In 1245, Innocent IV issued bulls and sent an envoy in the person of Giovanni da Pian del Carpine (accompanied by Benedict the Pole) to the "Emperor of the Tartars". The message asked the Mongol ruler to become a Christian and stop his aggression against Europe. The Khan Güyük replied in 1246 in a letter written in Persian that is still preserved in the Vatican Library, demanding the submission of the Pope and the other rulers of Europe.
In 1245 Innocent had sent another mission, through another route, led by Ascelin of Lombardia, also bearing letters. The mission met with the Mongol ruler Baichu near the Caspian Sea in 1247. The reply of Baichu was in accordance with that of Güyük, but it was accompanied by two Mongolian envoys to the Papal seat in Lyon, Aïbeg and Serkis. In the letter Güyük demanded that the Pope appear in person at the Mongol imperial headquarters, Karakorum, in order that “we might cause him to hear every command that there is of the jasaq”. The envoys met with Innocent IV in 1248, who again appealed to the Mongols to stop their killing of Christians.
Innocent IV would also send other missions to the Mongols in 1245: the mission of André de Longjumeau and the possibly aborted mission of Laurent de Portugal.
The remainder of Innocent's life was largely directed to schemes for compassing the overthrow of Manfred of Sicily, the natural son of Frederick II, whom the towns and the nobility had for the most part received as his father's successor. Innocent aimed to incorporate the whole Kingdom of Sicily into the Papal States, but he lacked the necessary economic and political power. Therefore, after a failed agreement with Charles of Anjou, he invested Edmund Crouchback, the nine-year-old son of King Henry III of England, with that kingdom on 14 May 1254.
In the same year, Innocent excommunicated Frederick II's other son, Conrad IV, King of Germany, but the latter died a few days after the investiture of Edmund. At the beginning of June, 1254, Innocent moved to Anagni, where he awaited Manfred's reaction to the event, especially considering that Conrad's heir, Conradin, had been entrusted to Papal tutelage by King Conrad's testament. Manfred submitted, although probably only to gain time and counter the menace from Edmund, and accepted the title of papal vicar for southern Italy. Innocent could therefore enjoy a moment in which he was the acknowledged sovereign, in theory at least, of most of the peninsula. Innocent overplayed his hand, however, by accepting the fealty of Amalfi directly to the Papacy instead of to the Kingdom of Sicily on 23 October. Manfred immediately, on October 26, fled from Teano, where he had established his headquarters, and headed to Lucera to his Saracen troops.
Manfred had not lost his nerve, and organized resistance to papal aggression. Supported by his faithful Saracen troops, he began using military force to make rebellious barons and towns submit to his authority as Regent for his nephew. Realizing that Manfred had no intention of submitting to the Papacy or to anyone else, Innocent and his papal army headed south from his summer residence at Anagni on October 8, intending to confront Manfred's forces. On 27 October 1254 the Pope entered the city of Naples. It was on a sick bed at Naples that Innocent IV heard of Manfred's victory at Foggia on December 2 against the Papal forces, led by the new Papal Legate, Cardinal Guglielmo Fieschi, the Pope's nephew. The tidings are said to have precipitated Pope Innocent's death on 7 December 1254 in Naples. From triumph to disaster had taken only a few months.
Innocent's learning gave to the world an "Apparatus in quinque libros decretalium", a commentary on papal decrees. He is also remembered for issuing the papal bull "Ad extirpanda", which authorized the use of torture by the Inquisition for eliciting confessions from heretics.
Shortly after Innocent's election as pope, his nephew Opizzo was elevated to the Latin Patriarchate of Antioch. In December 1251, Innocent IV appointed another nephew, Ottobuono, Cardinal Deacon of S. Adriano. Ottobuono was elected Pope Adrian V in 1276.
Innocent was succeeded by Pope Alexander IV (Rinaldo de' Conti).
Paranasal sinuses
Paranasal sinuses are a group of four paired air-filled spaces that surround the nasal cavity. The maxillary sinuses are located under the eyes; the frontal sinuses are above the eyes; the ethmoidal sinuses are between the eyes and the sphenoidal sinuses are behind the eyes. The sinuses are named for the facial bones in which they are located.
Humans possess four paired paranasal sinuses, divided into subgroups named according to the bones within which the sinuses lie: the maxillary, frontal, ethmoidal, and sphenoidal sinuses.
The paranasal air sinuses are lined with respiratory epithelium (ciliated pseudostratified columnar epithelium).
Paranasal sinuses form developmentally through excavation of bone by air-filled sacs (pneumatic diverticula) from the nasal cavity. This process begins prenatally (intrauterine life), and it continues through the course of an organism's lifetime.
The results of experimental studies suggest that the natural ventilation rate of a sinus with a single sinus ostium (opening) is extremely slow. Such limited ventilation may be protective for the sinus, as it would help prevent drying of its mucosal surface and maintain a near-sterile environment with high carbon dioxide concentrations and minimal pathogen access. The gas composition in the maxillary sinus is thus similar to that of venous blood, with higher carbon dioxide and lower oxygen levels than in breathed air.
At birth only the maxillary and ethmoid sinuses are developed, but they are not yet pneumatized; only by the age of seven are they fully aerated. The sphenoid sinus appears at the age of three; the frontal sinuses first appear at the age of six and develop fully during adulthood.
The paranasal sinuses are joined to the nasal cavity via small orifices called ostia. These become blocked easily by allergic inflammation, or by swelling in the nasal lining that occurs with a cold. If this happens, normal drainage of mucus within the sinuses is disrupted, and sinusitis may occur. Because the maxillary posterior teeth are close to the maxillary sinus, this can also cause clinical problems if any disease processes are present, such as an infection in any of these teeth. These clinical problems can include secondary sinusitis, the inflammation of the sinuses from another source such as an infection of the adjacent teeth.
These conditions may be treated with drugs such as decongestants (which cause vasoconstriction in the sinuses, reducing inflammation), by traditional techniques of nasal irrigation, or by corticosteroids.
Malignancies of the paranasal sinuses comprise approximately 0.2% of all malignancies. About 80% of these malignancies arise in the maxillary sinus. Men are much more often affected than women. They most often occur in the age group between 40 and 70 years. Carcinomas are more frequent than sarcomas. Metastases are rare. Tumours of the sphenoid and frontal sinuses are extremely rare.
Sinus is a Latin word meaning a "fold", "curve", or "bay". Compare "sine".
Paranasal sinuses occur in many other animals, including most mammals, birds, non-avian dinosaurs, and crocodilians. The bones occupied by sinuses are quite variable in these other species.
PAL
Phase Alternating Line (PAL) is a colour encoding system for analogue television used in broadcast television systems in most countries broadcasting at 625-line / 50 field (25 frame) per second (576i). It was one of three major analogue colour television standards, the others being NTSC and SECAM.
Almost all of the countries using PAL are currently converting, or have already converted, to digital standards such as DVB, ISDB or DTMB.
This page primarily discusses the PAL colour encoding system. The articles on broadcast television systems and analogue television further describe frame rates, image resolution and audio modulation.
PAL was adopted by most European countries, by all African countries that had never been a Belgian or French colony, by Argentina, Brazil, Paraguay, and Uruguay, by most of Asia, and by Oceania.
There were some countries in those regions that did not adopt PAL. They were France, countries that once were part of the Soviet Union, Japan, Myanmar, the Philippines, and Taiwan.
The Phase Alternating Line (PAL) was discovered in 1938 by the German Reichs-Rundfunk-Gesellschaft (RRG) Research and Development Division to resolve the conflicts between companies over the introduction of a nationwide analog television system in Nazi Germany.
In the 1950s, the Western European countries began plans to introduce colour television, and were faced with the problem that the NTSC standard demonstrated several weaknesses, including colour tone shifting under poor transmission conditions, which became a major issue considering Europe's geographical and weather-related particularities. To overcome NTSC's shortcomings, alternative standards were devised, resulting in the development of the PAL and SECAM standards. The goal was to provide a colour TV standard for the European picture frequency of 50 fields per second (50 hertz), and finding a way to eliminate the problems with NTSC.
PAL was developed by Walter Bruch at Telefunken in Hanover, West Germany, with important input from Dr. Kruse and others. The format was patented by Telefunken in 1962, citing Bruch as inventor, and unveiled to members of the European Broadcasting Union (EBU) on 3 January 1963. When asked why the system was named "PAL" and not "Bruch", the inventor answered that a "Bruch system" would probably not have sold very well ("Bruch" is the German word for "breakage"). The first broadcasts began in the United Kingdom in June 1967, followed by West Germany later that year. The one BBC channel initially using the broadcast standard was BBC2, which had been the first UK TV service to introduce "625-lines" in 1964. The Telefunken PALcolour 708T was the first commercial PAL TV set. It was followed by the Loewe-Farbfernseher S 920 & F 900.
Telefunken was later bought by the French electronics manufacturer Thomson. Thomson also bought the "Compagnie Générale de Télévision" where Henri de France developed SECAM, the first European standard for colour television. Thomson, now called Technicolor SA, also owns the RCA brand and licenses it to other companies; Radio Corporation of America, the originator of that brand, created the NTSC colour TV standard before Thomson became involved.
The term PAL was often used informally and somewhat imprecisely to refer to the 625-line/50 Hz (576i) television system in general, to differentiate from the 525-line/60 Hz (480i) system generally used with NTSC. Accordingly, DVDs were labelled as PAL or NTSC (referring to the line count and frame rate) even though technically the discs carry neither PAL nor NTSC encoded signal. CCIR 625/50 and EIA 525/60 are the proper names for these (line count and field rate) standards; PAL and NTSC on the other hand are methods of encoding colour information in the signal.
Both the PAL and the NTSC system use a quadrature amplitude modulated subcarrier carrying the chrominance information added to the luminance video signal to form a composite video baseband signal. The frequency of this subcarrier is 4.43361875 MHz for PAL and NTSC 4.43, compared to 3.579545 MHz for NTSC 3.58. The SECAM system, on the other hand, uses a frequency modulation scheme on its two line alternate colour subcarriers 4.25000 and 4.40625 MHz.
The name "Phase Alternating Line" describes the way that the phase of part of the colour information on the video signal is reversed with each line, which automatically corrects phase errors in the transmission of the signal by cancelling them out, at the expense of vertical frame colour resolution. Lines where the colour phase is reversed compared to NTSC are often called PAL or phase-alternation lines, which justifies one of the expansions of the acronym, while the other lines are called NTSC lines. Early PAL receivers relied on the human eye to do that cancelling; however, this resulted in a comb-like effect known as Hanover bars on larger phase errors. Thus, most receivers now use a chrominance analogue delay line, which stores the received colour information on each line of display; an average of the colour information from the previous line and the current line is then used to drive the picture tube. The effect is that phase errors result in saturation changes, which are less objectionable than the equivalent hue changes of NTSC. A minor drawback is that the vertical colour resolution is poorer than the NTSC system's, but since the human eye also has a colour resolution that is much lower than its brightness resolution, this effect is not visible. In any case, NTSC, PAL, and SECAM all have chrominance bandwidth (horizontal colour detail) reduced greatly compared to the luminance signal.
The 4.43361875 MHz frequency of the colour carrier is a result of 283.75 colour clock cycles per line plus a 25 Hz offset to avoid interferences. Since the line frequency (number of lines per second) is 15625 Hz (625 lines × 50 Hz ÷ 2), the colour carrier frequency calculates as follows: 4.43361875 MHz = 283.75 × 15625 Hz + 25 Hz.
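The arithmetic above can be sanity-checked in a few lines (variable names are illustrative):

```python
# PAL-B/G colour subcarrier: 283.75 colour clock cycles per line
# plus a 25 Hz offset, as described in the text.
line_frequency_hz = 625 * 50 / 2            # 15625 Hz (625 lines x 50 fields, 2 fields/frame)
subcarrier_hz = 283.75 * line_frequency_hz + 25

print(line_frequency_hz)   # 15625.0
print(subcarrier_hz)       # 4433618.75 Hz, i.e. 4.43361875 MHz
```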
The 50 Hz figure is the nominal refresh (field) frequency of the display, needed to create the illusion of motion, while 625 is the number of scan lines, i.e. the vertical resolution that the PAL system supports.
The original colour carrier is required by the colour decoder to recreate the colour difference signals. Since the carrier is not transmitted with the video information it has to be generated locally in the receiver. In order that the phase of this locally generated signal can match the transmitted information, a 10 cycle burst of colour subcarrier is added to the video signal shortly after the line sync pulse, but before the picture information, during the so-called back porch. This colour burst is not actually in phase with the original colour subcarrier, but leads it by 45 degrees on the odd lines and lags it by 45 degrees on the even lines. This swinging burst enables the colour decoder circuitry to distinguish the phase of the R-Y vector which reverses every line.
PAL usually has 576 visible lines compared with 486 with NTSC, giving PAL roughly 20% higher vertical resolution; it even exceeds the Enhanced Definition standard (854×486). Most TV output for PAL and NTSC uses interlaced frames, meaning that even lines update on one field and odd lines on the next. Interlacing gives smoother motion while transmitting only half of each frame at a time. NTSC is used with a rate of 60i or 30p whereas PAL generally uses 50i or 25p; both are high enough to give the illusion of fluid motion. This is because NTSC is generally used in countries with a utility frequency of 60 Hz and PAL in countries with 50 Hz, although there are many exceptions. Both PAL and NTSC have a higher frame rate than film, which uses 24 frames per second. PAL's frame rate is closer to that of film, so most films are sped up 4% to play on PAL systems, shortening the runtime of the film and, without adjustment, slightly raising the pitch of the audio track. Film conversions for NTSC instead use 3:2 pulldown to spread the 24 frames of film across 60 interlaced fields. This maintains the runtime of the film and preserves the original audio, but may cause worse interlacing artefacts during fast motion.
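The "4% speedup" and its side effects follow from simple ratios; a small sketch (the 120-minute film length is an illustrative assumption):

```python
import math

# Film shot at 24 fps is played back at the PAL rate of 25 fps.
speedup = 25 / 24
print(f"speedup factor: {speedup:.4f}")           # ~1.0417, i.e. about 4%

# A 120-minute film shrinks accordingly:
runtime_min = 120 / speedup
print(f"new runtime: {runtime_min:.1f} min")      # 115.2 min

# Without pitch correction, audio rises by the same ratio, in semitones:
semitones = 12 * math.log2(speedup)
print(f"pitch shift: {semitones:.2f} semitones")  # ~0.71 semitones
```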
NTSC receivers have a tint control to perform colour correction manually. If this is not adjusted correctly, the colours may be faulty. The PAL standard automatically cancels hue errors by phase reversal, so a tint control is unnecessary, though a saturation control can be more useful. Chrominance phase errors in the PAL system are cancelled out using a 1H delay line, resulting in lower saturation, which is much less noticeable to the eye than NTSC's hue errors.
However, the alternation of colour information—Hanover bars—can lead to picture grain on pictures with extreme phase errors even in PAL systems, if decoder circuits are misaligned or use the simplified decoders of early designs (typically to overcome royalty restrictions). In most cases such extreme phase shifts do not occur. This effect will usually be observed when the transmission path is poor, typically in built up areas or where the terrain is unfavourable. The effect is more noticeable on UHF than VHF signals as VHF signals tend to be more robust.
In the early 1970s some Japanese set manufacturers developed decoding systems to avoid paying royalties to Telefunken. The Telefunken licence covered any decoding method that relied on the alternating subcarrier phase to reduce phase errors. This included very basic PAL decoders that relied on the human eye to average out the odd/even line phase errors. One solution was to use a 1H analogue delay line to allow decoding of only the odd or even lines. For example, the chrominance on odd lines would be switched directly through to the decoder and also be stored in the delay line. Then, on even lines, the stored odd line would be decoded again. This method effectively converted PAL to NTSC. Such systems suffered hue errors and other problems inherent in NTSC and required the addition of a manual hue control.
PAL and NTSC have slightly divergent colour spaces, but the colour decoder differences here are ignored.
The SECAM patents predate those of PAL by several years (1956 vs. 1962). Its creator, Henri de France, in search of a response to known NTSC hue problems, came up with ideas that were to become fundamental to both European systems, namely:
1) colour information on two successive TV lines is very similar and vertical resolution can be halved without serious impact on perceived visual quality
2) more robust colour transmission can be achieved by spreading information on two TV lines instead of just one
3) information from the two TV lines can be recombined using a delay line.
SECAM applies those principles by transmitting alternately only one of the U and V components on each TV line, and getting the other from the delay line. QAM is not required, and frequency modulation of the subcarrier is used instead for additional robustness (sequential transmission of U and V was to be reused much later in Europe's last "analog" video systems: the MAC standards).
SECAM is free of both hue and saturation errors. It is not sensitive to phase shifts between the colour burst and the chrominance signal, and for this reason was sometimes used in early attempts at colour video recording, where tape speed fluctuations could get the other systems into trouble. In the receiver, it did not require a quartz crystal (which was an expensive component at the time) and generally could do with lower accuracy delay lines and components.
SECAM transmissions are more robust over longer distances than NTSC or PAL. However, owing to their FM nature, the colour signal remains present, although at reduced amplitude, even in monochrome portions of the image, thus being subject to stronger cross colour.
One serious drawback for studio work is that the addition of two SECAM signals does not yield valid colour information, due to its use of frequency modulation. It was necessary to demodulate the FM and handle it as AM for proper mixing, before finally remodulating as FM, at the cost of some added complexity and signal degradation. In its later years, this was no longer a problem, due to the wider use of component and digital equipment.
PAL can work without a delay line, but this configuration, sometimes referred to as "poor man's PAL", could not match SECAM in terms of picture quality. To compete with it at the same level, it had to make use of the main ideas outlined above, and as a consequence PAL had to pay licence fees to SECAM. Over the years, this contributed significantly to the estimated 500 million francs gathered by the SECAM patents (for an initial 100 million francs invested in research).
Hence, PAL could be considered as a hybrid system, with its signal structure closer to NTSC, but its decoding borrowing much from SECAM.
There were initial specifications to use colour with the French 819 line format (system E). However, "SECAM E" only ever existed in development phases. Actual deployment used the 625 line format. This made for easy interchange and conversion between PAL and SECAM in Europe. Conversion was often not even needed, as more and more receivers and VCRs became compliant with both standards, helped in this by the common decoding steps and components. When the SCART plug became standard, it could take RGB as an input, effectively bypassing all the colour coding formats' peculiarities.
When it comes to home VCRs, all video standards use what is called "colour under" format. Colour is extracted from the high frequencies of the video spectrum, and moved to the lower part of the spectrum available from tape. Luminance then uses what remains of it, above the colour frequency range. This is usually done by heterodyning for PAL (as well as NTSC). But the FM nature of colour in SECAM allows for a cheaper trick: division by 4 of the subcarrier frequency (and multiplication on replay). This became the standard for SECAM VHS recording in France. Most other countries kept using the same heterodyning process as for PAL or NTSC and this is known as MESECAM recording (as it was more convenient for some Middle East countries that used both PAL and SECAM broadcasts).
Another difference in colour management is related to the proximity of successive tracks on the tape, which is a cause for chroma crosstalk in PAL. A cyclic sequence of 90° chroma phase shifts from one line to the next is used to overcome this problem. This is not needed in SECAM, as FM provides sufficient protection.
Regarding early (analogue) videodiscs, the established Laserdisc standard supported only NTSC and PAL. However, a different optical disc format, the Thomson transmissive optical disc made a brief appearance on the market. At some point, it used a modified SECAM signal (single FM subcarrier at 3.6 MHz). The media's flexible and transmissive material allowed for direct access to both sides without flipping the disc, a concept that reappeared in multi-layered DVDs about fifteen years later.
For PAL-B/G the signal has these characteristics.
After 0.9 µs a colour burst of 10 cycles is sent. Most rise/fall times are in range.
The CVBS electrical amplitude is 1 Vpp into an impedance of 75 Ω.
The vertical timings are:
As PAL is interlaced, every two fields are summed to make a complete picture frame.
Luminance, Y, is derived from the red, green, and blue (R, G, B) signals: Y = 0.299R + 0.587G + 0.114B.
U = 0.492(B − Y) and V = 0.877(R − Y) are used to transmit chrominance. Each has a typical bandwidth of 1.3 MHz.
The composite PAL signal is CVBS = Y + U·sin(ωt) ± V·cos(ωt), where ω = 2πf_sc and the sign of the V component alternates on successive lines.
Subcarrier frequency f_sc is 4.43361875 MHz (±5 Hz) for PAL-B/D/G/H/I/N.
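A minimal sketch of the luminance/chrominance weighting described above, using the standard PAL coefficients (the function name is illustrative, not from the source):

```python
def rgb_to_yuv(r, g, b):
    """Convert gamma-corrected R, G, B values (0..1) to PAL Y, U, V."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance weighting
    u = 0.492 * (b - y)                    # scaled B - Y colour difference
    v = 0.877 * (r - y)                    # scaled R - Y colour difference
    return y, u, v

# White carries full luminance and no chrominance: Y ~ 1, U ~ V ~ 0.
print(rgb_to_yuv(1.0, 1.0, 1.0))
```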
Many countries have turned off analogue transmissions, so the following no longer applies there, except when using devices which output broadcast signals, such as video recorders.
The majority of countries using or having used PAL have television standards with 625 lines and 50 fields per second; the differences concern the audio carrier frequency and channel bandwidths. The variants are:
Systems B and G are similar. System B specifies 7 MHz channel bandwidth, while System G specifies 8 MHz channel bandwidth. Australia used System B for VHF and UHF channels. Similarly, Systems D and K are similar except for the bands they use: System D is only used on VHF (except in mainland China), while System K is only used on UHF. Although System I is used on both bands, it has only been used on UHF in the United Kingdom.
In Brazil, PAL is used in conjunction with the 525 line, 59.94 field/s system M, using (very nearly) the NTSC colour subcarrier frequency. Exact colour subcarrier frequency of PAL-M is 3.575611 MHz. Almost all other countries using system M use NTSC.
The PAL colour system (either baseband or with any RF system, with the normal 4.43 MHz subcarrier unlike PAL-M) can also be applied to an NTSC-like 525-line (480i) picture to form what is often known as "PAL-60" (sometimes "PAL-60/525", "Quasi-PAL" or "Pseudo PAL"). PAL-M (a broadcast standard) however should not be confused with "PAL-60" (a video playback system—see below).
In Argentina, Paraguay and Uruguay the PAL-N variant is used. It employs the 625 line/50 field per second waveform of PAL-B/G, D/K, H, and I, but on a 6 MHz channel with a chrominance subcarrier frequency of 3.582056 MHz (917/4*H) very similar to NTSC (910/4*H).
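Assuming the same 25 Hz interference-avoidance offset used by PAL-B/G (an assumption here, not stated for PAL-N in the text), the quoted PAL-N subcarrier value can be reproduced:

```python
# PAL-N chrominance subcarrier: 917/4 colour clock cycles per line,
# plus (assumed) the same 25 Hz offset as in PAL-B/G.
line_frequency_hz = 15625                     # 625 lines x 50 fields / 2
subcarrier_hz = (917 / 4) * line_frequency_hz + 25
print(subcarrier_hz)                          # 3582056.25 -> ~3.582056 MHz, as quoted
```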
PAL-N uses the YDbDr colour space.
VHS tapes recorded from a PAL-N or a PAL-B/G, D/K, H, or I broadcast are indistinguishable because the downconverted subcarrier on the tape is the same. A VHS recorded off TV (or released) in Europe will play in colour on any PAL-N VCR and PAL-N TV in Argentina, Paraguay and Uruguay. Likewise, any tape recorded in Argentina, Paraguay or Uruguay off a PAL-N TV broadcast can be sent to anyone in European countries that use PAL (and Australia/New Zealand, etc.) and it will display in colour. This will also play back successfully in Russia and other SECAM countries, as the USSR mandated PAL compatibility in 1985—this has proved to be very convenient for video collectors.
People in Argentina, Paraguay and Uruguay usually own TV sets that also display NTSC-M, in addition to PAL-N. Direct TV also conveniently broadcasts in NTSC-M for North, Central, and South America. Most DVD players sold in Argentina, Paraguay and Uruguay also play PAL discs—however, this is usually output in the European variant (colour subcarrier frequency 4.433618 MHz), so people who own a TV set which only works in PAL-N (plus NTSC-M in most cases) will have to watch those PAL DVD imports in black and white (unless the TV supports RGB SCART) as the colour subcarrier frequency in the TV set is the PAL-N variation, 3.582056 MHz.
In the case that a VHS or DVD player works in PAL (and not in PAL-N) and the TV set works in PAL-N (and not in PAL), there are two options:
Some DVD players (usually lesser known brands) include an internal transcoder and the signal can be output in NTSC-M, with some video quality loss due to the system's conversion from a 625/50 PAL DVD to the NTSC-M 525/60 output format. A few DVD players sold in Argentina, Paraguay and Uruguay also allow a signal output of NTSC-M, PAL, or PAL-N. In that case, a PAL disc (imported from Europe) can be played back on a PAL-N TV because there are no field/line conversions, quality is generally excellent.
Extended features of the PAL specification, such as Teletext, are implemented quite differently in PAL-N. PAL-N supports a modified 608 closed captioning format that is designed to ease compatibility with NTSC originated content carried on line 18, and a modified teletext format that can occupy several lines.
Some special VHS video recorders are available which can allow viewers the flexibility of enjoying PAL-N recordings using a standard PAL (625/50 Hz) colour TV, or even through multi-system TV sets. Video recorders like the Panasonic NV-W1E (AG-W1 for the US), AG-W2, AG-W3, NV-J700AM, Aiwa HV-M110S, HV-M1U, Samsung SV-4000W and SV-7000W feature digital TV system conversion circuitry.
The PAL L (Phase Alternating Line with L-sound system) standard uses the same video system as PAL-B/G/H (625 lines, 50 Hz field rate, 15.625 kHz line rate), but with 6 MHz video bandwidth rather than 5.5 MHz. This requires the audio subcarrier to be moved to 6.5 MHz. An 8 MHz channel spacing is used for PAL-L.
The BBC tested their pre-war 405 line monochrome system with all three colour standards including PAL, before the decision was made to abandon 405 and transmit colour on 625/System I only.
The PAL colour system is usually used with a video format that has 625 lines per frame (576 visible lines, the rest being used for other information such as sync data and captioning) and a refresh rate of 50 interlaced fields per second (compatible with 25 full frames per second), such systems being B, G, H, I, and N (see broadcast television systems for the technical details of each format).
This ensures video interoperability. However, as some of these standards (B/G/H, I and D/K) use different sound carriers (5.5 MHz, 6.0 MHz and 6.5 MHz respectively), the result may be a video image without audio when viewing a signal broadcast over the air or cable. Some countries in Eastern Europe which formerly used SECAM with systems D and K have switched to PAL while leaving other aspects of their video system the same, resulting in a different sound carrier. Other European countries have instead changed completely from SECAM-D/K to PAL-B/G.
The PAL-N system has a different sound carrier, and also a different colour subcarrier, and decoding on incompatible PAL systems results in a black-and-white image without sound.
The PAL-M system has a different sound carrier and a different colour subcarrier, and does not use 625 lines or 50 frames/second. This would result in no video or audio at all when viewing a European signal.
Recently manufactured PAL television receivers can typically decode all of these systems except, in some cases, PAL-M and PAL-N. Many of these receivers can also receive Eastern European and Middle Eastern SECAM, though rarely French-broadcast SECAM (because France used a quasi-unique positive video modulation, system L) unless they are manufactured for the French market. They will correctly display plain CVBS or S-video SECAM signals. Many can also accept baseband NTSC-M, such as from a VCR or game console, and RF modulated NTSC with a PAL standard audio subcarrier (i.e., from a modulator), though not usually broadcast NTSC (as its 4.5 MHz audio subcarrier is not supported). Many sets also support NTSC with a 4.43 MHz subcarrier.
Many 1990s-onwards videocassette recorders sold in Europe can play back NTSC tapes. When operating in this mode most of them do not output a true (625/25) PAL signal, but rather a hybrid consisting of the original NTSC line standard (525/30), but with colour converted to PAL 4.43 MHz—this is known as "PAL 60" (also "quasi-PAL" or "pseudo-PAL") with "60" standing for 60 Hz (for 525/30), instead of 50 Hz (for 625/25). Some video game consoles also output a signal in this mode. Notably, the PlayStation 2 did not actually offer a true PAL 60 mode; while many PlayStation 2 games did offer a "PAL 60" mode as an option, the console would in fact generate an NTSC signal during 60 Hz operation. Most newer television sets can display such a signal correctly, but some will only do so (if at all) in black and white and/or with flickering/foldover at the bottom of the picture, or picture rolling (however, many old TV sets can display the picture properly by means of adjusting the V-Hold and V-Height knobs—assuming they have them). Some TV tuner cards or video capture cards will support this mode (although software/driver modification can be required and the manufacturers' specs may be unclear). A "PAL 60" signal is similar to an NTSC (525/30) signal, but with the usual PAL chrominance subcarrier at 4.43 MHz (instead of 3.58 as with NTSC and South American PAL variants) and with the PAL-specific phase alternation of the red colour difference signal between the lines.
Some DVD players offer a choice of PAL vs NTSC output for NTSC discs.
The countries and territories below currently use or once used the PAL system. Many of these have converted or are currently converting from PAL to DVB-T (most countries), DVB-T2 (most countries), DTMB (China, Hong Kong and Macau) or ISDB (Sri Lanka, Maldives, Botswana and part of South America).
The following countries no longer use PAL for terrestrial broadcasts, and are in process of converting from PAL (cable) to DVB-T.
The PAL system is analogue. There was an attempt to manufacture equipment that digitized the PAL signal in the 1980s, but it was not commercially successful. Digital devices such as digital television, modern game consoles, and computers use colour component systems in which the R, G, and B signals are transmitted over three different cables, or Y (luminance), R−Y, and B−Y (colour difference signals). In these cases only the total number of horizontal lines is taken into account (625 in digital "PAL" and 525 in digital "NTSC") and the frame rate (25 frames/s for PAL-derived and 30 frames/s for NTSC-derived formats). Systems using the MPEG-2 standard, such as DVD, satellite television, cable television, and digital terrestrial television (DTT), have practically nothing to do with PAL.
Polo
Polo is a horseback mounted team sport. It is one of the world's oldest known team sports.
The concept of the game and its variants dates back from the 6th century BC to the 1st century AD. The sport originated from equestrian games played by nomadic Iranian and Turkic peoples. Polo was at first a training game for cavalry units, usually the Persian king’s guard or other elite troops. It is now popular around the world, with well over 100 member countries in the Federation of International Polo. It is played professionally in 16 countries. It was an Olympic sport from 1900 to 1936.
Polo has been called "the sport of kings". It has become a spectator sport for equestrians and society, often supported by sponsorship.
The game is played by two opposing teams with the objective of scoring goals by using a long-handled wooden mallet to hit a small hard ball through the opposing team's goal. Each team has four mounted riders, and the game usually lasts one to two hours, divided into periods called chukkas (or "chukkers").
Arena polo has similar rules, and is played with three players per team. The playing area is smaller, enclosed, and usually of compacted sand or fine aggregate, often indoors. Arena polo has more maneuvering due to space limitations, and uses an air inflated ball, slightly larger than the hard field polo ball. Standard mallets are used, though slightly larger head "arena mallets" are an option.
Although the exact origins of the game are unknown, it most likely began as a simple game played by mounted Azerbaijani, Iranian and Turkic nomads in Central Asia, with the current form originating in Iran (Persia) and spreading east and west. In time, polo became a Persian national sport played extensively by the nobility. Women played as well as men. During the period of the Parthian Empire (247 BC to 224 AD), the sport had great patronage under the kings and noblemen. According to the "Oxford Dictionary of Late Antiquity", polo (known as "čowgān" in Middle Persian, i.e. chovgan) was a Persian ball game and an important pastime in the court of the Sasanian Empire (224–651). It was also part of royal education for the Sasanian ruling class. Emperor Shapur II learnt to play polo when he was seven years old, in 316 AD. Known as "chowgan", it is still played in the region today.
Valuable for training cavalry, the game was played from Constantinople to Japan by the Middle Ages. The game also spread south to Arabia and to India and Tibet.
Abbasid Baghdad had a large polo ground outside its walls, and one of the city's early 13th century gates, the Bab al Halba, was named after these nearby polo grounds. The game continued to be supported by Mongol rulers of Persia in the 13th century, as well as under the Safavid dynasty. In the 17th century, Naqsh-i Jahan Square in Isfahan was built as a polo field by King Abbas I. The game was also learnt by the neighbouring Byzantine Empire at an early date. A "tzykanisterion" (stadium for playing "tzykanion", the Byzantine name for polo) was built by emperor Theodosius II (r. 408–450) inside the Great Palace of Constantinople. Emperor Basil I (r. 867–886) excelled at it; Emperor Alexander (r. 912–913) died from exhaustion while playing and John I of Trebizond (r. 1235–1238) died from a fatal injury during a game.
Following the Muslim conquests, the game passed to the Ayyubid and Mamluk dynasties of Egypt and the Levant, whose elites favoured it above all other sports. Notable sultans such as Saladin and Baybars were known to play it and encourage it at their courts. Polo sticks were featured on the Mamluk precursor to modern-day playing cards.
The game spread to South Asia where it has had a strong presence in the north western areas of present-day Pakistan (including Gilgit, Chitral, Hunza and Baltistan) since at least the 15th–16th century. The name "polo" is said to have been derived from the Balti word "pulu", meaning ball. Qutubuddin Aibak, the Turkoman slave from Central Asia who later became the Sultan of Delhi in Northern India from 1206 to 1210, suffered an accidental death during a game of polo when his horse fell and he was impaled on the pommel of his saddle. Polo likely travelled via the Silk Road to China where it was popular in the Tang dynasty capital of Chang'an, and also played by women, who wore male dress to do so; many Tang dynasty tomb figures of female players survive. According to the Oxford Dictionary of Late Antiquity, the popularity of polo in Tang China was "bolstered, no doubt, by the presence of the Sasanian court in exile".
A polo-obsessed noblewoman was buried with her donkeys on 6 October 878 AD in Xi’an, China.
An archaic variation of polo, regionally referred to as "buzkashi" or "kokpar", is still played in parts of Asia.
The modern game of polo is derived from Manipur, India, where it was known as 'sagol kangjei', 'kanjai-bazee', or 'pulu'. 'Polo' is the anglicised form of the last, referring to the wooden ball that was used, and was adopted by the sport in its slow spread to the west. The first polo club was established in the town of Silchar in Assam, India, in 1833.
The origins of the game in Manipur are traced to early precursors of sagol kangjei, one of three forms of hockey in Manipur, the others being field hockey (khong kangjei) and wrestling-hockey (mukna kangjei). Local rituals, such as those connected to Marjing, the winged-pony god of polo, and the creation-ritual episodes of the Lai Haraoba festival enacting the life of his son Khoriphaba, the polo-playing god of sports, may indicate an origin earlier than the historical records of Manipur. Later, according to the "Chaitharol-Kumbaba", a royal chronicle, King Kangba, who ruled Manipur much earlier than Nongda Lairen Pakhangba (33 AD), introduced sagol kangjei (kangjei on horseback). Regular play of the game commenced in 1605, during the reign of King Khagemba, under newly framed rules. However, it was the first Mughal emperor, Babur, who popularised the sport in India and ultimately exerted a significant influence on England.
In Manipur, polo is traditionally played with seven players to a side. The players are mounted on the indigenous Manipuri pony, which stands less than . There are no goal posts, and a player scores simply by hitting the ball out of either end of the field. Players strike the ball with the long side of the mallet head, not the end. Players are not permitted to carry the ball, although blocking the ball with any part of the body except the open hand is permitted. The sticks are made of cane, and the balls are made from the roots of bamboo. Players protect their legs by attaching leather shields to their saddles and girths.
In Manipur, the game was played even by commoners who owned a pony. The kings of Manipur had a royal polo ground within the ramparts of their Kangla Fort. Here they played manung kangjei bung (literally, "inner polo ground"). Public games were held, as they are still today, at the Mapan Kangjei Bung (literally "Outer Polo Ground"), a polo ground just outside the Kangla. Weekly games called Hapta Kangjei (Weekly Polo) were also played in a polo ground outside the current Palace.
The oldest polo ground in the world is the Imphal Polo Ground in Manipur State. The history of this polo ground is contained in the royal chronicle "Cheitharol Kumbaba", starting from AD 33. Lieutenant (later Major General) Joseph Ford Sherer, the father of modern polo, visited the state and played on this polo ground in the 1850s. Lord Curzon, the Viceroy of India, visited the state in 1901 and measured the polo ground as "225 yards long and 110 yards wide".
The Cachar Club, established in 1859, is located on Club Road in the heart of Silchar city in Assam. In 1862 the oldest polo club still in existence, the Calcutta Polo Club, was established by two British soldiers, Sherer and Captain Robert Stewart. They later spread the game to their peers in England. The British are credited with spreading polo worldwide in the late 19th and early 20th centuries, at the height of their empire. Military officers imported the game to Britain in the 1860s; the 10th Hussars at Aldershot, Hants, introduced polo to England in 1869. The establishment of polo clubs throughout England and western Europe followed the formal codification of rules. The game's governing body in the United Kingdom is the Hurlingham Polo Association, which drew up the first set of formal British rules in 1874, many of which are still in existence.
This version of polo played in the 19th century was different from the faster form played in Manipur. The game was slow and methodical, with little passing between players and few set plays requiring specific movements by participants without the ball. Neither players nor horses were trained to play a fast, non-stop game. This form of polo lacked aggressive tactics and demanded less equestrian skill. From the 1800s to the 1910s, a host of teams representing Indian principalities dominated the international polo scene.
The Champions polo league was launched in Jaipur in 2016. It is a new version of polo, similar to the T20 format of cricket. The pitch was made smaller and accommodated a huge audience. The first event of the World Champions Polo League took place in Bhavnagar, Gujarat, with six teams and room for 10,000 spectators. The rules were changed and the duration was made shorter.
British settlers in the Argentine pampas started practising polo during their free time. Among them, David Shennan is credited with having organised the first formal polo game of the country in 1875, at Estancia El Negrete, located in the province of Buenos Aires.
The sport spread quickly among the skilful gauchos, and several clubs opened in the following years in the towns of Venado Tuerto, Cañada de Gómez, Quilmes, Flores and later (1888) Hurlingham. In 1892 the River Plate Polo Association was founded, forming the basis of the current Asociación Argentina de Polo. At the Olympic Games held in Paris in 1924, a team composed of Juan Miles, Enrique Padilla, Juan Nelson, Arturo Kenny, G. Brooke Naylor and A. Peña obtained the first gold medal in the country's Olympic history; the feat was repeated in Berlin in 1936 by Manuel Andrada, Andrés Gazzotti, Roberto Cavanagh, Luis Duggan, Juan Nelson, Diego Cavanagh, and Enrique Alberdi.
The game spread across the country, and Argentina is credited globally as the capital of polo; Argentina is notably the country with the largest number ever of 10 handicap players in the world.
Five teams were able to gather four 10-handicap players each, to make 40-handicap teams: Coronel Suárez, 1975, 1977–1979 (Alberto Heguy, Juan Carlos Harriott, Alfredo Harriott and Horacio Heguy); La Espadaña, 1989–1990 (Carlos Gracida, Gonzalo Pieres, Alfonso Pieres and Ernesto Trotz Jr.); Indios Chapaleufú, 1992–1993 (Bautista Heguy, Gonzalo Heguy, Horacio Heguy Jr. and Marcos Heguy); La Dolfina, 2009–2010 (Adolfo Cambiaso Jr., Lucas Monteverde, Mariano Aguerre and Bartolomé Castagnola); Ellerstina, 2009 (Facundo Pieres, Gonzalo Pieres Jr., Pablo Mac Donough and Juan Martín Nero).
The three major polo tournaments in Argentina, known as "Triple Corona" ("Triple Crown"), are Hurlingham Polo Open, Tortugas Polo Open and Palermo Polo Open. Polo season usually lasts from October to December.
Polo has found popularity throughout the rest of the Americas, including Brazil, Chile, Mexico, and the United States of America.
James Gordon Bennett Jr. on 16 May 1876 organised what was billed as the first polo match in the United States at Dickel's Riding Academy at 39th Street and Fifth Avenue in New York City. The historical record states that James Gordon Bennett established the Westchester Polo Club on 6 May 1876, and on 13 May 1876, the Jerome Park Racetrack in Westchester County (now Bronx County) was the site of the "first" American outdoor polo match.
H. L. Herbert, James Gordon Bennett and August Belmont financed the original New York Polo Grounds. Herbert stated in a 1913 article that they formed the Westchester Club "after" the "first" outdoor game was played on 13 May 1876. This contradicts the historical record of the club being established before the Jerome Park game.
There is ample evidence that the first to play polo in America were actually English Texans. "The Galveston News" reported on 2 May 1876 that Denison, Texas, had a polo club, before James Gordon Bennett established his Westchester Club or attempted to play the "first" game. The Denison team sent a letter to James Gordon Bennett challenging him to a match. The challenge was published on 2 June 1876 in "The Galveston Daily News". By the time the article came out on 2 June, the Denison Club had already received a letter from Bennett indicating the challenge was offered before the "first" games in New York.
There is also an urban legend that the first game of polo in America was played in Boerne, Texas, at retired British officer Captain Glynn Turquand's famous Balcones Ranch. The Boerne, Texas, legend also has plenty of evidence pointing to the fact that polo was played in Boerne before James Gordon Bennett Jr. ever picked up a polo mallet.
During the early part of the 20th century, under the leadership of Harry Payne Whitney, polo changed to become a high-speed sport in the United States, differing from the game in England, where it involved short passes to move the ball towards the opposition's goal. Whitney and his teammates used the fast break, sending long passes downfield to riders who had broken away from the pack at a full gallop.
In the late 1950s, champion polo player and Director of the Long Island Polo Association, Walter Scanlon, introduced the "short form", or "European" style, four period match, to the game of polo.
All tournaments and levels of play and players are organized within and between polo clubs, including membership, rules, safety, fields and arenas.
The rules of polo are written for the safety of both players and horses. Games are monitored by umpires. A whistle is blown when an infraction occurs, and penalties are awarded. Strategic plays in polo are based on the "line of the ball", an imaginary line that extends through the ball in the line of travel. This line traces the ball's path and extends past the ball along that trajectory. The line of the ball defines rules for players to approach the ball safely. The "line of the ball" changes each time the ball changes direction. The player who hits the ball generally has the right of way, and other players cannot cross the line of the ball in front of that player. As players approach the ball, they ride on either side of the line of the ball giving each access to the ball. A player can cross the line of the ball when it does not create a dangerous situation. Most infractions and penalties are related to players improperly crossing the line of the ball or the right of way. When a player has the line of the ball on his right, he has the right of way. A "ride-off" is when a player moves another player off the line of the ball by making shoulder-to-shoulder contact with the other players' horses.
The defending player has a variety of opportunities for his team to gain possession of the ball. He can push the opponent off the line or steal the ball from the opponent. Another common defensive play is called "hooking." While a player is taking a swing at the ball, his opponent can block the swing by using his mallet to hook the mallet of the player swinging at the ball. A player may hook only if he is on the side where the swing is being made or directly behind an opponent. A player may not purposely touch another player, his tack or pony with his mallet. Unsafe hooking is a foul that will result in a penalty shot being awarded. For example, it is a foul for a player to reach over an opponent's mount in an attempt to hook.
The other basic defensive play is called the bump or ride-off. It's similar to a body check in hockey. In a ride-off, a player rides his pony alongside an opponent's mount in order to move an opponent away from the ball or to take him out of a play. It must be executed properly so that it does not endanger the horses or the players. The angle of contact must be safe and can not knock the horses off balance, or harm the horses in any way. Two players following the line of the ball and riding one another off have the right of way over a single man coming from any direction.
As in hockey or basketball, fouls are potentially dangerous plays that infringe on the rules of the game. To the novice spectator, fouls may be difficult to discern. There are degrees of dangerous and unfair play, and penalty shots are awarded depending on the severity of the foul and where on the polo field it was committed. White lines on the polo field indicate where the mid-field, sixty-, forty- and thirty-yard penalties are taken.
The official set of rules and rules interpretations are reviewed and published annually by each country's polo association. Most of the smaller associations follow the rules of the Hurlingham Polo Association, the national governing body of the sport of polo in the United Kingdom, and the United States Polo Association.
Outdoor or field polo lasts about one and a half to two hours and consists of four to eight seven-minute chukkas, between or during which players change mounts. At the end of each seven-minute chukka, play continues for an additional 30 seconds or until a stoppage in play, whichever comes first. There is a four-minute interval between chukkas and a ten-minute halftime. Play is continuous and is only stopped for rule infractions, broken tack (equipment) or injury to horse or player. The object is to score goals by hitting the ball between the goal posts, no matter how high in the air. If the ball goes wide of the goal, the defending team is allowed a free 'knock-in' from the place where the ball crossed the goal line, thus getting the ball back into play.
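As a rough arithmetic sketch of these timings (the function and its scheduling assumptions are illustrative, not an official formula; in particular, which break counts as halftime varies by association):

```python
def match_minutes(chukkas, chukka=7, interval=4, halftime=10):
    """Approximate scheduled minutes for an outdoor match.

    Assumes one break between consecutive chukkas, with the middle
    break taken as the ten-minute halftime; the 30-second overtime,
    stoppages and mount changes are ignored.
    """
    breaks = chukkas - 1
    rest = (breaks - 1) * interval + halftime if breaks > 0 else 0
    return chukkas * chukka + rest

# A six-chukka match: 42 minutes of play plus 26 minutes of breaks.
print(match_minutes(6))  # → 68
```

Under these assumptions a four-chukka match comes to 46 scheduled minutes and an eight-chukka match to 90, consistent with the one-and-a-half-to-two-hour range once overtime and stoppages are added.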
Arena polo has rules similar to the field version, and is less strenuous for the player. It is played in an enclosed arena, much like those used for other equestrian sports; the minimum size is . There are many arena clubs in the United States, and most major polo clubs, including the Santa Barbara Polo & Racquet Club, have active arena programmes. The major differences between the outdoor and indoor games are: speed (outdoor being faster), physicality/roughness (indoor/arena is more physical), ball size (indoor is larger), goal size (because the arena is smaller, the goal is smaller), and some penalties. In the United States and Canada, collegiate polo is arena polo; in the UK, collegiate polo is both.
Forms of arena polo include beach polo, played in many countries between teams of three riders on a sand surface, and cowboy polo, played almost exclusively in the western United States by teams of five riders on a dirt surface.
Another modern variant is snow polo, which is played on compacted snow on flat ground or a frozen lake. The format of snow polo varies depending on the space available. Each team generally consists of three players, and a brightly coloured, light plastic ball is preferred.
Snow polo is not the same sport as ice polo, which was popular in the US in the late 1890s. The sport resembled ice hockey and bandy but died out entirely in favour of the Canadian ice hockey rules.
A popular combination of the sports of polo and lacrosse is the game of polocrosse, which was developed in Australia in the late 1930s.
These sports are considered as separate sports because of the differences in the composition of teams, equipment, rules, game facilities etc.
Polo is not played exclusively on horseback. Such polo variants are mostly played for recreational or tourist purposes; they include canoe polo, cycle polo, camel polo, elephant polo, golf-cart polo, Segway polo and yak polo. In the early 1900s in the United States, cars were used instead of horses in the sport of auto polo. Hobby horse polo uses hobby horses instead of ponies; it adapts parts of the polo rules but has its own specialities, such as 'punitive sherries'. The hobby horse variant started in 1998 as a fun sport in south-western Germany and led in 2002 to the foundation of the First Kurfürstlich-Kurpfälzisch Polo-Club in Mannheim. It has since gained further interest in other German cities.
The mounts used are called 'polo ponies', although the term pony is purely traditional and the mount is actually a full-sized horse. They range from high at the withers, and weigh . The polo pony is selected carefully for quick bursts of speed, stamina, agility and manoeuvrability. Temperament is critical; the horse must remain responsive under pressure and not become excited or difficult to control. Many are Thoroughbreds or Thoroughbred crosses. They are trained to be handled with one hand on the reins, and to respond to the rider's leg and weight cues for moving forward, turning and stopping. A well trained horse will carry its rider smoothly and swiftly to the ball and can account for 60 to 75 percent of the player's skill and net worth to his team.
Polo pony training generally begins at age three and lasts from about six months to two years. Most horses reach full physical maturity at about age five, and ponies are at their peak of athleticism and training at around age six or seven. However, without any accidents, polo ponies may have the ability to play until they are 18 to 20 years of age.
Each player must have more than one horse, to allow for tired mounts to be replaced by fresh ones between or even during chukkas. A player's "string" of polo ponies may number two or three in Low Goal matches (with ponies being rested for at least a chukka before reuse), four or more for Medium Goal matches (at least one per chukka), and even more for the highest levels of competition.
Each team consists of four mounted players, which can be mixed teams of both men and women.
Each position assigned to a player has certain responsibilities:
Polo must be played right-handed in order to prevent head-on collisions.
The rules for equipment vary in details between the hosting authorities, but are always for the safety of the players and mounts.
Mandatory equipment includes a protective helmet with chinstrap, worn at all times by all players and mounted grooms. Helmets must conform to the locally accepted safety standard, such as "PAS015" (UK) or "NOCSAE" (USA). A faceguard is commonly integrated with the helmet.
Polo boots and kneeguards are mandatory in the UK during official play, and boots are recommended for all play everywhere. The UK also recommends goggles, elbow pads and gum shields. A shirt or jersey distinguishing the player's team is required, and must not be black-and-white striped like an umpire's shirt.
White polo pants or trousers are worn during official play. Polo gloves are commonly worn to protect the hands when working the reins and mallet.
Any equipment that may harm the horses, such as certain spurs or whips, is not permitted.
The modern outdoor polo ball is made of a high-impact plastic. Historically they have been made of bamboo, leather covered cork, hard rubber, and for many years willow root. Originally the British used a white painted leather covered cricket ball.
The regulation outdoor polo ball is to in diameter and weighs to .
Plastic balls were introduced in the 1970s. They are less prone to breakage and much cheaper.
The indoor and arena polo ball is leather-covered and inflated, and is about in diameter.
It must be not less than or more than in circumference. The weight must be not less than or more than . In a bounce test from on concrete at , the rebound should be a minimum of and a maximum of at the inflation rate specified by the manufacturer. This provides for a hard and lively ball.
The polo mallet comprises a cane shaft with a rubber-wrapped grip, a webbed thong called a sling for wrapping around the thumb, and a wooden cigar-shaped head. The shaft is made of manau cane (not bamboo, which is hollow), although a small number of mallets today are made from composite materials. Composite materials are usually not preferred by top players because a composite shaft cannot absorb vibrations as well as a traditional cane one. The mallet head is generally made from a hardwood called tipa and is approximately 9 inches long. The mallet head weighs from to , depending on player preference and the type of wood used, and the shaft can vary in weight and flexibility depending on the player's preference. The weight of the mallet head is an important consideration for more seasoned players, and female players often use lighter mallets than male players. For some polo players, the length of the mallet depends on the size of the horse: the taller the horse, the longer the mallet. However, some players prefer to use a single length of mallet regardless of the height of the horse; either way, playing horses of differing heights requires some adjustment by the rider. Mallet lengths typically range from to . The term "mallet" is used exclusively in US English; British English prefers the term "polo stick". The ball is struck with the broad sides of the mallet head rather than its round and flat tips.
Polo saddles are English-style, close contact, similar to jumping saddles; although most polo saddles lack a flap under the billets. Some players will not use a saddle blanket. The saddle has a flat seat and no knee support; the rider adopting a forward-leaning seat and closed knees dissimilar to a classical dressage seat. A breastplate is added, usually attached to the front billet. A standing martingale must be used: so, a breastplate is a necessity for safety. The tie-down is usually supported by a neck strap. Many saddles also have an overgirth. The stirrup irons are heavier than most, and the stirrup leathers are wider and thicker, for added safety when the player stands in the stirrups. The legs of the pony are wrapped with polo wraps from below the knee to the fetlock to minimize pain. Jumping (open front) or gallop boots are sometimes used along with the polo wraps for added protection. Often, these wraps match the team colours. The pony's mane is most often roached (hogged), and its tail is docked or braided so that it will not snag the rider's mallet.
Polo is ridden with double reins for greater accuracy of signals. The bit is frequently a gag bit or Pelham bit. In both cases, the gag or shank rein will be the bottom rein in the rider's hands, while the snaffle rein will be the top rein. If a gag bit is used, there will be a drop noseband in addition to the cavesson, supporting the tie-down. One of the rein sets may alternately be draw reins.
The playing field is , an area of approximately six soccer fields or nine American football fields (10 acres), while an arena polo field is 96 × 46 metres. The playing field is carefully maintained, with closely mowed turf providing a safe, fast playing surface. Goals are posts set eight yards apart, centred at each end of the field. The surface of a polo field requires careful and constant grounds maintenance to keep it in good playing condition. During half-time of a match, spectators are invited onto the field to participate in a polo tradition called "divot stamping", which developed not only to help replace the mounds of earth (divots) torn up by the horses' hooves, but also to give spectators the opportunity to walk about and socialise.
Polo is played professionally in many countries, notably Argentina, Australia, Brazil, Canada, Chile, Dominican Republic, France, Germany, Iran, India, New Zealand, Mexico, Pakistan, Jamaica, Spain, South Africa, Switzerland, the United Kingdom, and the United States, and is now an active sport in 77 countries. Although its tenure as an Olympic sport was limited to 1900–1936, in 1998 the International Olympic Committee recognised it as a sport with a bona fide international governing body, the Federation of International Polo. The World Polo Championship is held every three years by the Federation.
Polo is unique among team sports in that amateur players, often the team patrons, routinely hire and play alongside the sport's top professionals.
The most important tournaments of the world, at club level, are Abierto de Tortugas, Abierto de Hurlingham and Abierto Argentino de Polo, all of them in Argentina (la "Triple Corona").
Polo has been played in Malaysia and Singapore, both of which are former British colonies, since being introduced to Malaya during the late 19th century. Royal Johor Polo Club was formed in 1884 and Singapore Polo Club was formed in 1886. The oldest polo club in the modern country of Malaysia is Selangor Polo Club, founded in 1902. It was largely played by royalty and the political and business elite.
Polo was played at the 2007 Southeast Asian Games and 2017 Southeast Asian Games. Nations that competed in the tournament were Indonesia, Singapore, Malaysia, Thailand and Philippines (2007) and Brunei, Malaysia, Singapore and Thailand (2017). The 2007 tournament's gold medal was won by the Malaysian team, followed by Singapore with silver and Thailand with bronze while the 2017 tournament's gold medal was won by Malaysia, followed by Thailand with silver and Brunei with bronze.
The traditional or 'free-style' "polo" or "pulu" of northern Pakistan is still played avidly in its native region, notably at the annual Shandur Polo Festival at Shandur Top in Chitral District, an internationally famed event attended by many enthusiasts from all over the world. The Shandur polo ground is said to be the highest in the world, at approximately 3,734 metres.
The recent resurgence in south-east Asia has resulted in its popularity in cities such as Pattaya, Kuala Lumpur and Jakarta. In Pattaya alone, there are three active polo clubs: Polo Escape, Siam Polo Park and the Thai Polo and Equestrian Club. Indonesia has a polo club (Nusantara Polo Club). More recently, Janek Gazecki and Australian professional Jack "Ruki" Baillieu have organised polo matches in parks "around metropolitan Australia, backed by wealthy sponsors."
A Chinese Equestrian Association has been formed with two new clubs in China itself: the Beijing Sunny Time Polo Club, founded by Xia Yang in 2004 and the Nine Dragons Hill Polo Club in Shanghai, founded in 2005.
Polo is not widely spread in West Asia, but still counts five active clubs in Iran, four active polo clubs in the UAE, one club in Bahrain and The Royal Jordanian Polo Club in Amman, Jordan.
Polo in Iran is governed by the Polo Federation of Iran. There are five polo clubs in Iran: Ghasr-e Firoozeh, Nowroozabad, Army Ground Forces, Kanoon-e Chogan and Nesf-e Jahan. Iran possesses some of the best grass polo fields in the region. The country currently has over 100 registered players of which approximately 15% are women. Historically, Kurdish and Persian Arabian horses were the most widely used for polo. This was probably also the case in ancient times. Today Thoroughbreds are being increasingly used alongside the Kurdish and Persian Arabian horses. Some players have also been experimenting with Anglo-Arabians. Iranians still refer to the game of polo by its original Persian name of "Chogan", which means mallet. Iranians still maintain some of the ancient rituals of the game in official polo matches.
The governing body of polo in India is the Indian Polo Association.
Polo began its Irish history in 1870, with the first official game played on Gormanstown Strand, Co. Meath. Three years later the All Ireland Polo Club was founded by Mr. Horace Rochford in the Phoenix Park. Since then the sport has continued to grow, with a further seven clubs opening around the country. These clubs have also made the sport more accessible by creating more affordable training programmes, such as the beginner-to-pro programme at Polo Wicklow.
The governing body in the United Kingdom is the Hurlingham Polo Association, dating from 1875, which amalgamated with the County Polo Association in 1949. The UK Armed Forces Polo Association oversees the sport in the three armed services.
The United States Polo Association (USPA) is the governing body for polo in the U.S. The U.S. is the only country that has separate women's polo, run by the United States Women's Polo Federation.
Sagol Kangjei, discussed above, is arguably a version of polo though it can also be seen as the precursor of modern outdoor polo.
Page description language
In digital printing, a page description language (PDL) is a computer language that describes the appearance of a printed page at a higher level than an actual output bitmap (or, more generally, raster graphics). An overlapping term is printer control language, which includes Hewlett-Packard's Printer Command Language (PCL). PostScript is one of the most noted page description languages. The markup-language adaptation of the PDL is the page description markup language.
Page description languages are text (human-readable) or binary data streams, usually intermixed with text or graphics to be printed. They are distinct from graphics application programming interfaces (APIs) such as GDI and OpenGL that can be called by software to generate graphical output.
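The distinction between a PDL and a raster format can be made concrete with a short sketch. The function below is hypothetical and assembles a minimal PostScript program: the output is a human-readable text stream that names operations (select a font, move to a position, paint text, emit the page) rather than supplying pixel data.

```python
def postscript_page(text, x=72, y=720, font="Helvetica", size=24):
    """Build a minimal PostScript program describing one page.

    Coordinates are in points (1/72 inch), measured from the
    lower-left corner of the page, as PostScript defines them.
    """
    return "\n".join([
        "%!PS-Adobe-3.0",                                  # header comment identifying the stream
        f"/{font} findfont {size} scalefont setfont",      # select and scale a font
        f"{x} {y} moveto",                                 # set the current point
        f"({text}) show",                                  # paint the text at the current point
        "showpage",                                        # render and emit the finished page
    ])

program = postscript_page("Hello, printer")
print(program)
```

Sending this stream to a PostScript interpreter (or a viewer such as Ghostscript) would rasterize it to whatever resolution the output device supports, which is precisely the division of labour a PDL provides.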
Various page description languages exist: | https://en.wikipedia.org/wiki?curid=24444 |
Pope Felix I
Pope Felix I was the bishop of Rome from 5 January 269 to his death on 30 December 274.
A Roman by birth, Felix was chosen to be pope on 5 January 269, in succession to Dionysius, who had died on 26 December 268.
Felix was the author of an important dogmatic letter on the unity of Christ's Person. He received Emperor Aurelian's aid in settling a theological dispute between the anti-Trinitarian Paul of Samosata, who had been deprived of the bishopric of Antioch by a council of bishops for heresy, and the orthodox Domnus, Paul's successor. Paul refused to give way, and in 272 Aurelian was asked to decide between the rivals. He ordered the church building to be given to the bishop who was "recognized by the bishops of Italy and of the city of Rome" (Felix). See Eusebius, Hist. Ecc. vii. 30.
The text of that letter was later interpolated by a follower of Apollinaris in the interests of his sect.
The notice about Felix in the "Liber Pontificalis" ascribes to him a decree that Masses should be celebrated on the tombs of martyrs ("Hic constituit supra memorias martyrum missas celebrare"). The author of this entry was evidently alluding to the custom of celebrating Mass privately at the altars near or over the tombs of the martyrs in the crypts of the catacombs (missa ad corpus), while the solemn celebration always took place in the basilicas built over the catacombs. This practice, still in force at the end of the fourth century, dates apparently from the period when the great cemeterial basilicas were built in Rome, and owes its origin to the solemn commemoration services of martyrs, held at their tombs on the anniversary of their burial, as early as the third century. Felix probably issued no such decree, but the compiler of the "Liber Pontificalis" attributed it to him because he made no departure from the custom in force in his time.
The acts of the Council of Ephesus give Pope Felix as a martyr; but this detail, which occurs again in the biography of the pope in the "Liber Pontificalis", is unsupported by any authentic earlier evidence and is manifestly due to a confusion of names. According to the notice in the "Liber Pontificalis", Felix erected a basilica on the Via Aurelia; the same source also adds that he was buried there. The latter detail is evidently an error, for the fourth-century Roman calendar of feasts says that Pope Felix was interred in the Catacomb of Callixtus on the Via Appia. The statement of the "Liber Pontificalis" concerning the pope's martyrdom results obviously from a confusion with a Roman martyr of the same name buried on the Via Aurelia, and over whose grave a church was built. In the Roman "Feriale" or calendar of feasts, referred to above, the name of Felix occurs in the list of Roman bishops ("Depositio episcoporum"), and not in that of the martyrs.
According to the above-mentioned detail of the "Depositio episcoporum", Felix was interred in the catacomb of Callixtus on 30 December, "III Kal. Jan." (the third day before the calends of January, counted inclusively) in the Roman dating system. Saint Felix I is mentioned as Pope and Martyr, with a simple feast, on 30 May. This date, given in the "Liber Pontificalis" as that of his death (III Kal. Jun.), is probably an error which could easily occur through a transcriber writing "Jun." for "Jan." This error persisted in the General Roman Calendar until 1969 (see General Roman Calendar of 1960), by which time the mention of Saint Felix I had been reduced to a commemoration in the weekday Mass by decision of Pope Pius XII (see General Roman Calendar of Pope Pius XII). Thereafter, the feast of Saint Felix I, no longer mentioned in the General Roman Calendar, is celebrated on his true day of death, 30 December, and without the qualification of "martyr".
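The Roman inclusive count, in which the calends itself is day I, can be checked with a short illustrative sketch (the function name and year argument are hypothetical, chosen only for demonstration):

```python
import datetime

def days_before_kalends(n, kalends_month, year):
    """Date that is the n-th day before the calends (the 1st) of the
    given month, using the Romans' inclusive count: the calends is
    day I, the previous day is day II, and so on."""
    kalends = datetime.date(year, kalends_month, 1)
    return kalends - datetime.timedelta(days=n - 1)

print(days_before_kalends(3, 1, 2024))  # III Kal. Jan. -> 2023-12-30
print(days_before_kalends(3, 6, 2024))  # III Kal. Jun. -> 2024-05-30
```

The two results, 30 December and 30 May, show how a single transcription slip ("Jun." for "Jan.") shifts the commemoration by exactly the interval described above.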
According to more recent studies, the oldest liturgical books indicate that the saint honoured on 30 May was a little-known martyr buried on the Via Aurelia, who was mistakenly identified with Pope Felix I, an error similar to but less curious than the identification in the liturgical books, until the mid-1950s, of the martyr saint celebrated on 30 July with the antipope Felix II. | https://en.wikipedia.org/wiki?curid=24445 |
Peptide bond
A peptide bond is an amide type of covalent chemical bond linking two consecutive alpha-amino acids, from C1 (carbon number one) of one alpha-amino acid to N2 (nitrogen number two) of another, along a peptide or protein chain.
It can also be called a eupeptide bond to distinguish it from an isopeptide bond, a different type of amide bond between two amino acids.
When two amino acids form a "dipeptide" through a "peptide bond" it is a type of condensation reaction. In this kind of condensation, two amino acids approach each other, with the non-side chain (C1) carboxylic acid moiety of one coming near the non-side chain (N2) amino moiety of the other. One loses a hydrogen and oxygen from its carboxyl group (COOH) and the other loses a hydrogen from its amino group (NH2). This reaction produces a molecule of water (H2O) and two amino acids joined by a peptide bond (-CO-NH-). The two joined amino acids are called a dipeptide.
The amide bond is synthesized when the carboxyl group of one amino acid molecule reacts with the amino group of the other amino acid molecule, causing the release of a molecule of water (H2O), hence the process is a dehydration synthesis reaction.
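The water bookkeeping of this condensation can be checked arithmetically: a linear peptide weighs the sum of its free amino acids minus one water molecule per peptide bond. A minimal sketch, using rounded average masses chosen purely for illustration:

```python
# Approximate average masses in daltons (rounded, illustrative values)
AMINO_ACID_MASS = {"Gly": 75.07, "Ala": 89.09, "Cys": 121.16}
WATER_MASS = 18.02

def peptide_mass(residues):
    """Mass of a linear peptide: sum the free amino acids, then
    subtract one water per peptide bond formed (n residues -> n-1 bonds)."""
    n_bonds = len(residues) - 1
    return sum(AMINO_ACID_MASS[r] for r in residues) - n_bonds * WATER_MASS

# Glycylglycine: two glycines joined by a single peptide bond
print(round(peptide_mass(["Gly", "Gly"]), 2))  # -> 132.12
```

The result, about 132 Da rather than 150 Da, reflects the single water molecule released when the dipeptide forms.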
The formation of the peptide bond consumes energy, which, in organisms, is derived from ATP. Peptides and proteins are chains of amino acids held together by peptide bonds (and sometimes by a few isopeptide bonds). Organisms use enzymes to produce nonribosomal peptides, and ribosomes to produce proteins via reactions that differ in details from dehydration synthesis.
Some peptides, like alpha-amanitin, are called ribosomal peptides as they are made by ribosomes, but many are nonribosomal peptides as they are synthesized by specialized enzymes rather than ribosomes. For example, the tripeptide glutathione is synthesized in two steps from free amino acids, by two enzymes: glutamate–cysteine ligase (forms an isopeptide bond, which is not a peptide bond) and glutathione synthetase (forms a peptide bond).
A peptide bond can be broken by hydrolysis (the addition of water). In the presence of water, peptide bonds break down and release 8–16 kilojoule/mol (2–4 kcal/mol) of Gibbs energy. This process is extremely slow, however, with a half-life at 25 °C of between 350 and 600 years per bond.
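Treating uncatalyzed hydrolysis as a simple first-order process, the quoted half-life fixes how much of a peptide survives over time. A sketch of that arithmetic, assuming first-order kinetics and picking a half-life of 500 years from within the quoted 350–600 year range:

```python
def fraction_intact(t_years, half_life_years):
    """Fraction of peptide bonds still intact after t years,
    assuming simple first-order hydrolysis kinetics."""
    return 0.5 ** (t_years / half_life_years)

for t in (100, 500, 1000):
    print(t, round(fraction_intact(t, 500), 3))  # e.g. 500 -> 0.5
```

After a century, roughly 87% of bonds remain, which is why hydrolysis in living systems relies on enzymatic catalysis rather than the spontaneous reaction.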
In living organisms, the process is normally catalyzed by enzymes known as peptidases or proteases, although there are reports of peptide bond hydrolysis caused by conformational strain as the peptide/protein folds into the native structure. This non-enzymatic process is thus not accelerated by transition state stabilization, but rather by ground state destabilization.
The peptide bond absorbs light at wavelengths of 190–230 nm, which makes it particularly susceptible to UV radiation.
Significant delocalisation of the lone pair of electrons on the nitrogen atom gives the group a partial double bond character. The partial double bond renders the amide group planar, occurring in either the cis or trans isomers. In the unfolded state of proteins, the peptide groups are free to isomerize and adopt both isomers; however, in the folded state, only a single isomer is adopted at each position (with rare exceptions). The trans form is preferred overwhelmingly in most peptide bonds (roughly 1000:1 ratio in trans:cis populations). However, X-Pro peptide groups tend to have a roughly 30:1 ratio, presumably because the symmetry between the Cδ and Cα atoms of proline makes the cis and trans isomers nearly equal in energy (see figure, below).

The dihedral angle associated with the peptide group (defined by the four atoms Cα–C′–N–Cα) is denoted ω; ω = 0° for the cis isomer (synperiplanar conformation) and ω = 180° for the trans isomer (antiperiplanar conformation). Amide groups can isomerize about the C′–N bond between the cis and trans forms, albeit slowly (on the order of 20 seconds at room temperature). The transition state at ω ≈ ±90° requires that the partial double bond be broken, so the activation energy is roughly 80 kilojoule/mol (20 kcal/mol). However, the activation energy can be lowered (and the isomerization catalyzed) by changes that favor the single-bonded form, such as placing the peptide group in a hydrophobic environment or donating a hydrogen bond to the nitrogen atom of an X-Pro peptide group. Both of these mechanisms for lowering the activation energy have been observed in peptidyl prolyl isomerases (PPIases), which are naturally occurring enzymes that catalyze the cis-trans isomerization of X-Pro peptide bonds.
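The population ratios quoted above (about 1000:1 trans:cis for most peptide bonds, about 30:1 for X-Pro) translate directly into free-energy gaps via Boltzmann statistics, ΔG = RT ln K. A short illustrative sketch at room temperature:

```python
import math

R = 8.314  # molar gas constant, J/(mol*K)

def free_energy_gap(ratio, temp_k=298.0):
    """Free-energy difference (kJ/mol) between two states whose
    equilibrium populations stand in the given ratio."""
    return R * temp_k * math.log(ratio) / 1000.0

print(round(free_energy_gap(1000), 1))  # typical peptide bond -> 17.1
print(round(free_energy_gap(30), 1))    # X-Pro peptide bond  -> 8.4
```

So the roughly 30-fold smaller trans preference of X-Pro bonds corresponds to a free-energy gap of only about 8 kJ/mol, versus about 17 kJ/mol for ordinary peptide bonds, consistent with the near-degeneracy of the two proline isomers described above.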
Conformational protein folding is usually much faster (typically 10–100 ms) than cis-trans isomerization (10–100 s). A nonnative isomer of some peptide groups can disrupt the conformational folding significantly, either slowing it or preventing it from even occurring until the native isomer is reached. However, not all peptide groups have the same effect on folding; nonnative isomers of other peptide groups may not affect folding at all.
Due to its resonance stabilization, the peptide bond is relatively unreactive under physiological conditions, even less so than similar compounds such as esters. Nevertheless, peptide bonds can undergo chemical reactions, usually through an attack of an electronegative atom on the carbonyl carbon, breaking the carbonyl double bond and forming a tetrahedral intermediate. This is the pathway followed in proteolysis and, more generally, in N-O acyl exchange reactions such as those of inteins. When the functional group attacking the peptide bond is a thiol, hydroxyl or amine, the resulting molecule may be called a cyclol or, more specifically, a thiacyclol, an oxacyclol or an azacyclol, respectively.
Privy Council of the United Kingdom
The Privy Council of the United Kingdom is a formal body of advisers to the Sovereign of the United Kingdom. Its membership mainly comprises senior politicians who are current or former members of either the House of Commons or the House of Lords.
The Privy Council formally advises the sovereign on the exercise of the Royal Prerogative, and corporately (as Queen-in-Council) it issues executive instruments known as Orders in Council, which among other powers enact Acts of Parliament. The Council also holds the delegated authority to issue Orders of Council, mostly used to regulate certain public institutions. The Council advises the sovereign on the issuing of Royal Charters, which are used to grant special status to incorporated bodies, and city or borough status to local authorities. Otherwise, the Privy Council's powers have now been largely replaced by its executive committee, the Cabinet of the United Kingdom.
Certain judicial functions are also performed by the Queen-in-Council, although in practice its actual work of hearing and deciding upon cases is carried out day-to-day by the Judicial Committee of the Privy Council. The Judicial Committee consists of senior judges appointed as Privy Counsellors: predominantly Justices of the Supreme Court of the United Kingdom and senior judges from the Commonwealth. The Privy Council formerly acted as the High Court of Appeal for the entire British Empire (other than for the United Kingdom itself). It continues to hear judicial appeals from some other independent Commonwealth countries, as well as Crown Dependencies and British Overseas Territories.
The Privy Council of the United Kingdom was preceded by the Privy Council of Scotland and the Privy Council of England. The key events in the formation of the modern Privy Council are given below:
In Anglo-Saxon England, the Witenagemot was an early equivalent to the Privy Council of England. During the reigns of the Norman monarchs, the English Crown was advised by a royal court or "curia regis", which consisted of magnates, ecclesiastics and high officials. The body originally concerned itself with advising the sovereign on legislation, administration and justice. Later, different bodies assuming distinct functions evolved from the court. The courts of law took over the business of dispensing justice, while Parliament became the supreme legislature of the kingdom. Nevertheless, the Council retained the power to hear legal disputes, either in the first instance or on appeal. Furthermore, laws made by the sovereign on the advice of the Council, rather than on the advice of Parliament, were accepted as valid. Powerful sovereigns often used the body to circumvent the Courts and Parliament. For example, a committee of the Council—which later became the Court of the Star Chamber—was during the 15th century permitted to inflict any punishment except death, without being bound by normal court procedure. During Henry VIII's reign, the sovereign, on the advice of the Council, was allowed to enact laws by mere proclamation. The legislative pre-eminence of Parliament was not restored until after Henry VIII's death. Though the royal Council retained legislative and judicial responsibilities, it became a primarily administrative body. The Council consisted of forty members in 1553, but the sovereign relied on a smaller committee, which later evolved into the modern Cabinet.
By the end of the English Civil War, the monarchy, House of Lords, and Privy Council had been abolished. The remaining parliamentary chamber, the House of Commons, instituted a Council of State to execute laws and to direct administrative policy. The forty-one members of the Council were elected by the House of Commons; the body was headed by Oliver Cromwell, "de facto" military dictator of the nation. In 1653, however, Cromwell became Lord Protector, and the Council was reduced to between thirteen and twenty-one members, all elected by the Commons. In 1657, the Commons granted Cromwell even greater powers, some of which were reminiscent of those enjoyed by monarchs. The Council became known as the Protector's Privy Council; its members were appointed by the Lord Protector, subject to Parliament's approval.
In 1659, shortly before the restoration of the monarchy, the Protector's Council was abolished. Charles II restored the Royal Privy Council, but he, like previous Stuart monarchs, chose to rely on a small group of advisers. Under George I even more power transferred to this committee. It now began to meet in the absence of the sovereign, communicating its decisions to him after the fact.
Thus, the British Privy Council, as a whole, ceased to be a body of important confidential advisers to the sovereign; the role passed to a committee of the Council, now known as the Cabinet.
The sovereign, when acting on the Council's advice, is known as the "King-in-Council" or "Queen-in-Council". The members of the Council are collectively known as "The Lords of Her Majesty's Most Honourable Privy Council" (sometimes "The Lords and others of ..."). The chief officer of the body is the Lord President of the Council, who is the fourth highest Great Officer of State, a Cabinet member and normally, either the Leader of the House of Lords or of the House of Commons. Another important official is the Clerk, whose signature is appended to all orders made in the Council.
Both "Privy Counsellor" and "Privy Councillor" may be correctly used to refer to a member of the Council. The former, however, is preferred by the Privy Council Office, emphasising English usage of the term "Counsellor" as "one who gives counsel", as opposed to "one who is a member of a council". A Privy Counsellor is traditionally said to be "sworn of" the Council after being received by the sovereign.
The sovereign may appoint anyone a Privy Counsellor, but in practice appointments are made only on the advice of Her Majesty's Government. The majority of appointees are senior politicians, including Ministers of the Crown, the few most senior figures of the Loyal Opposition, the Parliamentary leader of the third-largest party, a couple of the most senior figures in the devolved British governments and senior politicians from Commonwealth countries. Besides these, the Council includes a very few members of the Royal Family (usually the consort and heir apparent only), a few dozen judges from British and Commonwealth countries, a few clergy and a small number of senior civil servants.
There is no statutory limit to its membership. Members have no automatic right to attend all Privy Council meetings, and only some are summoned regularly to meetings (in practice at the Prime Minister's discretion).
The Church of England's three senior bishops – the Archbishop of Canterbury, the Archbishop of York and the Bishop of London – become Privy Counsellors upon appointment. Senior members of the Royal Family may also be appointed, but this is confined to the current consort and heir apparent and consort. The Private Secretary to the Sovereign is always appointed a Privy Counsellor, as are the Lord Chamberlain, the Speaker of the House of Commons, and the Lord Speaker. Justices of the Supreme Court of the United Kingdom, judges of the Court of Appeal of England and Wales, senior judges of the Inner House of the Court of Session (Scotland's highest law court) and the Lord Chief Justice of Northern Ireland also join the Privy Council "ex officio".
The balance of Privy Counsellors is largely made up of politicians. The Prime Minister, Cabinet ministers and the Leader of HM Opposition are traditionally sworn of the Privy Council upon appointment. Leaders of major parties in the House of Commons, First Ministers of the devolved assemblies, some senior Ministers outside Cabinet, and on occasion other respected senior parliamentarians are appointed Privy Counsellors.
Because Privy Counsellors are bound by oath to keep matters discussed at Council meetings secret, the appointment of the Leaders of Opposition Parties as Privy Counsellors allows the Government to share confidential information with them "on Privy Council terms". This usually only happens in special circumstances, such as in matters of national security. For example, Tony Blair met Iain Duncan Smith (then Leader of HM Opposition) and Charles Kennedy (then Leader of the Liberal Democrats) "on Privy Council terms" to discuss the evidence for Iraq's weapons of mass destruction.
Although the Privy Council is primarily a British institution, officials from some other Commonwealth realms are also appointed. By 2000, the most notable instance was New Zealand, whose Prime Minister, senior politicians, Chief Justice and Court of Appeal Justices were traditionally appointed Privy Counsellors. However, appointments of New Zealand members have since been discontinued. The Prime Minister, the Speaker, the Governor-General and the Chief Justice of New Zealand are still accorded the style "Right Honourable", but without membership of the Council. Until the late 20th century, the Prime Ministers and Chief Justices of Canada and Australia were also appointed Privy Counsellors. Canada also has its own Privy Council, the Queen's Privy Council for Canada (see below). Prime Ministers of some other Commonwealth countries that retain the Queen as their sovereign continue to be sworn of the Council.
It was formerly regarded by the Privy Council as criminal, and possibly treasonous, to disclose the oath administered to Privy Counsellors as they take office. However, the oath was officially made public by the Blair Government in a written parliamentary answer in 1998, as follows. It had also been read out in full in the House of Lords during debate by Lord Rankeillour on 21 December 1932.
A form of this oath dates back to at least 1570.
Privy counsellors can choose to affirm their allegiance in similar terms, should they prefer not to take a religious oath. At the induction ceremony, the order of precedence places Anglicans (being those of the established church) before others.
The initiation ceremony for newly appointed privy counsellors is held in private, and typically requires kneeling on a stool before the sovereign and then kissing hands. According to "The Royal Encyclopaedia": "The new privy counsellor or minister will extend his or her right hand, palm upwards, and, taking the Queen's hand lightly, will kiss it with no more than a touch of the lips." The ceremony has caused difficulties for privy counsellors who advocate republicanism; Tony Benn said in his diaries that he kissed his own thumb, rather than the Queen's hand, while Jeremy Corbyn reportedly did not kneel. Not all members of the privy council go through the initiation ceremony; appointments are frequently made by an Order in Council, although it is "rare for a party leader to use such a course."
Membership is conferred for life. Formerly, the death of a monarch ("demise of the Crown") brought an immediate dissolution of the Council, as all Crown appointments automatically lapsed. By the 18th century, it was enacted that the Council would not be dissolved until up to six months after the demise of the Crown. By convention, however, the sovereign would reappoint all members of the Council after its dissolution. In practice, therefore, membership continued without a break. In 1901, the law was changed to ensure that Crown Appointments became wholly unaffected by any succession of monarch.
The sovereign, however, may remove an individual from the Privy Council. Former MP Elliot Morley was expelled on 8 June 2011, following his conviction on charges of false accounting in connection with the British parliamentary expenses scandal. Before this, the last individual to be expelled from the Council against his will was Sir Edgar Speyer, Bt., who was removed on 13 December 1921 for collaborating with the enemy German Empire, during the First World War.
Individuals can choose to resign, sometimes to avoid expulsion. Three members voluntarily left the Privy Council in the 20th century: John Profumo, who resigned on 26 June 1963; John Stonehouse, who resigned on 17 August 1976; and Jonathan Aitken, who resigned on 25 June 1997 following allegations of perjury.
So far, three Privy Counsellors have resigned in the 21st century, coincidentally all in the same year. On 4 February 2013, Chris Huhne announced that he would voluntarily leave the Privy Council after pleading guilty to perverting the course of justice. Lord Prescott stood down on 6 July 2013, in protest against delays in the introduction of press regulation, expecting others to follow. Denis MacShane resigned on 9 October 2013, before a High Court hearing at which he pleaded guilty to false accounting and was subsequently imprisoned.
Meetings of the Privy Council are normally held once each month wherever the sovereign may be in residence at the time. The quorum, according to the Privy Council Office, is three, though some statutes provide for other quorums (for example, section 35 of the Opticians Act 1989 provides for a lower quorum of two).
The sovereign attends the meeting, though his or her place may be taken by two or more Counsellors of State. Under the Regency Acts 1937 to 1953, Counsellors of State may be chosen from among the sovereign's spouse and the four individuals next in the line of succession who are over 21 years of age (18 for the heir to the throne). Customarily the sovereign remains standing at meetings of the Privy Council, so that no other members may sit down, thereby keeping meetings short. The Lord President reads out a list of Orders to be made, and the sovereign merely says "Approved".
Few Privy Counsellors are required to attend regularly. The settled practice is that day-to-day meetings of the Council are attended by four Privy Counsellors, usually the Ministers responsible for the matters to be considered. The Cabinet Minister holding the office of Lord President of the Council, currently Jacob Rees-Mogg, invariably presides. Under Britain's modern conventions of parliamentary government and constitutional monarchy, every order made in Council is drafted by a Government Department and has already been approved by the Minister responsible – thus actions taken by the Queen-in-Council are formalities required for validation of each measure.
Full meetings of the Privy Council are held only when the reigning sovereign announces his or her own engagement (which last happened on 23 November 1839, in the reign of Queen Victoria); or when there is a demise of the Crown, either by the death or abdication of the monarch. A full meeting of the Privy Council was also held on 6 February 1811, when George, Prince of Wales was sworn in as Prince Regent by Act of Parliament. The current statutes regulating the establishment of a regency in the case of minority or incapacity of the sovereign also require any regents to swear their oaths before the Privy Council.
In the case of a demise of the Crown, the Privy Council – together with the Lords Spiritual, the Lords Temporal, the Lord Mayor and Aldermen of the City of London as well as representatives of Commonwealth realms – makes a proclamation declaring the accession of the new sovereign and receives an oath from the new monarch relating to the security of the Church of Scotland, as required by law. It is also customary for the new sovereign to make an allocution to the Privy Council on that occasion, and this Sovereign's Speech is formally published in "The London Gazette". Any such Special Assembly of the Privy Council, convened to proclaim the accession of a new sovereign and witness the monarch's statutory oath, is known as an Accession Council. The last such meetings were held on 6 and 8 February 1952: as Elizabeth II was abroad when the last demise of the Crown took place, the Accession Council met twice, once to proclaim the sovereign (meeting of 6 February 1952), and then again after the new queen had returned to Britain, to receive from her the oath required by statute (meeting of 8 February 1952).
The sovereign exercises executive authority by making Orders in Council upon the advice of the Privy Council. Orders-in-Council, which are drafted by the government rather than by the sovereign, are secondary legislation and are used to make government regulations and to make government appointments. Furthermore, Orders-in-Council are used to grant Royal Assent for Measures of the National Assembly for Wales, and laws passed by the legislatures of British Crown dependencies.
Distinct from Orders-in-Council are Orders of Council: the former are issued by the sovereign upon the advice of the Privy Council, whereas the latter are made by members of the Privy Council without requiring the sovereign's approval. They are issued under the specific authority of Acts of Parliament, and most commonly are used for the regulation of public institutions.
The sovereign also grants Royal Charters on the advice of the Privy Council. Charters bestow special status to incorporated bodies; they are used to grant "chartered" status to certain professional, educational or charitable bodies, and sometimes also city and borough status to towns. The Privy Council therefore deals with a wide range of matters, which also includes university and livery company statutes, churchyards, coinage and the dates of bank holidays. The Privy Council formerly had sole power to grant academic degree-awarding powers and the title of "university", but following the Higher Education and Research Act 2017 these powers have been given to the Office for Students for educational institutions in England.
The Privy Council comprises a number of committees:
The Accession Council is made up of Privy Counsellors, Great Officers of State, members of the House of Lords, the Lord Mayor of the City of London, the Aldermen of the City of London, High Commissioners of Commonwealth realms, and senior civil servants. It is a ceremonial body which assembles in St James's Palace upon the death of a monarch, to make formal proclamation of the accession of the successor to the throne.
The Baronetage Committee was established by a 1910 Order in Council, during Edward VII's reign, to scrutinise all succession claims (and thus reject doubtful ones) to be placed on the Roll of Baronets.
The Cabinet of the United Kingdom is the collective decision-making body of Her Majesty's Government of the United Kingdom, composed of the Prime Minister and 21 cabinet ministers, the most senior of the government ministers.
The Committee for the Affairs of Jersey and Guernsey recommends approval of Channel Islands legislation.
The Committee for the purposes of the Crown Office Act 1877 consists of the Lord Chancellor and Lord Privy Seal as well as a Secretary of State. The Committee, which last met in 1988, is concerned with the design and usage of wafer seals.
The Judicial Committee of the Privy Council, consists of senior judges who are Privy Counsellors. The decision of the Committee is presented in the form of "advice" to the monarch, but in practice it is always followed by the sovereign (as Crown-in-Council), who formally approves the recommendation of the Judicial Committee.
Within the United Kingdom, the Judicial Committee hears appeals from ecclesiastical courts, the Court of Admiralty of the Cinque Ports, prize courts and the Disciplinary Committee of the Royal College of Veterinary Surgeons, appeals against schemes of the Church Commissioners and appeals under certain Acts of Parliament (e.g., the House of Commons Disqualification Act 1975). The Crown-in-Council was formerly the Supreme Appeal Court for the entire British Empire, but a number of Commonwealth countries have now abolished the right to such appeals. The Judicial Committee continues to hear appeals from several Commonwealth countries, from British Overseas Territories, Sovereign Base Areas and Crown dependencies. The Judicial Committee had direct jurisdiction in cases relating to the Scotland Act 1998, the Government of Wales Act 1998 and the Northern Ireland Act 1998, but this was transferred to the new Supreme Court of the United Kingdom in 2009.
The Lords Commissioners are Privy Counsellors appointed by the Monarch of the United Kingdom to exercise, on their behalf, certain functions relating to Parliament which would otherwise require the monarch's attendance at the Palace of Westminster. These include the opening and prorogation of Parliament, the confirmation of a newly elected Speaker of the House of Commons and the granting of Royal Assent. In current practice, the Lords Commissioners usually include the Lord Chancellor, the Archbishop of Canterbury, the leaders of the three major parties in the House of Lords, the convener of the House of Lords Crossbenchers, and the Lord Speaker.
The Scottish Universities Committee considers proposed amendments to the statutes of Scotland's four ancient universities.
The Universities Committee, which last met in 1995, considers petitions against statutes made by Oxford and Cambridge universities and their colleges.
In addition to the Standing Committees, "ad hoc" Committees are notionally set up to consider and report on Petitions for Royal charters of Incorporation and to approve changes to the bye-laws of bodies created by Royal Charter.
Committees of Privy Counsellors are occasionally established to examine specific issues. Such Committees are independent of the Privy Council Office and therefore do not report directly to the Lord President of the Council. Examples of such Committees include:
The Civil Service is formally governed by Privy Council Orders, as an exercise of the Royal prerogative. One such order implemented HM Government's ban of GCHQ staff from joining a Trade Union. Another, the Civil Service (Amendment) Order in Council 1997, permitted the Prime Minister to grant up to three political advisers management authority over some Civil Servants.
In the 1960s, the Privy Council made an order to evict the 2,000 inhabitants of the 65-island Chagos Archipelago in the Indian Ocean, in preparation for the establishment of a joint United States–United Kingdom military base on the archipelago's largest island, Diego Garcia. In 2000 the Court of Appeal ruled the 1971 Immigration Ordinance preventing resettlement unlawful. In 2004, the Privy Council, under Jack Straw's tenure, overturned the ruling. In 2006 the High Court of Justice found the Privy Council's decision to be unlawful. Sir Sydney Kentridge described the treatment of the Chagossians as "outrageous, unlawful and a breach of accepted moral standards": he stated that there was no known precedent "for the lawful use of prerogative powers to remove or exclude an entire population of British subjects from their homes and place of birth", and the Court of Appeal was persuaded by this argument, but the Law Lords (at that time the UK's highest court) found its decision to be flawed and overturned the ruling by a 3–2 decision, thereby upholding the terms of the Ordinance.
The Privy Council as a whole is termed "The Most Honourable" whilst its members individually, the Privy Counsellors, are entitled to be styled "The Right Honourable".
Each Privy Counsellor has the right of personal access to the sovereign. Peers were considered to enjoy this right individually; members of the House of Commons possess the right collectively. In each case, personal access may only be used to tender advice on public affairs.
Only Privy Counsellors can signify royal consent to the examination of a Bill affecting the rights of the Crown.
Members of the Privy Council are privileged to be given advance notice of any prime ministerial decision to commit HM Armed Forces in enemy action.
Privy Counsellors have the right to sit on the steps of the Sovereign's Throne in the Chamber of the House of Lords during debates, a privilege which was shared with heirs apparent of those hereditary peers who were to become members of the House of Lords before Labour's partial reform of the Lords in 1999, diocesan bishops of the Church of England yet to be Lords Spiritual, retired bishops who formerly sat in the House of Lords, the Dean of Westminster, Peers of Ireland, the Clerk of the Crown in Chancery, and the Gentleman Usher of the Black Rod. While Privy Counsellors have the right to sit on the steps of the Sovereign's Throne, they do so only as observers and are not allowed to participate in any of the workings of the House of Lords. Nowadays this privilege is rarely exercised. A notable recent instance was its exercise by the Prime Minister, Theresa May, and David Lidington, who watched the opening of the debate on the European Union (Notification of Withdrawal) Bill 2017 in the House of Lords.
Privy Counsellors are accorded a formal rank of precedence, if not already having a higher one. At the beginning of each new Parliament, and at the discretion of the Speaker, those members of the House of Commons who are Privy Counsellors usually take the oath of allegiance before all other members except the Speaker and the Father of the House (who is the member of the House who has the longest continuous service). Should a Privy Counsellor rise to speak in the House of Commons at the same time as another Honourable Member, the Speaker usually gives priority to the "Right Honourable" Member. This parliamentary custom, however, was discouraged under New Labour after 1998, despite the Government not being supposed to exert influence over the Speaker.
All those sworn of the Privy Council are accorded the style "The Right Honourable", but some nobles automatically have higher styles: non-royal dukes are styled "The Most Noble" and marquesses, "The Most Honourable". Modern custom as recommended by "Debrett's" is to use the post-nominal letters "PC" in a social style of address for peers who are Privy Counsellors. For commoners, "The Right Honourable" is sufficient identification of their status as a Privy Counsellor and they do not use the post-nominal letters "PC". The Ministry of Justice revises current practice of this convention from time to time.
The Privy Council is one of the four principal councils of the sovereign. The other three are the courts of law, the "Commune Concilium" (Common Council, or Parliament) and the "Magnum Concilium" (Great Council, or the assembly of all the Peers of the Realm). All are still in existence, or at least have never been formally abolished, but the "Magnum Concilium" has not been summoned since 1640 and was considered defunct even then.
Several other Privy Councils have advised the sovereign. England and Scotland once had separate Privy Councils (the Privy Council of England and Privy Council of Scotland). The Acts of Union 1707 united the two countries into the Kingdom of Great Britain and in 1708 the Parliament of Great Britain abolished the Privy Council of Scotland. Thereafter there was one Privy Council of Great Britain sitting in London. Ireland, on the other hand, continued to have a separate Privy Council even after the Act of Union 1800. The Privy Council of Ireland was abolished in 1922, when the southern part of Ireland separated from the United Kingdom; it was succeeded by the Privy Council of Northern Ireland, which became dormant after the suspension of the Parliament of Northern Ireland in 1972. No further appointments have been made since then, and only three appointees were still living as of November 2017.
Canada has had its own Privy Council—the Queen's Privy Council for Canada—since 1867. While the Canadian Privy Council is specifically "for Canada", the Privy Council discussed above is not "for the United Kingdom"; to clarify the ambiguity where necessary, the latter was traditionally referred to as the Imperial Privy Council. Equivalent organs of state in other Commonwealth realms, such as Australia and New Zealand, are called Executive Councils.
Prime Minister of India
The Prime Minister of India (IAST: "Bhārat ke Pradhānamantrī") is the leader of the executive of the Government of India. The Prime Minister is the chief adviser to the President of India and the head of the Union Council of Ministers. They can be a member of either of the two houses of the Parliament of India—the Lok Sabha (House of the People) and the Rajya Sabha (Council of the States)—but must be a member of the political party or coalition that holds a majority in the Lok Sabha.
The Prime Minister is the senior-most member of cabinet in the executive of government in a parliamentary system. The prime minister selects and can dismiss members of the cabinet; allocates posts to members within the government; and is the presiding member and chairperson of the cabinet.
The Union Cabinet headed by the prime minister is appointed by the president of India to assist the latter in the administration of the affairs of the executive. The Union cabinet is collectively responsible to the Lok Sabha as per the Constitution of India. The prime minister has to enjoy the confidence of a majority in the Lok Sabha and shall resign if they are unable to prove a majority when instructed by the president.
India follows a parliamentary system in which the prime minister is the presiding head of the government and chief of the executive of the government. In such systems, the head of state, or, the head of state's official representative (i.e., the monarch, president, or governor-general) usually holds a purely ceremonial position and acts—on most matters—only on the advice of the prime minister.
The prime minister, if not already one, must become a member of parliament within six months of assuming office. A prime minister is expected to work with other central ministers to ensure the passage of bills by the parliament.
Since 1947, there have been 14 different prime ministers. The first few decades after 1947 saw the Indian National Congress' (INC) almost complete domination over the political map of India. India's first prime minister—Jawaharlal Nehru—took oath on 15 August 1947. Nehru went on to serve as prime minister for 17 consecutive years, winning four general elections in the process. His tenure ended in May 1964, on his death. After the death of Nehru, Lal Bahadur Shastri—a former home minister and a leader of the Congress party—ascended to the position of Prime Minister. Shastri's tenure saw the Indo-Pakistani War of 1965. Shastri subsequently died of a reported heart attack in Tashkent, after signing the Tashkent Declaration.
After Shastri, Indira Gandhi—Nehru's daughter—was elected as the country's first woman prime minister. Indira's first term in office lasted 11 years, during which she took steps such as the nationalisation of banks and the abolition of the allowances and political privileges received by members of the royal families of the erstwhile princely states of British India. In addition, events such as the Indo-Pakistani War of 1971; the establishment of a sovereign Bangladesh; the accession of Sikkim to India, through a referendum in 1975; and India's first nuclear test at Pokhran occurred during Indira's first term. In 1975, President Fakhruddin Ali Ahmed—on Indira's advice—imposed a state of emergency, bestowing the government with the power to rule by decree; the period is known for human rights violations.
After widespread protests, the emergency was lifted in 1977, and a general election was held. All of the opposition parties fought together against the Congress, under the umbrella of the Janata Party, in the general election of 1977, and were successful in defeating the Congress. Subsequently, Morarji Desai—a former deputy prime minister—became the first non-Congress prime minister of the country. The government of Prime Minister Desai was composed of groups with opposing ideologies, in which unity and co-ordination were difficult to maintain. Ultimately, after two and a half years as PM, on 28 July 1979, Morarji tendered his resignation to the president and his government fell. Thereafter, Charan Singh—a deputy prime minister in Desai's cabinet—with outside, conditional support from the Congress, proved a majority in the Lok Sabha and took oath as prime minister. However, the Congress pulled its support shortly after, and Singh had to resign; his tenure of five months is the shortest in the history of the office.
In 1980, after a three-year absence, the Congress returned to power with an absolute majority. Indira Gandhi was elected prime minister a second time. During her second tenure, Operation Blue Star—an Indian Army operation inside the Golden Temple, the most sacred site in Sikhism—was conducted, resulting in reportedly thousands of deaths. Subsequently, on 31 October 1984, Gandhi was shot dead by Satwant Singh and Beant Singh—two of her bodyguards—in the garden of her residence at 1, Safdarjung Road, New Delhi.
After Indira, Rajiv—her eldest son and 40 years old at the time—was sworn in on the evening of 31 October 1984, becoming the youngest person ever to hold the office of prime minister. Rajiv immediately called for a general election, in which the Congress secured an absolute majority, winning 401 of 552 seats in the Lok Sabha, the maximum received by any party in the history of India. Vishwanath Pratap Singh—first finance minister and later defence minister in Gandhi's cabinet—uncovered irregularities, in what came to be known as the Bofors scandal, during his stint at the Ministry of Defence; Singh was subsequently expelled from the Congress, formed the Janata Dal and—with the help of several anti-Congress parties—also formed the National Front, a coalition of many political parties.
In the general election of 1989, the National Front—with outside support from the Bharatiya Janata Party (BJP) and the Left Front—came to power. V. P. Singh was elected prime minister. During a tenure of less than a year, Singh and his government accepted the Mandal Commission's recommendations. Singh's tenure came to an end after he ordered the arrest of BJP member Lal Krishna Advani; as a result, the BJP withdrew its outside support to the government, and V. P. Singh lost the subsequent vote-of-no-confidence 146–320 and had to resign. After V. P. Singh's resignation, Chandra Shekhar, along with 64 members of parliament (MPs), floated the Samajwadi Janata Party (Rashtriya), and proved a majority in the Lok Sabha with support from the Congress. But Shekhar's premiership did not last long: the Congress withdrew its support, Shekhar's government fell as a result, and new elections were announced.
In the general election of 1991, the Congress—under the leadership of P. V. Narasimha Rao—formed a minority government; Rao became the first PM of South Indian origin. After the dissolution of the Soviet Union, India was on the brink of bankruptcy, so Rao took steps to liberalise the economy and appointed Manmohan Singh—an economist and a former governor of the Reserve Bank of India—as finance minister. Rao and Singh then took various steps to liberalise the economy, which resulted in unprecedented economic growth in India. His premiership, however, also witnessed the demolition of the Babri Masjid, which resulted in the death of about 2,000 people. Rao nonetheless completed five continuous years in office, becoming the first prime minister outside the Nehru–Gandhi family to do so.
After the end of Rao's tenure in May 1996, the nation saw four prime ministers in a span of three years: two tenures of Atal Bihari Vajpayee; one tenure of H. D. Deve Gowda from 1 June 1996 to 21 April 1997; and one tenure of I. K. Gujral from 21 April 1997 to 19 March 1998. The government of Prime Minister Vajpayee—elected in 1998—took some concrete steps. In May 1998—after a month in power—the government announced the conduct of five underground nuclear explosions in Pokhran. In response to these tests, many western countries, including the United States, imposed economic sanctions on India; but, owing to the support received from Russia, France, the Gulf countries and some other nations, the sanctions were largely not considered successful. A few months later, in response to the Indian nuclear tests, Pakistan also conducted nuclear tests. Given the deteriorating situation between the two countries, the governments tried to improve bilateral relations. In February 1999, India and Pakistan signed the Lahore Declaration, in which the two countries announced their intention to annul mutual enmity, increase trade and use their nuclear capabilities for peaceful purposes. In May 1999, the All India Anna Dravida Munnetra Kazhagam withdrew from the ruling National Democratic Alliance (NDA) coalition; Vajpayee's government hence became a caretaker one after losing a motion-of-no-confidence 269–270, and this coincided with the Kargil War with Pakistan. In the subsequent October 1999 general election, the BJP-led NDA and its affiliated parties secured a comfortable majority in the Lok Sabha, winning 299 of 543 seats in the lower house.
Vajpayee continued the process of economic liberalisation during his tenure, resulting in economic growth. The government also took several steps to improve the country's infrastructure and basic facilities, such as the National Highways Development Project (NHDP) and the "Pradhan Mantri Gram Sadak Yojana" (PMGSY; Prime Minister Rural Road Scheme) for the development of roads. During his tenure, however, the 2002 communal riots in the state of Gujarat took place, resulting in about 2,000 deaths. Vajpayee's tenure as prime minister came to an end in May 2004, making him the first non-Congress PM to complete a full five-year tenure.
In the 2004 election, the Congress emerged as the largest party in a hung parliament; Congress-led United Progressive Alliance (UPA)—with outside support from the Left Front, the Samajwadi Party (SP) and Bahujan Samaj Party (BSP) among others—proved a majority in the Lok Sabha, and Manmohan Singh was elected prime minister; becoming the first Sikh prime minister of the nation. During his tenure, the country retained the economic momentum gained during Prime Minister Vajpayee's tenure. Apart from this, the government succeeded in getting the "National Rural Employment Guarantee Act, 2005", and the "Right to Information Act, 2005" passed in the parliament. Further, the government strengthened India's relations with nations like Afghanistan; Russia; the Gulf states; and the United States, culminating with the ratification of India–United States Civil Nuclear Agreement near the end of Singh's first term. At the same time, the November 2008 Mumbai terrorist attacks also happened during Singh's first term in office. In the general election of 2009, the mandate of UPA increased. Prime Minister Singh's second term, however, was surrounded by accusations of high-level scandals and corruption. Singh resigned as prime minister on 17 May 2014, after Congress' defeat in the 2014 general election.
In the general election of 2014, the BJP-led NDA got an absolute majority, winning 336 out of 543 Lok Sabha seats; the BJP itself became the first party since 1984 to get a majority in the Lok Sabha. Narendra Modi—the Chief Minister of Gujarat—was elected prime minister, becoming the first prime minister to have been born in an independent India.
Narendra Modi was re-elected as prime minister in 2019 with a bigger mandate than that of 2014: the BJP-led NDA won 354 seats, of which the BJP itself secured 303.
The Constitution envisions a scheme of affairs in which the president of India is the head of state in terms of Article 53, with the office of the prime minister heading the Council of Ministers that assists and advises the president in the discharge of his/her constitutional functions. Articles 53 and 75 provide for this arrangement.
As in most parliamentary democracies, the president's duties are mostly ceremonial as long as the constitution and the rule of law are obeyed by the cabinet and the legislature. The prime minister of India is the head of government and bears responsibility for executive power. The president's constitutional duty is to preserve, protect and defend the Constitution and the law. In the constitution of India, the prime minister is mentioned in only four of its articles (articles 74, 75, 78 and 366); however, he/she plays a crucial role in the government of India by enjoying a majority in the Lok Sabha.
According to Article 84 of the Constitution of India, which sets the principal qualifications for a member of Parliament, and Article 75 of the Constitution of India, which sets the qualifications for a minister in the Union Council of Ministers, and given that the position of prime minister has been described as "primus inter pares" (the first among equals), a prime minister must:
If, however, a candidate is elected as the prime minister, they must vacate their post at any private or government company and may take up the post again only on completion of their term.
The prime minister is required to make and subscribe, in the presence of the President of India, the oath of office and secrecy before entering office, as per the Third Schedule of the Constitution of India.
Oath of office:
Oath of secrecy:
The prime minister serves on 'the pleasure of the president', hence, a prime minister may remain in office indefinitely, so long as the president has confidence in him/her. However, a prime minister must have the confidence of Lok Sabha, the lower house of the Parliament of India.
However, the term of a prime minister can end before the end of a Lok Sabha's term if a simple majority of its members no longer have confidence in him/her; this is called a vote-of-no-confidence. Three prime ministers, I. K. Gujral, H. D. Deve Gowda and Atal Bihari Vajpayee, have been voted out of office this way. In addition, a prime minister can also resign from office; Morarji Desai was the first prime minister to resign while in office.
Upon ceasing to possess the requisite qualifications to be a member of Parliament subject to the "Representation of the People Act, 1951".
The prime minister leads the functioning and exercise of authority of the government of India. The president of India—subject to eligibility—invites a person commanding the support of a majority of members of the Lok Sabha to form the government of India—also known as the central government or Union government—at the national level and exercise its powers. In practice the prime minister nominates the members of their council of ministers to the president. They also decide upon a core group of ministers (known as the cabinet) to be in charge of the important functions and ministries of the government of India.
The prime minister is responsible for aiding and advising the president in the distribution of the work of the government to various ministries and offices, in terms of the "Government of India (Allocation of Business) Rules, 1961". The co-ordinating work is generally allocated to the Cabinet Secretariat. While the work of the government is generally divided among various ministries, the prime minister may retain certain portfolios if they are not allocated to any member of the cabinet.
The prime minister—in consultation with the cabinet—schedules and attends the sessions of the houses of parliament and is required to answer questions from Members of Parliament on the portfolios held in the capacity of prime minister of India.
Some specific ministries/departments are not allocated to anyone in the cabinet but to the prime minister themself. The prime minister is usually in charge or head of:
The prime minister represents the country in various delegations, high level meetings and international organisations that require the attendance of the highest government office, and also addresses to the nation on various issues of national or other importance.
Under the constitution, official communication between the union cabinet and the president passes through the prime minister. Otherwise, the constitution recognises the prime minister simply as a member of the union cabinet.
The prime minister recommends to the president—among others—names for the appointment of:
As the chairperson of the Appointments Committee of the Cabinet (ACC), the prime minister—on the non-binding advice of the Senior Selection Board (SSB), led by the Cabinet Secretary of India—decides the postings of top civil servants, such as secretaries, additional secretaries and joint secretaries in the government of India. Further, in the same capacity, the PM decides the assignments of top military personnel such as the Chief of the Army Staff, the Chief of the Air Staff, the Chief of the Naval Staff and commanders of operational and training commands. In addition, the ACC also decides the postings of Indian Police Service officers—the All India Service for policing, which staffs most of the higher-level law enforcement positions at federal and state level—in the government of India.
Also, as the Minister of Personnel, Public Grievances and Pensions, the PM also exercises control over the Indian Administrative Service (IAS), the country's premier civil service, which staffs most of the senior civil service positions; the Public Enterprises Selection Board (PESB); and the Central Bureau of Investigation (CBI), except for the selection of its director, who is chosen by a committee of: (a) the prime minister, as chairperson; (b) the leader of the opposition in Lok Sabha; and (c) the chief justice.
Unlike in most other countries, the prime minister does not have much influence over the selection of judges; that is done by a collegium of judges consisting of the Chief Justice of India, the four senior-most judges of the Supreme Court of India and the chief justice—or the senior-most judge—of the concerned state high court. The executive as a whole, however, has the right to send a recommended name back to the collegium for reconsideration; this is not a full veto, and the collegium can still put forward the rejected name again.
The prime minister acts as the leader of the house of the chamber of parliament—generally the Lok Sabha—he/she belongs to. In this role, the prime minister is tasked with representing the executive in the legislature, is expected to announce important legislation, and is further expected to respond to the opposition's concerns. Article 85 of the Indian constitution confers on the president the power to convene and end extraordinary sessions of the parliament; this power, however, is exercised only on the advice of the prime minister and his/her council, so, in practice, the prime minister does exercise some control over the affairs of the parliament.
Article 75 of the Constitution of India confers on the parliament the power to decide the remuneration and other benefits of the prime minister and other ministers, which are revised from time to time. The original remuneration for the prime minister and other ministers was specified in Part B of the Second Schedule of the constitution, which was later removed by an amendment.
In 2010, the prime minister's office reported that the prime minister does not receive a formal salary, but is only entitled to monthly allowances. That same year "The Economist" reported that, on a purchasing power parity basis, the prime minister received an equivalent of $4,106 per year. As a percentage of the country's per-capita GDP (gross domestic product), this is the lowest of all countries "The Economist" surveyed.
The 7, Lok Kalyan Marg—previously called the 7, Race Course Road—in New Delhi, currently serves as the official place of residence for the prime minister of India.
The first residence of an Indian prime minister was Teen Murti Bhavan, where Jawaharlal Nehru resided. His successor Lal Bahadur Shastri chose 10, Janpath as an official residence. Indira Gandhi resided at 1, Safdarjung Road. Rajiv Gandhi became the first prime minister to use 7, Race Course Road as his residence, which was retained by his successors.
For ground travel, the prime minister uses a highly modified, armoured version of a Range Rover. The prime minister's motorcade comprises a fleet of vehicles, the core of which consists of at least three armoured BMW 7 Series sedans, two armoured Range Rovers, at least 8-10 BMW X5s, six Toyota Fortuners/Land Cruisers and at least two Mercedes-Benz Sprinter ambulances.
For air travel, Boeing 777-300ERs—designated by the call sign Air India One (AI-1 or AIC001) and maintained by the Indian Air Force—are used. Apart from aircraft, several helicopters, such as the Mi-8, are used to carry the prime minister over short distances. These aircraft and helicopters are operated by the Indian Air Force.
The Special Protection Group (SPG) is charged with protecting the sitting prime minister and his/her family.
The Prime Minister's Office (PMO) acts as the principal workplace of the prime minister. The office is located at South Block, is a 20-room complex, and has the Cabinet Secretariat, the Ministry of Defence and the Ministry of External Affairs adjacent to it. The office is headed by the principal secretary to the prime minister of India, generally a former civil servant, mostly from the Indian Administrative Service (IAS) and rarely from the Indian Foreign Service (IFS).
The Prime Minister's spouse sometimes accompanies him on foreign visits. The Prime Minister's family is also assigned protection by the Special Protection Group.
Former prime ministers are entitled to a bungalow and to the same facilities as a serving cabinet minister, including a fourteen-member secretarial staff for a period of five years; reimbursement of office expenses; six domestic executive-class air tickets each year; and security cover from the Special Protection Group.
In addition, former prime ministers rank seventh on the Indian order of precedence, equivalent to chief ministers of states (within their respective states) and cabinet ministers. As a former member of the parliament, the prime minister receives a minimum pension of per month, plus—if he/she served as an MP for more than five years— for every year served.
Some prime ministers have had significant careers after their tenure: H. D. Deve Gowda remained a Member of the Lok Sabha until 2019, and Manmohan Singh continues to be a Member of the Rajya Sabha.
Prime ministers are accorded a state funeral. It is customary for states and union territories to declare a day of mourning on the occasion of death of any former Prime Minister.
Several institutions are named after prime ministers of India. The birth-date of Jawaharlal Nehru is celebrated as children's day in India. Prime Ministers are also commemorated on postage stamps of several countries.
The prime minister presides over various funds.
The National Defence Fund (NDF) was set up by the Indian government in 1962, in the aftermath of the 1962 Sino-Indian War. The prime minister acts as chairperson of the fund's executive committee, while the ministers of defence, finance and home act as members of the executive committee; the finance minister also acts as the treasurer of the committee. The secretary of the fund's executive committee is a joint secretary in the prime minister's office dealing with the subject of the NDF. The fund—according to its website—is "entirely dependent on voluntary contributions from the public and does not get any budgetary support". Donations to the fund are 100% tax-deductible under section 80G of the "Income Tax Act, 1961".
The Prime Minister's National Relief Fund (PMNRF) was set up by the first prime minister of India—Jawaharlal Nehru—in 1948, to assist displaced people from Pakistan. The fund, now, is primarily used to assist the families of those who are killed during natural disasters such as earthquakes, cyclones and flood and secondarily to reimburse medical expenses of people with chronic and deadly diseases. Donations to the PMNRF are 100% tax-deductible under section 80G of the "Income Tax Act, 1961".
The post of Deputy Prime Minister of India is not technically a constitutional post, nor is there any mention of it in an Act of the parliament. But historically, on various occasions, different governments have designated one of their senior ministers as the deputy prime minister. There is neither a constitutional requirement to fill the post, nor does the post provide any kind of special powers. Typically, senior cabinet ministers such as the finance minister or the home minister are appointed deputy prime minister. The post is considered the senior-most in the cabinet after the prime minister, and its holder represents the government in the prime minister's absence. Generally, deputy prime ministers have been appointed to strengthen coalition governments. The first holder of this post was Vallabhbhai Patel, who was also the home minister in Jawaharlal Nehru's cabinet.
Paraphyly
In taxonomy, a group is paraphyletic if it consists of the group's last common ancestor and all descendants of that ancestor excluding a few—typically only one or two—monophyletic subgroups. The group is said to be paraphyletic "with respect to" the excluded subgroups. The arrangement of the members of a paraphyletic group is called a paraphyly. The term is commonly used in phylogenetics (a subfield of biology) and in linguistics.
The term was coined to apply to well-known taxa like Reptilia (reptiles) which, as commonly named and traditionally defined, is paraphyletic with respect to mammals and birds. Reptilia contains the last common ancestor of reptiles and all descendants of that ancestor, including all extant reptiles as well as the extinct synapsids, except for mammals and birds. Other commonly recognized paraphyletic groups include fish, monkeys, and lizards.
If many subgroups are missing from the named group, it is said to be polyparaphyletic. A paraphyletic group cannot be a clade, or monophyletic group, which is any group of species that includes a common ancestor and "all" of its descendants. Formally, a paraphyletic group is the relative complement of one or more subclades within a clade: removing one or more subclades leaves a paraphyletic group.
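The formal definition above—a paraphyletic group as a clade minus one or more subclades—can be made concrete with a short sketch. The toy tree, taxon names, and function names below are illustrative assumptions, not taken from the article; the sketch only distinguishes monophyletic from non-monophyletic groups of leaf taxa, reporting which taxa the smallest containing clade excludes (telling paraphyly apart from polyphyly would additionally require checking ancestral character states).

```python
# Illustrative sketch: classify a named group of leaf taxa against a
# simplified amniote tree. Each internal node maps to its children;
# leaves map to an empty list. Names are assumptions for illustration.
TREE = {
    "Amniota": ["Synapsida", "Sauropsida"],
    "Synapsida": ["Mammalia", "Pelycosauria"],
    "Sauropsida": ["Lepidosauria", "Archosauria"],
    "Archosauria": ["Crocodylia", "Aves"],
    "Mammalia": [], "Pelycosauria": [], "Lepidosauria": [],
    "Crocodylia": [], "Aves": [],
}

def leaves(node):
    """All leaf taxa descended from (and including) a node."""
    kids = TREE[node]
    if not kids:
        return {node}
    return set().union(*(leaves(k) for k in kids))

def classify(group):
    """Return ('monophyletic' | 'paraphyletic', excluded leaf taxa)."""
    # The smallest clade containing the whole group: the clade of its
    # most recent common ancestor.
    candidates = [n for n in TREE if group <= leaves(n)]
    mrca = min(candidates, key=lambda n: len(leaves(n)))
    missing = leaves(mrca) - group
    if not missing:
        return "monophyletic", missing
    # The group is the MRCA's clade minus the excluded taxa: the
    # "relative complement" described in the text.
    return "paraphyletic", missing

kind, excluded = classify({"Lepidosauria", "Crocodylia"})
print(kind, excluded)  # paraphyletic {'Aves'}
```

Including Aves in the group makes it a complete clade again: `classify({"Lepidosauria", "Crocodylia", "Aves"})` reports it as monophyletic with nothing excluded.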
The term "paraphyly", or "paraphyletic", derives from the two Ancient Greek words παρά (pará), meaning "beside, near", and φῦλον (phûlon), meaning "genus, species", and refers to the situation in which one or several monophyletic subgroups of organisms (e.g., genera, species) are "left apart" from all other descendants of a unique common ancestor.
Conversely, the term "monophyly", or "monophyletic", builds on the Ancient Greek prefix μόνος (mónos), meaning "alone, only, unique", and refers to the fact that a monophyletic group includes organisms consisting of "all" the descendants of a "unique" common ancestor.
By comparison, the term "polyphyly", or "polyphyletic", uses the Ancient Greek prefix πολύς (polús), meaning "many, a lot of", and refers to the fact that a polyphyletic group includes organisms arising from "multiple" ancestral sources.
Groups that include all the descendants of a common ancestor are said to be "monophyletic". A paraphyletic group is a monophyletic group from which one or more subsidiary clades (monophyletic groups) are excluded to form a separate group. Ereshefsky has argued that paraphyletic taxa are the result of anagenesis in the excluded group or groups.
A group whose identifying features evolved convergently in two or more lineages is "polyphyletic" (Greek πολύς ["polys"], "many"). More broadly, any taxon that is not paraphyletic or monophyletic can be called polyphyletic.
These terms were developed during the debates of the 1960s and 1970s accompanying the rise of cladistics.
Paraphyletic groupings are considered problematic by many taxonomists, as it is not possible to talk precisely about their phylogenetic relationships, their characteristic traits and literal extinction. Related terms that may be encountered include stem group, chronospecies, budding cladogenesis, anagenesis, and 'grade' groupings. Paraphyletic groups are often relics of previous erroneous assessments of phylogenetic relationships, or of classifications from before the rise of cladistics.
The prokaryotes (single-celled life forms without cell nuclei) are a paraphyletic grouping, because they exclude the eukaryotes, a descendant group. Bacteria and archaea are prokaryotes, but the archaea and eukaryotes share a common ancestor that is not ancestral to the bacteria. The prokaryote/eukaryote distinction was proposed by Édouard Chatton in 1937 and was generally accepted after being adopted by Roger Stanier and C.B. van Niel in 1962. The botanical code (the ICBN, now the ICN) abandoned consideration of bacterial nomenclature in 1975; currently, prokaryotic nomenclature is regulated under the ICNB with a starting date of 1 January 1980 (in contrast to a 1753 start date under the ICBN/ICN).
Among plants, dicotyledons (in the traditional sense) are paraphyletic because the group excludes monocotyledons. "Dicotyledon" has not been used as a botanic classification for decades, but is allowed as a synonym of Magnoliopsida. Phylogenetic analysis indicates that the monocots are a development from a dicot ancestor. Excluding monocots from the dicots makes the latter a paraphyletic group.
Among animals, several familiar groups are not, in fact, clades. The order Artiodactyla (even-toed ungulates) is paraphyletic because it excludes Cetaceans (whales, dolphins, etc.). In the ICZN Code, the two taxa are orders of equal rank. Molecular studies, however, have shown that the Cetacea descend from artiodactyl ancestors, although the precise phylogeny within the order remains uncertain. Without the Cetacean descendants the Artiodactyls must be paraphyletic.
The class Reptilia "as traditionally defined" is paraphyletic because it excludes birds (class Aves) and mammals. In the ICZN Code, the three taxa are classes of equal rank. However, mammals arose from the synapsids (which were once described as "mammal-like reptiles") and birds are descended from the dinosaurs (a group of Diapsida), both of which are reptiles. Alternatively, reptiles are paraphyletic because they gave rise to (only) birds. Birds and reptiles together make up the Sauropsida.
Osteichthyes, bony fish, are paraphyletic when they include only Actinopterygii (ray-finned fish) and Sarcopterygii (lungfish, etc.), excluding tetrapods; more recently, Osteichthyes is treated as a clade, including the tetrapods.
The wasps are paraphyletic, consisting of the narrow-waisted Apocrita without the ants and bees. The sawflies (Symphyta) are similarly paraphyletic, forming all of the Hymenoptera except for the Apocrita, a clade deep within the sawfly tree.
Crustaceans are not a clade because the Hexapoda (insects and their relatives) are excluded. The modern clade that spans all of them is the Tetraconata.
Species have a special status in systematics as being an observable feature of nature itself and as the basic unit of classification. The phylogenetic species concept requires species to be monophyletic, but paraphyletic species are common in nature. Paraphyly is common in speciation, whereby a mother species (a paraspecies) gives rise to a daughter species without itself becoming extinct. Research indicates as many as 20 percent of all animal species and between 20 and 50 percent of plant species are paraphyletic. Accounting for these facts, some taxonomists argue that paraphyly is a trait of nature that should be acknowledged at higher taxonomic levels.
When the appearance of significant traits has led a subclade on an evolutionary path very divergent from that of a more inclusive clade, it often makes sense to study the paraphyletic group that remains without considering the larger clade. For example, the Neogene evolution of the Artiodactyla (even-toed ungulates, like deer) has taken place in an environment so different from that of the Cetacea (whales, dolphins, and porpoises) that the Artiodactyla are often studied in isolation even though the cetaceans are a descendant group. The prokaryote group is another example; it is paraphyletic because it excludes many of its descendant organisms (the eukaryotes), but it is very useful because it has a clearly defined and significant distinction (absence of a cell nucleus, a plesiomorphy) from its excluded descendants.
Also, paraphyletic groups are involved in evolutionary transitions, the development of the first tetrapods from their ancestors for example. Any name given to these ancestors to distinguish them from tetrapods—"fish", for example—necessarily picks out a paraphyletic group, because the descendant tetrapods are not included.
The term "evolutionary grade" is sometimes used for paraphyletic groups.
Moreover, the concepts of monophyly, paraphyly, and polyphyly have been used in deducing key genes for barcoding of diverse group of species.
Viviparity, the production of offspring without the laying of a fertilized egg, developed independently in the lineages that led to humans ("Homo sapiens") and southern water skinks ("Eulamprus tympanum", a kind of lizard). Put another way, at least one of the lineages that led to these species from their last common ancestor contains non-viviparous animals, the pelycosaurs ancestral to mammals; vivipary appeared subsequently in the mammal lineage.
Independently-developed traits like these cannot be used to distinguish paraphyletic groups because paraphyly requires the excluded groups to be monophyletic. Pelycosaurs were descended from the last common ancestor of skinks and humans, so vivipary could be paraphyletic only if the pelycosaurs were part of an excluded monophyletic group. Because this group is monophyletic, it contains all descendants of the pelycosaurs; because it is excluded, it contains no viviparous animals. This does not work, because humans are among these descendants. Vivipary in a group that includes humans and skinks cannot be paraphyletic.
The following list recapitulates a number of paraphyletic groups proposed in the literature, and provides the corresponding monophyletic taxa.
The concept of paraphyly has also been applied to historical linguistics, where the methods of cladistics have found some utility in comparing languages. For instance, the Formosan languages form a paraphyletic group of the Austronesian languages because they consist of the nine branches of the Austronesian family that are not Malayo-Polynesian and are restricted to the island of Taiwan. | https://en.wikipedia.org/wiki?curid=24454 |