Screaming Lord Sutch
David Edward Sutch (10 November 1940 – 16 June 1999), also known as 3rd Earl of Harrow, or Screaming Lord Sutch, was an English musician and serial parliamentary candidate. He was the founder of the Official Monster Raving Loony Party and served as its leader from 1983 to 1999, during which time he stood in numerous parliamentary elections. He holds the record for contesting and losing more than 40 elections between 1963 and 1997. As a singer he variously worked with Keith Moon, Jeff Beck, Jimmy Page, Ritchie Blackmore, Charlie Watts and Nicky Hopkins.
Sutch was born at New End Hospital, Hampstead, London. In the 1960s, inspired by Screamin' Jay Hawkins, he changed his stage name to "Screaming Lord Sutch, 3rd Earl of Harrow", despite having no connection with the peerage. His legal name remained David Edward Sutch.
After his career as an early 1960s rock and roll attraction, it became customary for the UK press to refer to him as "Screaming Lord Sutch", or simply "Lord Sutch". Early works included recordings produced by audio pioneer Joe Meek.
During the 1960s Screaming Lord Sutch was known for his horror-themed stage show, dressing as Jack the Ripper, pre-dating the shock rock antics of Arthur Brown and Alice Cooper. Accompanied by his band, the Savages, he would begin by emerging from a black coffin (he was once trapped inside it, an incident parodied in the film "Slade in Flame"). Other props included knives and daggers, skulls and "bodies". Sutch booked themed tours, such as 'Sutch and the Roman Empire', in which Sutch and the band members dressed up as Roman soldiers.
Despite a self-confessed lack of vocal talent, he released horror-themed singles during the early to mid 1960s, the most popular being "Jack the Ripper", covered live and on record by garage rock bands including the White Stripes, the Gruesomes, the Black Lips and the Horrors, the last of these for their debut album.
In 1963 Sutch and his manager, Reginald Calvert, took over Shivering Sands Army Fort, a Maunsell Fort off Southend, and in 1964 started Radio Sutch, intending to compete with other pirate radio stations such as Radio Caroline. Broadcasts consisted of music and Mandy Rice-Davies reading "Lady Chatterley's Lover". Sutch tired of the station, and sold it to Calvert, after which it was renamed Radio City, and lasted until 1967. In 1966 Calvert was shot dead by Oliver Smedley over a financial dispute. Smedley was acquitted on grounds of self-defence. About this time Ritchie Blackmore left the band. Roger Warwick left to set up an R&B big band for Freddie Mack.
Sutch's album "Lord Sutch and Heavy Friends" was named in a 1998 BBC poll as the worst album of all time, a status it also held in Colin Larkin's book "The Top 1000 Albums of All Time", despite the fact that Jimmy Page, John Bonham, Jeff Beck, Noel Redding and Nicky Hopkins performed on it and helped write it. For fans of the musicians involved, however, the album is considered well worth hearing; for the recently formed New Yardbirds/Led Zeppelin in particular, it offers a first take of the rolling funk-blues riffs and grooves that would define the classic Led Zeppelin sound.
For his follow-up, "Hands of Jack the Ripper", Sutch assembled British rock celebrities for a concert at the Carshalton Park Rock 'n' Roll Festival. The show was recorded, though only Sutch knew this, and its release came as a surprise to the musicians. Musicians on the record included Ritchie Blackmore (guitar); Matthew Fisher (keyboard); Carlo Little (drums); Keith Moon (drums); Noel Redding (bass) and Nick Simper (bass).
In 2017 his song "Flashing Lights" was featured in "Logan Lucky", directed by Steven Soderbergh.
In the 1960s Sutch stood in parliamentary elections, often as representative of the National Teenage Party. His first was in 1963, when he contested the by-election in Stratford-upon-Avon caused by the resignation of John Profumo. He gained 208 votes. His next was at the 1966 general election when he stood in Harold Wilson's Huyton constituency. Here he received 585 votes.
He founded the Official Monster Raving Loony Party in 1983 and fought the Bermondsey by-election. In his career he contested over 40 elections. He was recognisable at election counts by his flamboyant clothes and top hat. In 1968 he officially added "lord" to his name by deed poll. In the mid 1980s, the deposit paid by candidates was raised from £150 to £500. This did little to deter Sutch, who increased the number of concerts he performed to pay for campaigns. He achieved his highest poll and vote share at Rotherham in 1994 with 1,114 votes and a 4.2 per cent vote share.
At the Bootle by-election in May 1990, he secured more votes than the candidate of the Continuing Social Democratic Party (SDP), led by former Foreign Secretary David Owen. Within days the SDP dissolved itself. In 1993, when the British National Party gained its first local councillor, Derek Beackon, Sutch pointed out that the Official Monster Raving Loony Party already had six.
He appeared as himself in the first episode of the ITV comedy "The New Statesman", coming second, ahead of the Labour and SDP candidates, in the 1987 election which saw Alan B'Stard elected to Parliament.
Adverts in the 1990s for Heineken Pilsener boasted that "Only Heineken can do this". One had Sutch at 10 Downing Street after becoming Prime Minister.
In 1999 Sutch starred in a Coco Pops advert as a returning officer announcing the results of its renaming competition.
Sutch was friends with, and at one time lived at the house of, Cynthia Payne.
He had a history of depression, and killed himself by hanging on 16 June 1999, at his mother's house. At the inquest, his fiancée Yvonne Elwood said he had "manic depression".
Sutch is buried beside his mother, who died on 30 April 1997, in the cemetery in Pinner, Middlesex. He was survived by a son, Tristan Lord Gwynne Sutch, born in 1975 to American model Thann Rendessy.
In 1991 his autobiography, "Life as Sutch: The Official Autobiography of a Raving Loony" (written with Peter Chippindale), was published. In 2005 Graham Sharpe, who had known him since the late 1960s, wrote the first biography, "The Man Who Was Screaming Lord Sutch".
Sutch released records from 1961 onwards.
Persians
The Persians are an Iranian ethnic group that make up over half the population of Iran. They share a common cultural system and are native speakers of the Persian language, as well as languages closely related to Persian.
The ancient Persians were originally an ancient Iranian people who migrated to the region of Persis, corresponding to the modern province of Fars in southwestern Iran, by the ninth century BC. Together with their compatriot allies, they established and ruled some of the world's most powerful empires, well-recognized for their massive cultural, political, and social influence covering much of the territory and population of the ancient world. Throughout history, Persians have contributed greatly to art and science. Persian literature is one of the world's most prominent literary traditions.
In contemporary terminology, people of Persian heritage native specifically to present-day Afghanistan, Tajikistan, and Uzbekistan are referred to as "Tajiks", whereas those in the Caucasus (primarily in the present-day Republic of Azerbaijan and the Russian federal subject of Dagestan), albeit heavily assimilated, are referred to as "Tats". However, historically, the terms "Tajik" and "Tat" were used as synonymous and interchangeable with "Persian". Many influential Persian figures hailed from outside Iran's present-day borders to the northeast in Central Asia and Afghanistan and to a lesser extent to the northwest in the Caucasus proper. In historical contexts, especially in English, "Persians" may be defined more loosely to cover all subjects of the ancient Persian polities, regardless of ethnic background.
The term "Persian", meaning "from Persia", derives from Latin , itself deriving from Greek (), a Hellenized form of Old Persian (), which evolves into () in modern Persian. In the Bible, particularly in the books of Daniel, Esther, Ezra, and Nehemya, it is given as ().
A Greek folk etymology connected the name to Perseus, a legendary character in Greek mythology. Herodotus recounts this story, in which Perses, a son of Perseus, gives the Persians their name. Apparently, the Persians themselves knew the story, as Xerxes I tried to use it to suborn the Argives during his invasion of Greece, but ultimately failed to do so.
Although Persis (Persia proper) was only one of the provinces of ancient Iran, varieties of this term (e.g., "Persia") were adopted through Greek sources and used as an exonym for all of the Persian Empire for many years. Thus, especially in the Western world, the names "Persia" and "Persian" came to refer to all of Iran and its subjects.
Some medieval and early modern Islamic sources also used cognates of the term "Persian" to refer to various Iranian peoples and languages, including the speakers of Khwarazmian, Mazanderani, and Old Azeri. 10th-century Iraqi historian Al-Masudi refers to "Pahlavi", "Dari", and "Azari" as dialects of the Persian language. In 1333, medieval Moroccan traveler and scholar Ibn Battuta referred to the people of Kabul as a specific sub-tribe of the Persians. Lady Mary (Leonora Woulfe) Sheil, in her observation of Iran during the Qajar era, states that the Kurds and the Leks would consider themselves as belonging to the race of the "old Persians".
On 21 March 1935, former king of Iran Reza Shah of the Pahlavi dynasty issued a decree asking the international community to use the term "Iran", the native name of the country, in formal correspondence. However, the term "Persian" is still historically used to designate the predominant population of the Iranian peoples living in the Iranian cultural continent.
Persia is first attested in Assyrian sources from the third millennium BC in the Old Assyrian form "Parahše", designating a region belonging to the Sumerians. The name of this region was adopted by a nomadic ancient Iranian people who migrated to the region in the west and southwest of Lake Urmia, eventually becoming known as "the Persians". The ninth-century BC Neo-Assyrian inscription of the Black Obelisk of Shalmaneser III, found at Nimrud, gives it in the Late Assyrian forms "Parsua" and "Parsumaš", as a region and a people located in the Zagros Mountains; the latter likely migrated southward and transferred the name of the region with them to what would become Persis (Persia proper, i.e., modern-day Fars), and this is considered to be the earliest attestation of the ancient Persian people.
The ancient Persians were initially dominated by the Assyrians for much of the first three centuries after arriving in the region. However, they played a major role in the downfall of the Neo-Assyrian Empire. The Medes, another group of ancient Iranian people, unified the region under an empire centered in Media, which had become the region's leading cultural and political power by 612 BC. Meanwhile, under the dynasty of the Achaemenids, the Persians formed a vassal state to the central Median power. In 552 BC, the Achaemenid Persians revolted against the Median monarchy, culminating in the victory of Cyrus the Great over the Median throne in 550 BC. The Persians then spread their influence to the rest of what is considered the Iranian Plateau, and assimilated with the non-Iranian indigenous groups of the region, including the Elamites and the Mannaeans.
At its greatest extent, the Achaemenid Empire stretched from parts of Eastern Europe in the west to the Indus Valley in the east, making it the largest empire the world had yet seen. The Achaemenids developed the infrastructure to support their growing influence, including the establishment of the cities of Pasargadae and Persepolis. The empire extended as far as the limits of the Greek city states in modern-day mainland Greece, where the Persians and Athenians influenced each other in an essentially reciprocal cultural exchange. Its legacy and impact on the kingdom of Macedon were also considerable, even for centuries after the withdrawal of the Persians from Europe following the Greco-Persian Wars.
During the Achaemenid era, Persian colonists settled in Asia Minor. In Lydia (the most important Achaemenid satrapy), near Sardis, there was the Hyrcanian plain, which, according to Strabo, got its name from the Persian settlers that were moved from Hyrcania. Similarly near Sardis, there was the plain of Cyrus, which further signified the presence of numerous Persian settlements in the area. In all these centuries, Lydia and Pontus were reportedly the chief centers for the worship of the Persian gods in Asia Minor. According to Pausanias, as late as the second century AD, one could witness rituals which resembled the Persian fire ceremony at the towns of Hyrocaesareia and Hypaepa. Mithridates III of Cius, a Persian nobleman and part of the Persian ruling elite of the town of Cius, founded the Kingdom of Pontus in his later life, in northern Asia Minor. At the peak of its power, under the infamous Mithridates VI the Great, the Kingdom of Pontus also controlled Colchis, Cappadocia, Bithynia, the Greek colonies of the Tauric Chersonesos, and for a brief time the Roman province of Asia. After a long struggle with Rome in the Mithridatic Wars, Pontus was defeated; part of it was incorporated into the Roman Republic as the province of Bithynia and Pontus, and the eastern half survived as a client kingdom.
Following the Macedonian conquests, the Persian colonists in Cappadocia and the rest of Asia Minor were cut off from their co-religionists in Iran proper, but they continued to practice the Iranian faith of their forefathers. Strabo, who observed them in the Cappadocian Kingdom in the first century BC, records (XV.3.15) that these "fire kindlers" possessed many "holy places of the Persian Gods", as well as fire temples. Strabo, who wrote during the time of Augustus (63 BC – AD 14), almost three hundred years after the fall of the Achaemenid Persian Empire, records only traces of Persians in western Asia Minor; however, he considered Cappadocia "almost a living part of Persia".
The Iranian dominance collapsed in 330 BC following the conquest of the Achaemenid Empire by Alexander the Great, but reemerged shortly after through the establishment of the Parthian Empire in 247 BC, which was founded by a group of ancient Iranian people rising from Parthia. Until the Parthian era, Iranian identity had an ethnic, linguistic, and religious value, but it did not yet have political import. The Parthian language, which was used as an official language of the Parthian Empire, left influences on Persian, as well as on the neighboring Armenian language.
The Parthian monarchy was succeeded by the Persian dynasty of the Sasanians in 224 AD. By the time of the Sasanian Empire, a national culture that was fully aware of being Iranian took shape, partially motivated by the restoration and revival of the wisdom of "the old sages". Other aspects of this national culture included the glorification of a great heroic past and an archaizing spirit. Throughout the period, Iranian identity reached its height in every aspect. Middle Persian, which is the immediate ancestor of Modern Persian and a variety of other Iranian dialects, became the official language of the empire and was greatly diffused among Iranians.
The Parthians and the Sasanians would also extensively interact with the Romans culturally. The Roman–Persian wars and the Byzantine–Sasanian wars would shape the landscape of Western Asia, Europe, the Caucasus, North Africa, and the Mediterranean Basin for centuries. For a period of over 400 years, the Sasanians and the neighboring Byzantines were recognized as the two leading powers in the world. Cappadocia in Late Antiquity, now well into the Roman era, still retained a significant Iranian character; Stephen Mitchell notes in the "Oxford Dictionary of Late Antiquity": "Many inhabitants of Cappadocia were of Persian descent and Iranian fire worship is attested as late as 465".
Following the Arab conquest of the Sasanian Empire in medieval times, the Arab caliphates established their rule over the region for the next several centuries, during which the long process of the Islamization of Iran took place. Confronting the cultural and linguistic dominance of the Persians, beginning with the Umayyad Caliphate, the Arab conquerors began to establish Arabic as the primary language of the subject peoples throughout their empire, sometimes by force, further confirming the new political reality over the region. The Arabic term "Ajam", denoting "people unable to speak properly", was adopted as a designation for non-Arabs (or non-Arabic speakers), especially the Persians. Although the term had developed a derogatory meaning and implied cultural and ethnic inferiority, it was gradually accepted as a synonym for "Persian" and still remains today as a designation for the Persian-speaking communities native to the modern Arab states of the Middle East. A series of Muslim Iranian kingdoms were later established on the fringes of the declining Abbasid Caliphate, including that of the ninth-century Samanids, under whose reign the Persian language was used officially for the first time after two centuries of no attestation, now written in the Arabic script and with a large Arabic vocabulary. Persian language and culture continued to prevail after the invasions and conquests by the Mongols and the Turks (including the Ilkhanate, Ghaznavids, Seljuks, Khwarazmians, and Timurids), who were themselves significantly Persianized, further developing in Asia Minor, Central Asia, and South Asia, where Persian culture flourished through the expansion of Persianate societies, particularly those of Turco-Persian and Indo-Persian blends.
After over eight centuries of foreign rule within the region, Iranian hegemony was reestablished with the emergence of the Safavid Empire in the 16th century. Under the Safavid Empire, focus on Persian language and identity was further revived, and the political evolution of the empire once again maintained Persian as the main language of the country. During the times of the Safavids and subsequent modern Iranian dynasties such as the Qajars, architectural and iconographic elements from the time of the Sasanian Persian Empire were reincorporated, linking the modern country with its ancient past. The contemporary embrace of the legacy of Iran's ancient empires, with an emphasis on the Achaemenid Persian Empire, developed particularly under the reign of the Pahlavi dynasty, providing an impetus for modern nationalistic pride. Iran's modern architecture was then inspired by that of the country's classical eras, particularly with the adoption of details from the ancient monuments in the Achaemenid capitals Persepolis and Pasargadae and the Sasanian capital Ctesiphon. Fars, corresponding to the ancient province of Persia, with its modern capital Shiraz, became a center of interest, particularly during the annual international Shiraz Arts Festival and the 2,500th anniversary of the founding of the Persian Empire. The Pahlavi rulers modernized Iran, and ruled it until the 1979 Revolution.
In modern Iran, the Persians make up the majority of the population. They are native speakers of the modern dialects of Persian, which serves as the country's official language.
The Persian language belongs to the western group of the Iranian branch of the Indo-European language family. Modern Persian is classified as a continuation of Middle Persian, the official religious and literary language of the Sasanian Empire, itself a continuation of Old Persian, which was used by the time of the Achaemenid Empire. Old Persian is one of the oldest Indo-European languages attested in original text. Samples of Old Persian have been discovered in present-day Iran, Armenia, Egypt, Iraq, Romania (Gherla), and Turkey. The oldest attested text written in Old Persian is from the Behistun Inscription, a multilingual inscription from the time of Achaemenid ruler Darius the Great carved on a cliff in western Iran.
There are several ethnic groups and communities that are either ethnically or linguistically related to the Persian people, living predominantly in Iran, and also within Afghanistan, Tajikistan, Uzbekistan, the Caucasus, Turkey, Iraq, and the Arab states of the Persian Gulf.
The Tajiks are a people native to Tajikistan, Afghanistan, and Uzbekistan who speak Persian in a variety of dialects. The Tajiks of Tajikistan and Uzbekistan are native speakers of Tajik, which is the official language of Tajikistan, and those in Afghanistan speak Dari, one of the two official languages of Afghanistan.
The Tat people, an Iranian people native to the Caucasus (primarily living in the Republic of Azerbaijan and the Russian republic of Dagestan), speak a language (Tat language) that is closely related to Persian. The origin of the Tat people is traced to an Iranian-speaking population that was resettled in the Caucasus by the time of the Sasanian Empire.
The Lurs, an ethnic Iranian people native to western Iran, are often associated with the Persians and the Kurds. They speak various dialects of the Luri language, which is considered to be a descendant of Middle Persian.
The Hazaras, making up the third largest ethnic group in Afghanistan, speak a variety of Persian by the name of Hazaragi, which is more precisely a part of the Dari dialect continuum. The Aimaqs, a semi-nomadic people native to Afghanistan, speak a variety of Persian by the name of Aimaqi, which also belongs to the Dari dialect continuum.
Persian-speaking communities native to modern Arab countries are generally designated as "Ajam", including the Ajam of Bahrain, the Ajam of Iraq, and the Ajam of Kuwait.
From Persis and throughout the Median, Achaemenid, Parthian, and Sasanian empires of ancient Iran to the neighboring Greek city states and the kingdom of Macedon, and later throughout the medieval Islamic world, all the way to modern Iran and other parts of Eurasia, Persian culture has been extended, celebrated, and incorporated. This is due mainly to its geopolitical conditions and its intricate relationship with an ever-changing political arena in which it was once as dominant as the Achaemenid Empire.
The artistic heritage of the Persians is eclectic and has included contributions from both the east and the west. Due to the central location of Iran, Persian art has served as a fusion point between eastern and western traditions. Persians have contributed to various forms of art, including calligraphy, carpet weaving, glasswork, lacquerware, marquetry (khatam), metalwork, miniature illustration, mosaic, pottery, and textile design.
The Persian language is known to have one of the world's oldest and most influential literatures. Old Persian written works are attested on several inscriptions from between the 6th and the 4th centuries BC, and Middle Persian literature is attested on inscriptions from the Parthian and Sasanian eras and in Zoroastrian and Manichaean scriptures from between the 3rd and the 10th centuries AD. New Persian literature flourished after the Arab conquest of Iran with its earliest records from the 9th century, and was developed as a court tradition in many eastern courts. The "Shahnameh" of Ferdowsi, the works of Rumi, the "Rubaiyat" of Omar Khayyam, the "Panj Ganj" of Nizami Ganjavi, the "Divān" of Hafez, "The Conference of the Birds" by Attar of Nishapur, and the miscellanea of "Gulistan" and "Bustan" by Saadi Shirazi are among the famous works of medieval Persian literature. A thriving contemporary Persian literature has also been formed by the works of writers such as Ahmad Shamlou, Forough Farrokhzad, Mehdi Akhavan-Sales, Parvin E'tesami, Sadegh Hedayat, and Simin Daneshvar, among others.
Not all Persian literature is written in Persian, as works written by Persians in other languages (such as Arabic and Greek) might also be included. At the same time, not all literature written in Persian is written by ethnic Persians or Iranians, as Turkic, Caucasian, and Indic authors have also written Persian literature in the environment of Persianate cultures.
The most notable examples of ancient Persian architecture are the works of the Achaemenids hailing from Persis. Achaemenid architecture, dating from the expansion of the empire around 550 BC, flourished in a period of artistic growth that left a legacy ranging from Cyrus the Great's solemn tomb at Pasargadae to the structures at Persepolis and Naqsh-e Rostam. The Bam Citadel, a massive adobe structure constructed on the Silk Road in Bam, dates from around the 5th century BC. The quintessential feature of Achaemenid architecture was its eclectic nature, with elements from Median architecture, Assyrian architecture, and Asiatic Greek architecture all incorporated.
The architectural heritage of the Sasanian Empire includes, among others, castle fortifications such as the Fortifications of Derbent (located in North Caucasus, now part of Russia), the Rudkhan Castle and the Shapur-Khwast Castle, palaces such as the Palace of Ardashir and the Sarvestan Palace, bridges such as the Shahrestan Bridge and the Shapuri Bridge, the Archway of Ctesiphon, and the reliefs at Taq-e Bostan.
Architectural elements from the time of Iran's ancient Persian empires have been adopted and incorporated in later periods, and were used especially during the modernization of Iran under the reign of the Pahlavi dynasty in order to contribute to the characterization of the modern country with its ancient history.
Xenophon discusses the Persian garden in his "Oeconomicus".
The Persian garden, the earliest examples of which were found throughout the Achaemenid Empire, has an integral position in Persian architecture. Gardens assumed an important place for the Achaemenid monarchs, and utilized the advanced Achaemenid knowledge of water technologies, including aqueducts, the earliest recorded gravity-fed water rills, and basins arranged in a geometric system. The enclosure of this symmetrically arranged planting and irrigation by an infrastructure such as a palace created the impression of "paradise". The word "paradise" itself originates from Avestan "pairidaēza" (Old Persian "paridaida"; New Persian "pardis"), which literally translates to "walled-around". Characterized by its quadripartite ("čārbāq") design, the Persian garden evolved and developed into various forms throughout history, and was also adopted in various other cultures in Eurasia. It was inscribed on UNESCO's World Heritage List in June 2011.
Carpet weaving is an essential part of the Persian culture, and Persian rugs are said to be one of the most detailed hand-made works of art.
Achaemenid rug and carpet artistry is well recognized. Xenophon describes the carpet production in the city of Sardis, stating that the locals take pride in their carpet production. A special mention of Persian carpets is also made by Athenaeus of Naucratis in his "Deipnosophistae", as he describes a "delightfully embroidered" Persian carpet with "preposterous shapes of griffins".
The Pazyryk carpet, a Scythian pile-carpet dating back to the 4th century BC that is regarded as the world's oldest existing carpet, depicts elements of Assyrian and Achaemenid designs, including stylistic references to the stone slab designs found in Persian royal buildings.
According to the accounts reported by Xenophon, a great number of singers were present at the Achaemenid court. However, little information is available from the music of that era. The music scene of the Sasanian Empire has a more available and detailed documentation than the earlier periods, and is especially more evident within the context of Zoroastrian musical rituals. Overall, Sasanian music was influential and was adopted in the subsequent eras.
Iranian music, as a whole, utilizes a variety of musical instruments that are unique to the region, and has evolved remarkably since ancient and medieval times. In traditional Sasanian music, the octave was divided into seventeen tones. By the end of the 13th century, Iranian music also maintained a twelve-interval octave, resembling its Western counterparts.
The Iranian New Year's Day, Nowruz, which translates to "new day", is celebrated by Persians and other peoples of Iran to mark the beginning of spring on the vernal equinox, the first day of Farvardin, the first month of the Iranian calendar, which corresponds to around March 21 in the Gregorian calendar. An ancient tradition that has been preserved in Iran and several other countries that were under the influence of the ancient empires of Iran, Nowruz has been registered on UNESCO's Intangible Cultural Heritage Lists. In Iran, the Nowruz celebrations (including Charshanbe Suri and Sizdah Bedar) begin on the eve of the last Wednesday of the preceding year in the Iranian calendar and last until the 13th day of the new year. Islamic festivals are also widely celebrated by Muslim Persians.
List of Polish painters
This is an alphabetical listing of Polish painters. The list is incomplete.
Procedural law
Procedural law, adjective law, in some jurisdictions referred to as remedial law, or rules of court, comprises the rules by which a court hears and determines what happens in civil, criminal or administrative proceedings. The rules are designed to ensure a fair and consistent application of due process (in the U.S.) or fundamental justice (in other common law countries) to all cases that come before a court.
Substantive law, which refers to the actual claim and defense whose validity is tested through the procedures of procedural law, is different from procedural law.
In the context of procedural law, procedural rights may also refer, not exhaustively, to rights to information, access to justice, and rights to public participation, with those rights encompassing general civil and political rights. In environmental law, these procedural rights have been reflected within the UNECE Convention on "Access to Information, Public Participation in Decision-making and Access to Justice in Environmental Matters", known as the Aarhus Convention (1998).
Although different legal processes aim to resolve many kinds of legal disputes, the legal procedures share some common features. All legal procedure, for example, is concerned with due process. Absent very special conditions, a court cannot impose a penalty, civil or criminal, against an individual who has not received notice of a lawsuit being brought against them, or who has not received a fair opportunity to present evidence for themselves.
The standardization for the means by which cases are brought, parties are informed, evidence is presented, and facts are determined is intended to maximize the fairness of any proceeding. Nevertheless, strict procedural rules have certain drawbacks. For example, they impose specific time limitations upon the parties that may either hasten or (more frequently) slow down the pace of proceedings. Furthermore, a party who is unfamiliar with procedural rules may run afoul of guidelines that have nothing to do with the merits of the case, and yet the failure to follow these guidelines may severely damage the party's chances. Procedural systems are constantly torn between arguments that judges should have greater discretion in order to avoid the rigidity of the rules, and arguments that judges should have less discretion in order to avoid an outcome based more on the personal preferences of the judge than on the law or the facts.
Legal procedure, in a larger sense, is also designed to effect the best distribution of judicial resources. For example, in most courts of general jurisdiction in the United States, criminal cases are given priority over civil cases, because criminal defendants stand to lose their freedom, and should therefore be accorded the first opportunity to have their case heard.
"Procedural law" in contrast to "substantive law" is a concept available in various legal systems and languages. Similar to the English expressions are the Spanish words "derecho adjetivo" and "derecho material" or "derecho sustantivo", as well as the Portuguese terms for them, "direito adjetivo" and "direito substantivo". Other ideas are behind the German expressions "formelles Recht" (or "Verfahrensrecht") and "materielles Recht" as well as the French "droit formel/droit matériel", the Italian "diritto formale/diritto materiale" and the Swedish "formell rätt/materiell rätt"; all of which, taken literally, mean "formal" and "material" law. The same opposition can be found in the Russian legal vocabulary, with "материальное право" for substantive law and "процессуальное право" for procedural. Similar to Russian, in Bulgarian "материално право" means substantive law and "процесуално право" is used for procedural. In Chinese, "procedural law" and "substantive law" are represented by these characters: "程序法" and "实体法".
In Germany, the expressions "formelles Recht" and "materielles Recht" were developed in the 19th century, because only during that time was the Roman "actio" split into procedural and substantive components.
In the European legal systems, Roman law had been of great influence. In ancient times, the Roman civil procedure applied in many countries. One of the main issues of the procedure was the "actio" (similar to the English word "act"). In the procedure of the "legis actiones", the "actio" included both procedural and substantive elements, because during this procedure the "praetor" granted or denied litigation by granting or denying an "actio". By granting the "actio", the "praetor" ultimately created claims; that is, a procedural act caused substantive claims to exist. Such priority (procedure over substance) is contrary to how the relationship is understood today. Nor was it only an issue of priority and whether the one serves the other: since the "actio" was composed of elements of both procedure and substance, it was difficult to separate the two parts again.
Even the scientific handling of law, which developed during medieval times in the new universities of Italy (in particular Bologna and Mantua), did not arrive at a full and clear separation. (The English system of "writs" in the Middle Ages had a problem similar to the Roman tradition with the "actio".)
In Germany, the unity of procedure and substance in the "actio" was definitively brought to an end with the codification of the "Bürgerliches Gesetzbuch" (BGB), which came into force on 1 January 1900. The expression "Anspruch" (§ 194 BGB), meaning "claim", was "cleared" of procedural elements. This was when the terms "formelles / materielles Recht" were coined. However, after World War II the expression "formelles Recht" was evidently felt to be "contaminated" and has to a broad extent been replaced by "Prozessrecht", narrowing the idea behind it to the "law of litigation" (thereby excluding, e.g., the law of other procedures and the law on competences).
Pantoum
The pantoum is a poetic form derived from the pantun, a Malay verse form: specifically from the "pantun berkait", a series of interwoven quatrains.
The pantoum is a form of poetry similar to a villanelle in that there are repeating lines throughout the poem. It is composed of a series of quatrains; the second and fourth lines of each stanza are repeated as the first and third lines of the next stanza. The pattern continues for any number of stanzas, except for the final stanza, which differs in the repeating pattern. The first and third lines of the last stanza are the second and fourth of the penultimate; the first line of the poem is the last line of the final stanza, and the third line of the first stanza is the second of the final. Ideally, the meaning of lines shifts when they are repeated although the words remain exactly the same: this can be done by shifting punctuation, punning, or simply recontextualizing.
A four-stanza pantoum is common (although more may be used), and in the final stanza, lines one and three from the first stanza can be repeated, or new lines can be written. The pantoum form is as follows:
Stanza 1
A
B
C
D
Stanza 2
B
E
D
F
Stanza 3
E
G
F
H
Stanza 4
G
I (or A or C)
H
J (or A or C)
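The interlocking scheme above is mechanical enough to generate programmatically. The following Python sketch is an illustration added here for clarity, not part of any traditional description of the form; it prints the letter scheme for any number of stanzas, closing the final stanza with lines C and A of the first stanza, as in the strict variant described above.

```python
from string import ascii_uppercase

def pantoum_scheme(num_stanzas=4):
    """Return the letter scheme of a pantoum as a list of 4-line stanzas.

    Lines 2 and 4 of each stanza repeat as lines 1 and 3 of the next;
    the final stanza closes the loop with lines C and A of the first.
    """
    letters = iter(ascii_uppercase)
    first = [next(letters) for _ in range(4)]   # A B C D
    stanzas = [first]
    for i in range(1, num_stanzas):
        prev = stanzas[-1]
        if i < num_stanzas - 1:
            # carry lines 2 and 4 forward, invent two new lines
            stanzas.append([prev[1], next(letters), prev[3], next(letters)])
        else:
            # final stanza: second line is C, fourth line is A
            stanzas.append([prev[1], first[2], prev[3], first[0]])
    return stanzas

for n, stanza in enumerate(pantoum_scheme(4), start=1):
    print(f"Stanza {n}:", " ".join(stanza))
# Stanza 1: A B C D
# Stanza 2: B E D F
# Stanza 3: E G F H
# Stanza 4: G C H A
```

Running the sketch for four stanzas reproduces the scheme shown above, with the final stanza reading G C H A rather than introducing the new lines I and J.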
The pantoum is derived from the "pantun berkait", a series of interwoven quatrains. An English translation of such a "pantun berkait" appeared in William Marsden's "A Dictionary and Grammar of the Malayan Language" in 1812. Victor Hugo published an unrhymed French version by Ernest Fouinet of this poem in the notes to "Les Orientales" (1829) and subsequent French poets began to make their own attempts at composing original "pantoums". Leconte de Lisle published five pantoums in his "Poèmes tragiques" (1884).
There is also the imperfect pantoum, in which the final stanza differs from the form stated above, and the second and fourth lines may be different from any preceding lines.
Baudelaire's famous poem "Harmonie du soir" is usually cited as an example of the form, but it is irregular. The stanzas rhyme "abba" rather than the expected "abab", and the last line, which is supposed to be the same as the first, is original.
American poets such as John Ashbery, Marilyn Hacker, Donald Justice ("Pantoum of the Great Depression"), Carolyn Kizer, and David Trinidad have done work in this form, as has Irish poet Caitriona O'Reilly.
The December 2015 issue of First Things featured a pantoum by James Matthew Wilson, "The Christmas Preface."
Claude Debussy set Charles Baudelaire's "Harmonie du soir" in his "Cinq poèmes de Charles Baudelaire" in the form of a pantoum. Perhaps inspired by this setting, Maurice Ravel entitled the second movement of his Piano Trio, "Pantoum (Assez vif)". While Ravel never commented on the significance of the movement's title, Brian Newbould has suggested that the poetic form is reflected in the way the two themes are developed in alternation.
Neil Peart used the form (with one difference from the format listed above) for the lyrics of "The Larger Bowl (A Pantoum)", the fourth track on Rush's 2007 album "Snakes & Arrows", also released as a single.
Pope Sylvester II
Pope Sylvester II (c. 946 – 12 May 1003), originally known as Gerbert of Aurillac, was a French-born scholar and teacher who served as the bishop of Rome and ruled the Papal States from 999 to his death. He endorsed and promoted study of Arab and Greco-Roman arithmetic, mathematics, and astronomy, reintroducing to Europe the abacus and armillary sphere, which had been lost to Latin Europe since the end of the Greco-Roman era. He is said to be the first to introduce in Europe the decimal numeral system using Hindu-Arabic numerals.
Gerbert was born about 946 in the town of Belliac, near the present-day commune of Saint-Simon, Cantal, France. Around 963, he entered the Monastery of St. Gerald of Aurillac. In 967, Count Borrell II of Barcelona (947–992) visited the monastery, and the abbot asked the count to take Gerbert with him so that the lad could study mathematics in Catalonia and acquire there some knowledge of Arabic learning.
Gerbert studied under the direction of Bishop Atto of Vich, some 60 km north of Barcelona, and probably also at the nearby Monastery of Santa Maria de Ripoll. Like all Catalan monasteries, it contained manuscripts from Muslim Spain and especially from Córdoba, one of the intellectual centres of Europe at that time: the library of al-Hakam II, for example, had thousands of books, ranging from science to Greek philosophy. This is where Gerbert was introduced to mathematics and astronomy. Borrell II was facing major defeat from the Andalusian powers, so he sent a delegation to Córdoba to request a truce. Bishop Atto was part of the delegation that met with al-Ḥakam II, who received him with honor. Gerbert was fascinated by the stories of the Mozarab Christian bishops and judges who dressed and talked like the Arabs, and who were well-versed in mathematics and natural sciences like the great teachers of the Islamic madrasahs. This sparked Gerbert's veneration for the Arabs and his passion for mathematics and astronomy.
Gerbert learned of Hindu–Arabic digits and applied this knowledge to the abacus, but probably without the numeral zero. According to the 12th-century historian William of Malmesbury, Gerbert got the idea of the computing device of the abacus from a Spanish Arab. The abacus that Gerbert reintroduced into Europe had its length divided into 27 parts with 9 number symbols (this would exclude zero, which was represented by an empty column) and 1,000 characters in all, crafted out of animal horn by a shieldmaker of Rheims. According to his pupil Richer, Gerbert could perform speedy calculations with his abacus that were extremely difficult for people in his day to think through in using only Roman numerals. Due to Gerbert's reintroduction, the abacus became widely used in Europe once again during the 11th century.
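To make concrete how a column abacus can represent decimal numbers without a symbol for zero, the short Python sketch below models each column as either holding a counter marked 1-9 or standing empty. This is only a modern illustration of the principle, assuming a plain 27-column decimal layout as described above; it is not a reconstruction of Gerbert's actual device or calculation methods.

```python
def to_columns(n, width=27):
    """Place n on a decimal column abacus: each column holds a counter
    marked 1-9, or is left empty (the empty column stands for zero,
    which had no symbol of its own)."""
    cols = [None] * width                         # all columns start empty
    for i, digit in enumerate(reversed(str(n))):  # least significant first
        if digit != "0":
            cols[i] = int(digit)                  # place a marked counter
    return cols

def show(cols):
    """Render the board, most significant column first, '.' = empty."""
    s = "".join(str(c) if c is not None else "." for c in reversed(cols))
    return s.lstrip(".") or "."

a, b = 1004, 907
print(show(to_columns(a)), "+", show(to_columns(b)),
      "=", show(to_columns(a + b)))   # -> 1..4 + 9.7 = 1911
```

The empty columns play exactly the role of zero in positional notation, which is why the abacus could support fast positional arithmetic even though no zero numeral was written.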
Although the armillary sphere had been lost to Europe since the end of the Greco-Roman era, Gerbert reintroduced this astronomical instrument to Latin Europe via the Islamic civilization of Al-Andalus, which was at that time at the "cutting edge" of civilization. The details of Gerbert's armillary sphere are revealed in letters from Gerbert to his former student and monk Remi of Trèves and to his colleague Constantine, the abbot of Micy, as well as the accounts of his former student and French nobleman Richer, who served as a monk in Rheims. Richer stated that Gerbert discovered that stars coursed in an oblique direction across the night sky. Richer described Gerbert's use of the armillary sphere as a visual aid for teaching mathematics and astronomy in the classroom.
Historian Oscar G. Darlington asserts that Gerbert's division of the sphere into 60 parts instead of 360 made the lateral lines of his sphere equal six degrees each. By this account, the polar circle on Gerbert's sphere was located at 54 degrees, several degrees off from the actual 66° 33'. His positioning of the Tropic of Cancer at 24 degrees was nearly exact, while his positioning of the equator was correct by definition. Richer also revealed how Gerbert made the planets more easily observable in his armillary sphere:
He succeeded equally in showing the paths of the planets when they come near or withdraw from the earth. He fashioned first an armillary sphere. He joined the two circles called by the Greeks "coluri" and by the Latins "incidentes" because they fell upon each other, and at their extremities he placed the poles. He drew with great art and accuracy, across the "colures", five other circles called parallels, which, from one pole to the other, divided the half of the sphere into thirty parts. He put six of these thirty parts of the half-sphere between the pole and the first circle; five between the first and the second; from the second to the third, four; from the third to the fourth, four again; five from the fourth to the fifth; and from the fifth to the pole, six. On these five circles he placed obliquely the circles that the Greeks call "loxos" or "zoe", the Latins "obliques" or "vitalis" (the zodiac) because it contained the figures of the animals ascribed to the planets. On the inside of this oblique circle he figured with an extraordinary art the orbits traversed by the planets, whose paths and heights he demonstrated perfectly to his pupils, as well as their respective distances.
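Darlington's figures follow directly from Richer's part counts: thirty parts over the half-sphere's 180° give 6° per part, and summing the parts outward from the pole yields the position of each circle. A short sketch of the arithmetic, using the counts from the passage above:

```python
# Parts between successive circles, counted from the north pole,
# as Richer gives them: 6, 5, 4, 4, 5, 6 (thirty parts in all).
PARTS = [6, 5, 4, 4, 5, 6]
DEG_PER_PART = 180 / 30   # half-sphere split into thirty parts -> 6 degrees each

names = ["polar circle", "Tropic of Cancer", "equator",
         "Tropic of Capricorn", "antarctic circle", "south pole"]

colatitude = 0
for name, parts in zip(names, PARTS):
    colatitude += parts * DEG_PER_PART   # angular distance from the north pole
    print(f"{name}: latitude {90 - colatitude:+.0f} degrees")
# polar circle: latitude +54 degrees   (actual arctic circle: 66 degrees 33')
# Tropic of Cancer: latitude +24 degrees   (actual: about 23 degrees 26')
# equator: latitude +0 degrees   (exact by definition)
# Tropic of Capricorn: latitude -24 degrees
# antarctic circle: latitude -54 degrees
# south pole: latitude -90 degrees
```

The 54° polar circle and the nearly exact 24° tropic that Darlington cites drop straight out of the six-degree spacing.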
Richer wrote about another of Gerbert's last armillary spheres, which had sighting tubes fixed on the axis of the hollow sphere that could observe the constellations, the forms of which he hung on iron and copper wires. This armillary sphere was also described by Gerbert in a letter to his colleague Constantine. Gerbert instructed Constantine that, if doubtful of the position of the pole star, he should fix the sighting tube of the armillary sphere into position to view the star he suspected was it, and if the star did not move out of sight, it was thus the pole star. Furthermore, Gerbert instructed Constantine that the north pole could be measured with the upper and lower sighting tubes, the Arctic Circle through another tube, the Tropic of Cancer through another tube, the equator through another tube, and the Tropic of Capricorn through another tube.
In 969, Borrell II made a pilgrimage to Rome, taking Gerbert with him. There Gerbert met Pope John XIII and Emperor Otto I. The pope persuaded Otto I to employ Gerbert as a tutor for his young son, Otto II. Some years later, Otto I gave Gerbert leave to study at the cathedral school of Rheims where he was soon appointed a teacher by Archbishop Adalberon. When Otto II became sole emperor in 973, he appointed Gerbert the abbot of the monastery of Bobbio and also appointed him as count of the district, but the abbey had been ruined by previous abbots, and Gerbert soon returned to Rheims. After the death of Otto II in 983, Gerbert became involved in the politics of his time. In 985, with the support of his archbishop, he opposed King Lothair of France's attempt to take the Lorraine from Emperor Otto III by supporting Hugh Capet. Hugh became king of France, ending the Carolingian line of kings in 987.
Adalberon died on 23 January 989. Gerbert was a natural candidate for his succession, but King Hugh appointed Arnulf, an illegitimate son of King Lothair, instead. Arnulf was deposed in 991 for alleged treason against Hugh, and Gerbert was elected his successor. There was so much opposition to Gerbert's elevation to the See of Rheims, however, that Pope John XV (985–996) sent a legate to France who temporarily suspended Gerbert from his episcopal office. Gerbert sought to show that this decree was unlawful, but a further synod in 995 declared Arnulf's deposition invalid. Gerbert then became the teacher of Otto III, and Pope Gregory V (996–999), Otto III's cousin, appointed him archbishop of Ravenna in 998.
With imperial support, Gerbert was elected to succeed Gregory V as pope in 999. Gerbert took the name of Sylvester II, alluding to Sylvester I (314–335), the advisor to Emperor Constantine I (324–337). Soon after he became pope, Sylvester II confirmed the position of his former rival Arnulf as archbishop of Rheims. As pope, he took energetic measures against the widespread practices of simony and concubinage among the clergy, maintaining that only capable men of spotless lives should be allowed to become bishops. In 1001, the Roman populace revolted, forcing Otto III and Sylvester II to flee to Ravenna. Otto III led two unsuccessful expeditions to regain control of the city and died on a third expedition in 1002. Sylvester II returned to Rome soon after the emperor's death, although the rebellious nobility remained in power, and died a little later. Sylvester is buried in St. John Lateran.
Gerbert of Aurillac was a humanist long before the Renaissance. He read Virgil, Cicero and Boethius; he studied Latin translations of Porphyry, but also of Aristotle. He had a very accurate classification of the different disciplines of philosophy. He was the first French pope.
Gerbert was said to be one of the most noted scientists of his time. Gerbert wrote a series of works dealing with matters of the quadrivium (arithmetic, geometry, astronomy, music), which he taught using the basis of the trivium (grammar, logic, and rhetoric). In Rheims, he constructed a hydraulic-powered organ with brass pipes that excelled all previously known instruments, in which air had to be pumped manually. In a letter of 984, Gerbert asks Lupitus of Barcelona for a book on astrology and astronomy, two terms historian S. Jim Tester says Gerbert used synonymously. Gerbert may have been the author of a description of the astrolabe that was edited by Hermannus Contractus some 50 years later. Besides these, as Sylvester II he wrote a dogmatic treatise, "De corpore et sanguine Domini" (On the Body and Blood of the Lord).
The legend of Gerbert grows from the work of the English monk William of Malmesbury in "De Rebus Gestis Regum Anglorum" and a polemical pamphlet, "Gesta Romanae Ecclesiae contra Hildebrandum", by Cardinal Beno, a partisan of Emperor Henry IV who opposed Pope Gregory VII in the Investiture Controversy. According to the legend, Gerbert, while studying mathematics and astrology in the Muslim cities of Córdoba and Seville, was accused of having learned sorcery. Gerbert was supposed to be in possession of a book of spells stolen from an Arab philosopher in Spain. Gerbert fled, pursued by the victim, who could trace the thief by the stars, but Gerbert was aware of the pursuit, and hid hanging from a wooden bridge, where, suspended between heaven and earth, he was invisible to the magician.
Gerbert was supposed to have built a brazen head. This "robotic" head would answer his questions with "yes" or "no". He was also reputed to have had a pact with a female demon called "Meridiana", who had appeared after he had been rejected by his earthly love, and with whose help he managed to ascend to the papal throne (another legend tells that he won the papacy playing dice with the Devil).
According to the legend, Meridiana (or the bronze head) told Gerbert that if he should ever read a mass in Jerusalem, the Devil would come for him. Gerbert then cancelled a pilgrimage to Jerusalem, but when he read mass in the church Santa Croce in Gerusalemme ("Holy Cross of Jerusalem") in Rome, he became sick soon afterwards and, dying, he asked his cardinals to cut up his body and scatter it across the city. In another version, he was even attacked by the Devil while he was reading the Mass, and the Devil mutilated him and gave his gouged-out eyes to demons to play with in the Church. Repenting, Sylvester II then cut off his hand and his tongue.
The inscription on Gerbert's tomb reads in part "Iste locus Silvestris membra sepulti venturo Domino conferet ad sonitum" ("This place will yield to the sound [of the last trumpet] the limbs of buried Sylvester II, at the advent of the Lord", mis-read as "will make a sound") and has given rise to the curious legend that his bones will rattle in that tomb just before the death of a pope.
The alleged story of the crown and papal legate authority given to Stephen I of Hungary by Sylvester in the year 1000 (hence the title 'apostolic king') is noted by the 19th-century historian Lewis L. Kropf as a possible forgery of the 17th century. Likewise, the 20th-century historian Zoltan J. Kosztolnyik states that "it seems more than unlikely that Rome would have acted in fulfilling Stephen's request for a crown without the support and approval of the emperor."
Hungary issued a commemorative stamp honoring Pope Sylvester II on 1 January 1938, and France honoured him in 1964 by issuing a postage stamp.
Gerbert's writings were printed in volume 139 of the "Patrologia Latina". Darlington notes that Gerbert's preservation of his letters might have been an effort to compile them into a textbook for his pupils that would illustrate proper letter writing. His books on mathematics and astronomy were not research-oriented; his texts were primarily educational guides for his students.
Pottery
Pottery is the process and the products of forming vessels and other objects with clay and other ceramic materials, which are fired at high temperatures to give them a hard, durable form. Major types include earthenware, stoneware and porcelain. The place where such wares are made by a "potter" is also called a "pottery" (plural "potteries"). The definition of "pottery" used by the American Society for Testing and Materials (ASTM) is "all fired ceramic wares that contain clay when formed, except technical, structural, and refractory products." In archaeology, especially of ancient and prehistoric periods, "pottery" often means vessels only, and figures etc. of the same material are called "terracottas". Some definitions of pottery require the material to contain clay, but this is disputed.
Pottery is one of the oldest human inventions, originating before the Neolithic period, with ceramic objects like the Gravettian culture Venus of Dolní Věstonice figurine discovered in the Czech Republic dating back to 29,000–25,000 BC, and pottery vessels that were discovered in Jiangxi, China, which date back to 18,000 BC. Early Neolithic and pre-Neolithic pottery artifacts have been found, in Jōmon Japan (10,500 BC), the Russian Far East (14,000 BC), Sub-Saharan Africa (9,400 BC), South America (9,000s-7,000s BC), and the Middle East (7,000s-6,000s BC).
Pottery is made by forming a ceramic (often clay) body into objects of a desired shape and heating them to high temperatures (600–1600 °C) in a bonfire, pit or kiln, which induces reactions that lead to permanent changes, including increasing the strength and rigidity of the object. Much pottery is purely utilitarian, but much can also be regarded as ceramic art. A clay body can be decorated before or after firing.
Clay-based pottery can be divided into three main groups: earthenware, stoneware and porcelain. These require increasingly more specific clay material, and increasingly higher firing temperatures. All three are made in glazed and unglazed varieties, for different purposes. All may also be decorated by various techniques. In many examples the group a piece belongs to is immediately visually apparent, but this is not always the case. The fritware of the Islamic world does not use clay, so technically falls outside these groups. Historic pottery of all these types is often grouped as either "fine" wares, relatively expensive and well-made, and following the aesthetic taste of the culture concerned, or alternatively "coarse", "popular", "folk" or "village" wares, mostly undecorated, or simply so, and often less well-made.
All the earliest forms of pottery were made from clays that were fired at low temperatures, initially in pit-fires or in open bonfires. They were hand formed and undecorated. Earthenware can be fired as low as 600 °C, and is normally fired below 1200 °C. Because unglazed biscuit earthenware is porous, it has limited utility for the storage of liquids or as tableware. However, earthenware has had a continuous history from the Neolithic period to today. It can be made from a wide variety of clays, some of which fire to a buff, brown or black colour, with iron in the constituent minerals resulting in a reddish-brown. Reddish coloured varieties are called terracotta, especially when unglazed or used for sculpture. The development of ceramic glaze made impermeable pottery possible, improving the popularity and practicality of pottery vessels. The addition of decoration has evolved throughout its history.
Stoneware is pottery that has been fired in a kiln at a relatively high temperature, from about 1,100 °C to 1,200 °C, and is stronger and non-porous to liquids. The Chinese, who developed stoneware very early on, classify this together with porcelain as high-fired wares. In contrast, stoneware could only be produced in Europe from the late Middle Ages, as European kilns were less efficient, and the right type of clay less common. It remained a speciality of Germany until the Renaissance.
Stoneware is very tough and practical, and much of it has always been utilitarian, for the kitchen or storage rather than the table. But "fine" stoneware has been important in China, Japan and the West, and continues to be made. Many utilitarian types have also come to be appreciated as art.
Porcelain is made by heating materials, generally including kaolin, in a kiln to temperatures between about 1,200 and 1,400 °C. This is higher than used for the other types, and achieving these temperatures was a long struggle, as well as realizing what materials were needed. The toughness, strength and translucence of porcelain, relative to other types of pottery, arises mainly from vitrification and the formation of the mineral mullite within the body at these high temperatures.
Although porcelain was first made in China, the Chinese traditionally do not recognise it as a distinct category, grouping it with stoneware as "high-fired" ware, opposed to "low-fired" earthenware. This confuses the issue of when it was first made. A degree of translucency and whiteness was achieved by the Tang dynasty (AD 618–906), and considerable quantities were being exported. The modern level of whiteness was not reached until much later, in the 14th century. Porcelain was also made in Korea and in Japan from the end of the 16th century, after suitable kaolin was located in those countries. It was not made effectively outside East Asia until the 18th century.
Before being shaped, clay must be prepared. Kneading helps to ensure an even moisture content throughout the body. Air trapped within the clay body needs to be removed. This is called de-airing and can be accomplished either by a machine called a vacuum pug or manually by wedging. Wedging can also help produce an even moisture content. Once a clay body has been kneaded and de-aired or wedged, it is shaped by a variety of techniques. After it has been shaped, it is dried and then fired.
Body is a term for the main pottery form of a piece, underneath any glaze or decoration. The main ingredient of the body is clay. There are several materials that are referred to as clay. The properties which make them different include:
Plasticity, the malleability of the body; porosity, the extent to which it will absorb water after firing; and shrinkage, the extent of reduction in size of a body as water is removed. Different clay bodies also differ in the way in which they respond when fired in the kiln. A clay body can be decorated before or after firing. Prior to some shaping processes, clay must be prepared. Each of these different clays is composed of different types and amounts of minerals that determine the characteristics of resulting pottery. There can be regional variations in the properties of raw materials used for the production of pottery, and these can lead to wares that are unique in character to a locality. It is common for clays and other materials to be mixed to produce clay bodies suited to specific purposes. A common component of clay bodies is the mineral kaolinite. Other minerals in the clay, such as feldspar, act as fluxes which lower the vitrification temperature of bodies. Many different types of clay are accordingly used for pottery.
Pottery can be shaped by a range of methods, from hand-building by pinching, coiling or slab construction to throwing on the potter's wheel, moulding and slipcasting.
Pottery may be decorated in many different ways. Some decoration can be done before or after the firing.
Glaze is a glassy coating on pottery, the primary purposes of which are decoration and protection. One important use of glaze is to render porous pottery vessels impermeable to water and other liquids. Glaze may be applied by dusting the unfired composition over the ware or by spraying, dipping, trailing or brushing on a thin slurry composed of the unfired glaze and water. The colour of a glaze after it has been fired may be significantly different from before firing. To prevent glazed wares sticking to kiln furniture during firing, either a small part of the object being fired (for example, the foot) is left unglazed or, alternatively, special refractory "spurs" are used as supports. These are removed and discarded after the firing.
Specialised glazing techniques, such as salt glazing and ash glazing, are also used.
Firing produces irreversible changes in the body. It is only after firing that the article or material is pottery. In lower-fired pottery, the changes include sintering, the fusing together of coarser particles in the body at their points of contact with each other. In the case of porcelain, where different materials and higher firing temperatures are used, the physical, chemical and mineralogical properties of the constituents in the body are greatly altered. In all cases, the reason for firing is to permanently harden the wares, and the firing regime must be appropriate to the materials used to make them. As a rough guide, modern earthenwares are normally fired at temperatures in the range of about 1,000 °C (1,830 °F) to 1,200 °C (2,190 °F); stonewares at between about 1,100 °C (2,010 °F) and 1,300 °C (2,370 °F); and porcelains at between about 1,200 °C (2,190 °F) and 1,400 °C (2,550 °F). Historically, reaching high temperatures was a long-lasting challenge, and earthenware can be fired effectively as low as 600 °C, achievable in primitive pit firing.
Firing pottery can be done using a variety of methods, with a kiln being the usual firing method. Both the maximum temperature and the duration of firing influence the final characteristics of the ceramic. Thus, the maximum temperature within a kiln is often held constant for a period of time to "soak" the wares to produce the maturity required in the body of the wares.
The atmosphere within a kiln during firing can affect the appearance of the finished wares. An oxidising atmosphere, produced by allowing an excess of air in the kiln, can cause the oxidation of clays and glazes. A reducing atmosphere, produced by limiting the flow of air into the kiln, or burning coal rather than wood, can strip oxygen from the surface of clays and glazes. This can affect the appearance of the wares being fired and, for example, some glazes containing iron-rich minerals fire brown in an oxidising atmosphere, but green in a reducing atmosphere. The atmosphere within a kiln can be adjusted to produce complex effects in glaze.
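A simplified sketch of the chemistry at work (a generic illustration, not a recipe from any particular glaze tradition): in a reducing atmosphere, carbon monoxide from incomplete combustion can strip oxygen from iron(III) oxide in the body and glaze,

$$\mathrm{Fe_2O_3 + CO \rightarrow 2\,FeO + CO_2}$$

and the resulting iron(II) oxide is what gives reduction-fired, iron-bearing glazes their characteristic green, as in celadons, whereas the fully oxidised iron(III) form fires brown or red.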
Kilns may be heated by burning wood, coal or gas, or by electricity. When used as fuels, coal and wood can introduce smoke, soot and ash into the kiln which can affect the appearance of unprotected wares. For this reason, wares fired in wood- or coal-fired kilns are often placed in the kiln in saggars, ceramic boxes, to protect them. Modern kilns powered by gas or electricity are cleaner and more easily controlled than older wood- or coal-fired kilns and often allow shorter firing times to be used. In a Western adaptation of traditional Japanese Raku ware firing, wares are removed from the kiln while hot and smothered in ashes, paper or woodchips, which produces a distinctive carbonised appearance. This technique is also used in Malaysia in creating traditional "labu sayung".
In Mali, a firing mound is used rather than a brick or stone kiln. Unfired pots are first brought to the place where a mound will be built, customarily by the women and girls of the village. The mound's foundation is made by placing sticks on the ground, with the pots and fuel then stacked on top before the mound is lit.
A great part of the history of pottery is prehistoric, part of past pre-literate cultures. Therefore, much of this history can only be found among the artifacts of archaeology. Because pottery is so durable, pottery and shards of pottery survive for millennia at archaeological sites, and are typically the most common and important type of artifact to survive. Many prehistoric cultures are named after the pottery that is the easiest way to identify their sites, and archaeologists develop the ability to recognise different types from the chemistry of small shards.
Before pottery becomes part of a culture, several conditions must generally be met.
Pottery may well have been discovered independently in various places, probably by accidentally creating it at the bottom of fires on a clay soil. All the earliest vessel forms were pit fired and made by coiling, which is a simple technology to learn. The earliest-known ceramic objects are Gravettian figurines such as those discovered at Dolní Věstonice in the modern-day Czech Republic. The Venus of Dolní Věstonice is a Venus figurine, a statuette of a nude female figure dated to 29,000–25,000 BC (Gravettian industry).
Sherds have been found in China and Japan from a period between 12,000 and perhaps as long as 18,000 years ago. As of 2012, the earliest pottery discovered anywhere in the world, dating to 20,000 to 19,000 years before the present, was found at Xianrendong Cave in the Jiangxi province of China.
Other early pottery vessels include those excavated from the Yuchanyan Cave in southern China, dated from 16,000 BC, and those found in the Amur River basin in the Russian Far East, dated from 14,000 BC.
The Odai Yamamoto I site, belonging to the Jōmon period, currently has the oldest pottery in Japan. Excavations in 1998 uncovered earthenware fragments which have been dated as early as 14,500 BC.
The term "Jōmon" means "cord-marked" in Japanese. This refers to the markings made on the vessels and figures using sticks with cords during their production. Recent research has elucidated how Jōmon pottery was used by its creators.
It appears that pottery was independently developed in Sub-Saharan Africa during the 10th millennium BC, with findings dating to at least 9,400 BC from central Mali, and in South America during the 9,000s–7,000s BC. The Malian finds date to the same period as similar finds from East Asia – the triangle between Siberia, China and Japan – and are associated in both regions with the same climatic changes: at the end of the ice age new grassland developed, enabling hunter-gatherers to expand their habitat, and both cultures independently met this with similar developments – the creation of pottery for the storage of wild cereals (pearl millet), and of small arrowheads for hunting the small game typical of grassland. Alternatively, the creation of pottery in the case of the Incipient Jōmon civilisation could be due to the intensive exploitation of freshwater and marine organisms by late glacial foragers, who started developing ceramic containers for their catch.
In Japan, the Jōmon period has a long history of development of Jōmon pottery which was characterized by impressions of rope on the surface of the pottery created by pressing rope into the clay before firing. Glazed stoneware was being created as early as the 15th century BC in China. A form of Chinese porcelain became a significant Chinese export from the Tang Dynasty (AD 618–906) onwards. Korean potters produced porcelain as early as the 14th century AD. Koreans brought the art of porcelain to Japan in the 17th century AD.
In contrast to Europe, the Chinese elite used pottery extensively at table, for religious purposes, and for decoration, and the standards of fine pottery were very high. From the Song dynasty (960–1279) for several centuries elite taste favoured plain-coloured and exquisitely formed pieces; during this period true porcelain was perfected in Ding ware, although it was the only one of the Five Great Kilns of the Song period to use it. The traditional Chinese category of high-fired wares includes stoneware types such as Ru ware, Longquan celadon, and Guan ware. Painted wares such as Cizhou ware had a lower status, though they were acceptable for making pillows.
The arrival of Chinese blue and white porcelain was probably a product of the Mongol Yuan dynasty (1271–1368) dispersing artists and craftsmen across its large empire. Both the cobalt stains used for the blue colour, and the style of painted decoration, usually based on plant shapes, were initially borrowed from the Islamic world, which the Mongols had also conquered. At the same time Jingdezhen porcelain, produced in Imperial factories, took the undisputed leading role in production, which it has retained to the present day. The new elaborately painted style was now favoured at court, and gradually more colours were added.
The secret of making such porcelain was sought in the Islamic world and later in Europe when examples were imported from the East. Many attempts were made to imitate it in Italy and France. However, it was not produced outside East Asia until 1709, in Germany.
Cord-Impressed style pottery belongs to a 'Mesolithic' ceramic tradition that developed among Vindhya hunter-gatherers in Central India during the Mesolithic period. This ceramic style is also found in the later Proto-Neolithic phase in nearby regions. This early type of pottery, also found at the site of Lahuradewa, is currently the oldest known pottery tradition in South Asia, dating back to 7,000–6,000 BC. Wheel-made pottery began to be made during the Mehrgarh Period II (5,500–4,800 BC) and Mehrgarh Period III (4,800–3,500 BC), known as the ceramic Neolithic and chalcolithic. Pottery, including items known as the ed-Dur vessels, originated in regions of the Saraswati River / Indus River and has been found in a number of sites in the Indus Civilization.
Despite an extensive prehistoric record of pottery, including painted wares, little "fine" or luxury pottery was made in the subcontinent in historic times. Hinduism discourages eating off pottery, which probably largely accounts for this. Most traditional Indian pottery vessels are large pots or jars for storage, or small cups or lamps, often treated as disposable. In contrast there are long traditions of sculpted figures, often rather large, in terracotta.
Pottery in Southeast Asia is as diverse as its ethnic groups. Each ethnic group has its own set of standards when it comes to pottery arts. Pottery is made for various reasons, such as trade, food and beverage storage, kitchen use, religious ceremonies, and burial.
Around 8000 BC during the Pre-pottery Neolithic period, and before the invention of pottery, several early settlements became experts in crafting beautiful and highly sophisticated containers from stone, using materials such as alabaster or granite, and employing sand to shape and polish. Artisans used the veins in the material to maximum visual effect. Such objects have been found in abundance on the upper Euphrates river, in what is today eastern Syria, especially at the site of Bouqras.
The earliest history of pottery production in the Fertile Crescent starts in the Pottery Neolithic and can be divided into four periods, namely: the Hassuna period (7000–6500 BC), the Halaf period (6500–5500 BC), the Ubaid period (5500–4000 BC), and the Uruk period (4000–3100 BC). By about 5000 BC pottery-making was becoming widespread across the region, and spreading out from it to neighbouring areas.
Pottery making in the region began in the 7th millennium BC. The earliest forms, which were found at the Hassuna site, were hand formed from slabs: undecorated, unglazed, low-fired pots made from reddish-brown clays. Within the next millennium, wares were decorated with elaborate painted designs and natural forms, incised and burnished.
The invention of the potter's wheel in Mesopotamia sometime between 6000 and 4000 BC (Ubaid period) revolutionized pottery production. Newer kiln designs could fire wares to between about 1,050 °C and 1,200 °C, which enabled new possibilities and new preparation of clays. Production was now carried out by small groups of potters for small cities, rather than individuals making wares for a family. The shapes and range of uses for ceramics and pottery expanded beyond simple vessels for storing and carrying to specialized cooking utensils, pot stands and rat traps. As the region developed new organizations and political forms, pottery became more elaborate and varied. Some wares were made using moulds, allowing for increased production for the needs of the growing populations. Glazing was commonly used and pottery was more decorated.
In the Chalcolithic period in Mesopotamia, Halafian pottery achieved a level of technical competence and sophistication not seen until the later developments of Greek pottery with Corinthian and Attic ware.
The early inhabitants of Europe developed pottery in the Linear Pottery culture slightly later than the Near East, circa 5500–4500 BC. In the ancient Western Mediterranean elaborately painted earthenware reached very high levels of artistic achievement in the Greek world; there are large numbers of survivals from tombs. Minoan pottery was characterized by complex painted decoration with natural themes. The classical Greek culture began to emerge around 1000 BC, featuring a variety of well-crafted pottery which now included the human form as a decorating motif. The potter's wheel was now in regular use. Although glazing was known to these potters, it was not widely used. Instead, a more porous clay slip was used for decoration. A wide range of shapes for different uses developed early and remained essentially unchanged during Greek history.
Fine Etruscan pottery was heavily influenced by Greek pottery; Etruscan workshops often imported Greek potters and painters. Ancient Roman pottery made much less use of painting, but used moulded decoration, allowing industrialized production on a huge scale. Much of the so-called red Samian ware of the Early Roman Empire was in fact produced in modern Germany and France, where entrepreneurs established large potteries.
Pottery was hardly seen on the tables of elites from Hellenistic times until the Renaissance, and most medieval wares were coarse and utilitarian, as the elites ate off metal vessels. Imports from Asia revived interest in fine pottery, which European manufacturers eventually learned to make, and from the 18th century European porcelain and other wares from a great number of producers became extremely popular.
The English city of Stoke-on-Trent is widely known as "The Potteries" because of the large number of pottery factories or, colloquially, "Pot Banks." It was one of the first industrial cities of the modern era where, as early as 1785, two hundred pottery manufacturers employed 20,000 workers. Josiah Wedgwood (1730–1795) was the dominant leader.
In North Staffordshire hundreds of companies produced all kinds of pottery, from tablewares and decorative pieces to industrial items. The main pottery types of earthenware, stoneware and porcelain were all made in large quantities, and the Staffordshire industry was a major innovator in developing new varieties of ceramic bodies such as bone china and jasperware, as well as pioneering transfer printing and other glazing and decorating techniques. In general Staffordshire was strongest in the middle and low price ranges, though the finest and most expensive types of wares were also made.
By the late 18th century North Staffordshire was the largest producer of ceramics in Britain, despite significant centres elsewhere. Large export markets took Staffordshire pottery around the world, especially in the 19th century. Production had begun to decline in the late 19th century, as other countries developed their industries, and declined steeply after World War II. Some production continues in the area, but at a fraction of the levels at the peak of the industry.
Early Islamic pottery followed the forms of the regions which the Muslims conquered. Eventually, however, there was cross-fertilization between the regions. This was most notable in the Chinese influences on Islamic pottery. Trade between China and the Islamic world took place via the system of trading posts along the lengthy Silk Road. Islamic nations imported stoneware and later porcelain from China. China imported the minerals for cobalt blue from Islamic-ruled Persia to decorate its blue and white porcelain, which it then exported to the Islamic world.
Likewise, Islamic art contributed to a lasting pottery form identified as Hispano-Moresque in Andalucia (Islamic Spain). Unique Islamic forms were also developed, including fritware, lusterware and specialized glazes like tin-glazing, which led to the development of the popular maiolica.
One major emphasis in ceramic development in the Muslim world was the use of tile and decorative tilework.
Most evidence points to an independent development of pottery in the Native American cultures, with the earliest known dates from Brazil, from 9,500 to 5,000 years ago and 7,000 to 6,000 years ago. Further north in Mesoamerica, dates begin with the Archaic Era (3500–2000 BC), and into the Formative period (2000 BC – AD 200). These cultures did not develop the stoneware, porcelain or glazes found in the Old World. Maya ceramics include finely painted vessels, usually beakers, with elaborate scenes with several figures and texts. Several cultures, beginning with the Olmec, made terracotta sculpture, and sculptural pieces of humans or animals that are also vessels are produced in many places, with Moche portrait vessels among the finest.
Evidence indicates an independent invention of pottery in Sub-Saharan Africa. In 2007, Swiss archaeologists discovered pieces of the oldest pottery in Africa at Ounjougou in Central Mali, dating back to at least 9,400 BC. In later periods, a relationship between the introduction of pot-making in some parts of Sub-Saharan Africa and the spread of Bantu languages has long been recognized, although the details remain controversial and await further research; no consensus has been reached.
Ancient Egyptian pottery begins after 5,000 BC, having spread from the Levant. There were many distinct phases of development in pottery, with very sophisticated wares being produced by the Naqada III period, c. 3,200 to 3,000 BC. During the early Mediterranean civilizations of the Fertile Crescent, Egypt developed a non-clay-based ceramic which has come to be called Egyptian faience. A similar type of body is still made in Jaipur in India. During the Umayyad Caliphate, Egypt was a link between the early centres of Islam in the Near East and Iberia, which led to an impressive style of pottery.
It is, however, still valuable to look into pottery as an archaeological record of potential interaction between peoples, especially in areas where little or no written history exists. Because Africa is primarily heavy in oral traditions, and thus lacks a large body of written historical sources, pottery has a valuable archaeological role. When pottery is placed within the context of linguistic and migratory patterns, it becomes an even more prevalent category of social artifact. As proposed by Olivier P. Gosselain, it is possible to understand ranges of cross-cultural interaction by looking closely at the "chaîne opératoire" of ceramic production.
The methods used to produce pottery in early Sub-Saharan Africa are divisible into three categories: techniques visible to the eye (decoration, firing and post-firing techniques), techniques related to the materials (selection or processing of clay, etc.), and techniques of molding or fashioning the clay. These three categories can be used to consider the implications of the recurrence of a particular sort of pottery in different areas. Generally, the techniques that are easily visible (the first category of those mentioned above) are thus readily imitated, and may indicate a more distant connection between groups, such as trade in the same market or even relatively close proximity in settlements. Techniques that require more studied replication (i.e., the selection of clay and the fashioning of clay) may indicate a closer connection between peoples, as these methods are usually only transmissible between potters and those otherwise directly involved in production. Such a relationship requires the ability of the involved parties to communicate effectively, implying pre-existing norms of contact or a shared language between the two. Thus, the patterns of technical diffusion in pot-making that are visible via archaeological findings also reveal patterns in societal interaction.
Polynesia, Melanesia and Micronesia
Pottery has been found in archaeological sites across the islands of Oceania. It is attributed to an ancient archaeological culture called the Lapita. Another form of pottery called Plainware is found throughout sites of Oceania. The relationship between Lapita pottery and Plainware is not altogether clear.
The Indigenous Australians never developed pottery. After Europeans came to Australia and settled, they found deposits of clay which were analysed by English potters as excellent for making pottery. Less than 20 years after settlement, Europeans began producing pottery in Australia. Since then, ceramic manufacturing, mass-produced pottery and studio pottery have flourished there.
The study of pottery can help to provide an insight into past cultures. Pottery is durable, and fragments, at least, often survive long after artefacts made from less-durable materials have decayed past recognition. Combined with other evidence, the study of pottery artefacts is helpful in the development of theories on the organisation, economic condition and the cultural development of the societies that produced or acquired pottery. The study of pottery may also allow inferences to be drawn about a culture's daily life, religion, social relationships, attitudes towards neighbours, attitudes to their own world and even the way the culture understood the universe.
Chronologies based on pottery are often essential for dating non-literate cultures and are often of help in the dating of historic cultures as well. Trace-element analysis, mostly by neutron activation, allows the sources of clay to be accurately identified, and the thermoluminescence test can be used to provide an estimate of the date of last firing. Examining fired pottery shards from prehistory, scientists learned that during high-temperature firing, iron minerals in the clay record the state of Earth's magnetic field at the moment of firing.
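As a rough sketch of the arithmetic behind thermoluminescence dating (a standard relation in the dating literature, not a detail given in this article): the age estimate is the radiation dose accumulated since firing divided by the annual dose rate,

$$\text{age} \approx \frac{D_e}{\dot{D}}$$

where $D_e$ is the equivalent dose stored in the sherd's minerals since the last firing reset the luminescence signal and $\dot{D}$ is the yearly dose from the burial environment. For example, a sherd with $D_e = 30$ Gy buried where $\dot{D} = 5$ mGy per year works out to roughly 6,000 years since its last firing.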
Although many of the environmental effects of pottery production have existed for millennia, some of these have been amplified with modern technology and scales of production. The principal factors for consideration fall into two categories: (a) effects on workers, and (b) effects on the general environment.
The chief risks on worker health include heavy metal poisoning, poor indoor air quality, dangerous sound levels and possible over-illumination.
Historically, "plumbism" (lead poisoning) was a significant health concern to those glazing pottery. This was recognised at least as early as the nineteenth century, and the first legislation in the United Kingdom to limit pottery workers' exposure was introduced in 1899.
Proper ventilation to guarantee adequate indoor air quality can reduce or eliminate workers' exposure to fine particulate matter, carbon monoxide, certain heavy metals, and crystalline silica (which can lead to silicosis). A more recent study at Laney College, Oakland, California suggests that all these factors can be controlled in a well-designed workshop environment.
The primary environmental concerns include off-site water pollution, air pollution, disposal of hazardous materials, and fuel consumption. | https://en.wikipedia.org/wiki?curid=24619 |
Pacta sunt servanda
Pacta sunt servanda (Latin for "agreements must be kept"), a brocard, is a basic principle of civil law, canon law, and international law.
In its most common sense, the principle refers to private contracts, stressing that contained clauses are law between the parties, and implies that nonfulfillment of respective obligations is a breach of the pact. The maxim first appears in the writings of the canonist Cardinal Hostiensis, written in the 13th century but published in the 16th.
In civil law jurisdictions this principle is related to the general principle of correct behavior in commercial practice – including the assumption of "good faith" – which is a requirement for the efficacy of the whole system, so a breach of it is sometimes punished by the law of some systems even without any direct penalty incurred by either party. However, common law jurisdictions usually do not have the principle of good faith in commercial contracts; therefore it is inappropriate to state that "pacta sunt servanda" includes the principle of good faith.
With reference to international agreements, "every treaty in force is binding upon the parties to it and must be performed by them in good faith." This entitles states to require that obligations be respected and to rely upon the obligations being respected. This good faith basis of treaties implies that a party to the treaty cannot invoke provisions of its municipal (domestic) law as justification for a failure to perform. However, with regard to the Vienna Convention and the UNIDROIT Principles, it should be kept in mind that these are heavily influenced by civil law jurisdictions. To derive from these sources that "pacta sunt servanda" includes the principle of good faith is therefore incorrect.
The only limits to "pacta sunt servanda" are the peremptory norms of general international law, called "jus cogens" (compelling law). The legal principle "clausula rebus sic stantibus", part of customary international law, also allows for treaty obligations to be unfulfilled due to a compelling change in circumstances.
Paul Laurence Dunbar
Paul Laurence Dunbar (June 27, 1872 – February 9, 1906) was an American poet, novelist, and playwright of the late 19th and early 20th centuries. Born in Dayton, Ohio to parents who were enslaved in Kentucky before the American Civil War, Dunbar began to write stories and verse as a child and published his first poems at the age of 16 in a Dayton newspaper. He was also president of his high school's literary society.
Much of Dunbar's more popular work in his lifetime was written in the "Negro dialect" associated with the antebellum South, though he also used the Midwestern regional dialect of James Whitcomb Riley. Dunbar's work was praised by William Dean Howells, a leading editor associated with "Harper's Weekly", and Dunbar was one of the first African-American writers to establish an international reputation. He wrote the lyrics for the musical comedy "In Dahomey" (1903), the first all-African-American musical produced on Broadway in New York. The musical later toured in the United States and the United Kingdom.
Dunbar also wrote in conventional English in other poetry and novels. Since the late 20th century, scholars have become more interested in these other works. Suffering from tuberculosis, which then had no cure, Dunbar died in Dayton, Ohio at the age of 33.
Paul Laurence Dunbar was born at 311 Howard Street in Dayton, Ohio, on June 27, 1872, to parents who were enslaved in Kentucky before the American Civil War. After being emancipated, his mother Matilda moved to Dayton with other family members, including her two sons Robert and William from her first marriage. Dunbar's father Joshua escaped from slavery in Kentucky before the war ended. He traveled to Massachusetts and volunteered for the 55th Massachusetts Infantry Regiment, one of the first two black units to serve in the war. The senior Dunbar also served in the 5th Massachusetts Cavalry Regiment. Paul Dunbar was born six months after Joshua and Matilda's wedding on Christmas Eve, 1871.
The marriage of Dunbar's parents was troubled, and Dunbar's mother left Joshua soon after having their second child, a daughter. Joshua died on August 16, 1885, when Paul was 13 years old.
Dunbar wrote his first poem at the age of six and gave his first public recital at the age of nine. His mother assisted him in his schooling, having learned to read expressly for that purpose. She often read the Bible with him, and thought he might become a minister in the African Methodist Episcopal Church. It was the first independent black denomination in America, founded in Philadelphia in the early 19th century.
Dunbar was the only African-American student during his years at Central High School in Dayton. Orville Wright was a classmate and friend. Well-accepted, he was elected as president of the school's literary society, and became the editor of the school newspaper and a debate club member.
At the age of 16, Dunbar published the poems "Our Martyred Soldiers" and "On The River" in 1888 in Dayton's "The Herald" newspaper. In 1890 Dunbar wrote and edited "The Tattler", Dayton's first weekly African-American newspaper. It was printed by the fledgling company of his high-school acquaintances, Wilbur and Orville Wright. The paper lasted six weeks.
After completing his formal schooling in 1891, Dunbar took a job as an elevator operator, earning a salary of four dollars a week. He had hoped to study law, but was not able to because of his mother's limited finances. He was restricted at work because of racial discrimination. The next year, Dunbar asked the Wrights to publish his dialect poems in book form, but the brothers did not have a facility that could print books. They suggested he go to the United Brethren Publishing House which, in 1893, printed Dunbar's first collection of poetry, "Oak and Ivy". Dunbar subsidized the printing of the book, and earned back his investment within two weeks by selling copies personally, often to passengers on his elevator.
The larger section of the book, the "Oak" section, consisted of traditional verse, whereas the smaller section, the "Ivy", featured light poems written in dialect. The work attracted the attention of James Whitcomb Riley, the popular "Hoosier Poet". Both Riley and Dunbar wrote poems in both standard English and dialect.
His literary gifts were recognized, and older men offered to help him financially. Attorney Charles A. Thatcher offered to pay for college, but Dunbar wanted to persist with writing, as he was encouraged by his sales of poetry. Thatcher helped promote Dunbar, arranging readings of his poetry at "libraries and literary gatherings" in the larger city of Toledo. In addition, psychiatrist Henry A. Tobey took an interest and assisted Dunbar by helping distribute his first book in Toledo and sometimes offering him financial aid. Together, Thatcher and Tobey supported the publication of Dunbar's second verse collection, "Majors and Minors" (1896).
Despite frequently publishing poems and occasionally giving public readings, Dunbar had difficulty supporting himself and his mother. Many of his efforts were unpaid and he was a reckless spender, leaving him in debt by the mid-1890s.
On June 27, 1896, the novelist, editor, and critic William Dean Howells published a favorable review of Dunbar's second book, "Majors and Minors", in "Harper's Weekly". Howells' influence brought national attention to the poet's writing. Though Howells praised the "honest thinking and true feeling" in Dunbar's traditional poems, he particularly praised the dialect poems. In this period, there was an appreciation for folk culture, and black dialect was believed to express one type of that. The new literary fame enabled Dunbar to publish his first two books as a collected volume, titled "Lyrics of Lowly Life", which included an introduction by Howells.
Dunbar maintained a lifelong friendship with the Wright brothers. Through his poetry, he met and became associated with black leaders Frederick Douglass and Booker T. Washington, and was close to his contemporary James D. Corrothers. Dunbar also became a friend of Brand Whitlock, a journalist in Toledo who went to work in Chicago. Whitlock joined the state government and had a political and diplomatic career.
By the late 1890s, Dunbar started to explore the short story and novel forms; in the latter, he frequently featured white characters and society.
Dunbar was prolific during his relatively short career: he published a dozen books of poetry, four books of short stories, four novels, lyrics for a musical, and a play.
His first collection of short stories, "Folks From Dixie" (1898), a sometimes "harsh examination of racial prejudice", had favorable reviews.
This was not the case for his first novel, "The Uncalled" (1898), which critics described as "dull and unconvincing". Dunbar explored the spiritual struggles of a white minister, Frederick Brent, who had been abandoned as a child by his alcoholic father and raised by a virtuous white spinster, Hester Prime. (Both the minister's and the woman's names recalled Nathaniel Hawthorne's "The Scarlet Letter", which featured a central character named Hester Prynne.) With this novel, Dunbar has been noted as one of the first African Americans to cross the "color line" by writing a work solely about white society. Critics at the time complained about his handling of the material, not his subject. The novel was not a commercial success.
Dunbar's next two novels also explored lives and issues in white culture, and some contemporary critics found these lacking as well. However, literary critic Rebecca Ruth Gould argues that one of these, "The Sport of the Gods", culminates as an object lesson in the power of shame – a key component of the scapegoat mentality – to limit the law’s capacity to deliver justice.
In collaboration with the composer Will Marion Cook, and Jesse A. Shipp, who wrote the libretto, Dunbar wrote the lyrics for "In Dahomey," the first musical written and performed entirely by African Americans. It was produced on Broadway in 1903; the musical comedy successfully toured England and the United States over a period of four years and was one of the more successful theatrical productions of its time.
Dunbar's essays and poems were published widely in the leading journals of the day, including "Harper's Weekly", the "Saturday Evening Post", the "Denver Post", "Current Literature" and others. During his life, commentators often noted that Dunbar appeared to be purely black African, at a time when many leading members of the African-American community were notably of mixed race, often with considerable European ancestry.
In 1897 Dunbar traveled to England for a literary tour; he recited his works on the London circuit. He met the young black composer Samuel Coleridge-Taylor, who set some of Dunbar's poems to music. Coleridge-Taylor was influenced by Dunbar to use African and American Negro songs and tunes in future compositions. Also living in London at the time, African-American playwright Henry Francis Downing arranged a joint recital for Dunbar and Coleridge-Taylor, under the patronage of John Hay, a former aide to President Abraham Lincoln, and at that time the American ambassador to Great Britain. Downing also lodged Dunbar in London while the poet worked on his first novel, "The Uncalled" (1898).
Dunbar was active in the area of civil rights and the uplifting of African Americans. He was a participant in the March 5, 1897, meeting to celebrate the memory of abolitionist Frederick Douglass. The attendees worked to found the American Negro Academy under Alexander Crummell.
After returning from the United Kingdom, Dunbar married Alice Ruth Moore, on March 6, 1898. She was a teacher and poet from New Orleans whom he had met three years earlier. Dunbar called her "the sweetest, smartest little girl I ever saw". A graduate of Straight University (now Dillard University), a historically black college, Moore is best known for her short story collection, "Violets". She and her husband also wrote books of poetry as companion pieces. An account of their love, life and marriage was portrayed in "Oak and Ivy," a 2001 play by Kathleen McGhee-Anderson.
In October 1897 Dunbar took a job at the Library of Congress in Washington, DC. He and his wife moved to the capital, where they lived in the comfortable LeDroit Park neighborhood. At the urging of his wife, Dunbar soon left the job to focus on his writing, which he promoted through public readings. While in Washington, DC, Dunbar attended Howard University after the publication of "Lyrics of Lowly Life".
In 1900, he was diagnosed with tuberculosis (TB), then often fatal, and his doctors recommended drinking whisky to alleviate his symptoms. On the advice of his doctors, he moved to Colorado with his wife, as the cold, dry mountain air was considered favorable for TB patients. Dunbar and his wife separated in 1902, but they never divorced. Depression and declining health drove him to a dependence on alcohol, which further damaged his health.
Dunbar returned to Dayton in 1904 to be with his mother. He died of tuberculosis on February 9, 1906, at the age of 33. He was interred in the Woodland Cemetery in Dayton.
Dunbar's work is known for its colorful language and a conversational tone, with a brilliant rhetorical structure. These traits were well matched to the tune-writing ability of Carrie Jacobs-Bond (1862–1946), with whom he collaborated.
Dunbar wrote much of his work in conventional English, while using African-American dialect for some of it, as well as regional dialects. Dunbar felt there was something suspect about the marketability of dialect poems, as if blacks were limited to a constrained form of expression not associated with the educated class. One interviewer reported that Dunbar told him, "I am tired, so tired of dialect", though he is also quoted as saying, "my natural speech is dialect" and "my love is for the Negro pieces".
Dunbar credited William Dean Howells with promoting his early success, but was dismayed at the critic's encouragement that he concentrate on dialect poetry. Angered that editors refused to print his more traditional poems, Dunbar accused Howells of "[doing] me irrevocable harm in the dictum he laid down regarding my dialect verse." Dunbar was continuing in a literary tradition that used Negro dialect; his predecessors included such writers as Mark Twain, Joel Chandler Harris and George Washington Cable.
Two brief examples of Dunbar's work, the first in standard English and the second in dialect, demonstrate the diversity of the poet's works:
Dunbar became the first African-American poet to earn national distinction and acceptance. "The New York Times" called him "a true singer of the people – white or black." Frederick Douglass once referred to Dunbar as, "one of the sweetest songsters his race has produced and a man of whom [he hoped] great things."
His friend and writer James Weldon Johnson highly praised Dunbar, writing in "The Book of American Negro Poetry:"
"Paul Laurence Dunbar stands out as the first poet from the Negro race in the United States to show a combined mastery over poetic material and poetic technique, to reveal innate literary distinction in what he wrote, and to maintain a high level of performance. He was the first to rise to a height from which he could take a perspective view of his own race. He was the first to see objectively its humor, its superstitions, its short-comings; the first to feel sympathetically its heart-wounds, its yearnings, its aspirations, and to voice them all in a purely literary form."
This collection was published in 1931, following the Harlem Renaissance, which led to a great outpouring of literary and artistic works by blacks. They explored new topics, expressing ideas about urban life and migration to the North. In his writing, Johnson also criticized Dunbar for his dialect poems, saying they had fostered stereotypes of blacks as comical or pathetic, and reinforced the restriction that blacks write only about scenes of antebellum plantation life in the South.
Dunbar has continued to influence other writers, lyricists, and composers. Composer William Grant Still used excerpts from four dialect poems by Dunbar as epigraphs for the four movements of his Symphony No. 1 in A-flat, "Afro-American" (1930). When it premiered the next year, it was the first symphony by an African American to be performed by a major orchestra for a US audience.
Maya Angelou titled her autobiography, "I Know Why the Caged Bird Sings" (1969), from a line in Dunbar's poem "Sympathy", at the suggestion of jazz musician and activist Abbey Lincoln. Angelou said that Dunbar's works had inspired her "writing ambition." She returns to his symbol of a caged bird as a chained slave in much of her writings.
Numerous schools and places have been named in honor of Dunbar, including lower schools, college buildings, and other institutions.
Pop music
Pop music is a genre of popular music that originated in its modern forms in the United States and the United Kingdom during the mid-1950s. The terms "popular music" and "pop music" are often used interchangeably, although the former describes all music that is popular and includes many diverse styles. "Pop" and "rock" were roughly synonymous terms until the late 1960s, when they became increasingly differentiated from each other.
Although much of the music that appears on record charts is seen as pop music, the genre is distinguished from chart music. Pop music often borrows elements from other styles such as urban, dance, rock, Latin, and country; nevertheless, there are many key elements that define pop music. Identifying factors usually include short to medium-length songs written in a basic format (often the verse-chorus structure), as well as common use of repeated choruses, melodic tunes, melancholy or sad lyrics, and hooks.
David Hatch and Stephen Millward define pop music as "a body of music which is distinguishable from popular, jazz, and folk musics".
According to Pete Seeger, pop music is "professional music which draws upon both folk music and fine arts music".
Although pop music is seen as just the singles charts, it is not the sum of all chart music. The music charts contain songs from a variety of sources, including classical, jazz, rock, and novelty songs. As a genre, pop music is seen to exist and develop separately. Therefore, the term "pop music" may be used to describe a distinct genre, designed to appeal to all, often characterized as "instant singles-based music aimed at teenagers" in contrast to rock music as "album-based music for adults".
Pop music continuously evolves along with the term's definition. According to music writer Bill Lamb, popular music is defined as "the music since industrialization in the 1800s that is most in line with the tastes and interests of the urban middle class." The term "pop song" was first used in 1926, in the sense of a piece of music "having popular appeal". Hatch and Millward indicate that many events in the history of recording in the 1920s can be seen as the birth of the modern pop music industry, including in country, blues, and hillbilly music.
According to the website of "The New Grove Dictionary of Music and Musicians", the term "pop music" "originated in Britain in the mid-1950s as a description for rock and roll and the new youth music styles that it influenced". "The Oxford Dictionary of Music" states that while pop's "earlier meaning meant concerts appealing to a wide audience [...] since the late 1950s, however, pop has had the special meaning of non-classical mus[ic], usually in the form of songs, performed by such artists as The Beatles, The Rolling Stones, ABBA, etc." "Grove Music Online" also states that "[...] in the early 1960s, [the term] 'pop music' competed terminologically with beat music [in England], while in the US its coverage overlapped (as it still does) with that of 'rock and roll'".
From about 1967, the term “pop music” was increasingly used in opposition to the term rock music, a division that gave generic significance to both terms. While rock aspired to authenticity and an expansion of the possibilities of popular music, pop was more commercial, ephemeral, and accessible. According to British musicologist Simon Frith, pop music is produced "as a matter of enterprise not art", and is "designed to appeal to everyone" but "doesn't come from any particular place or mark off any particular taste". Frith adds that it is "not driven by any significant ambition except profit and commercial reward [...] and, in musical terms, it is essentially conservative". It is, "provided from on high (by record companies, radio programmers, and concert promoters) rather than being made from below ... Pop is not a do-it-yourself music but is professionally produced and packaged".
According to Frith, characteristics of pop music include an aim of appealing to a general audience, rather than to a particular sub-culture or ideology, and an emphasis on craftsmanship rather than formal "artistic" qualities. Music scholar Timothy Warner said it typically has an emphasis on recording, production, and technology, rather than live performance; a tendency to reflect existing trends rather than progressive developments; and aims to encourage dancing or uses dance-oriented rhythms.
The main medium of pop music is the song, often between two and a half and three and a half minutes in length, generally marked by a consistent and noticeable rhythmic element, a mainstream style and a simple traditional structure. Common variants include the verse-chorus form and the thirty-two-bar form, with a focus on melodies and catchy hooks, and a chorus that contrasts melodically, rhythmically and harmonically with the verse. The beat and the melodies tend to be simple, with limited harmonic accompaniment. The lyrics of modern pop songs typically focus on simple themes – often love and romantic relationships – although there are notable exceptions.
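For instance, in the thirty-two-bar form two eight-bar A sections are followed by an eight-bar contrasting bridge (B) and a final eight-bar A section, giving the AABA pattern familiar from Tin Pan Alley songwriting.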
Harmony and chord progressions in pop music are often "that of classical European tonality, only more simple-minded." Clichés include the barbershop quartet-style harmony (i.e. ii – V – I) and blues scale-influenced harmony. There was a lessening of the influence of traditional views of the circle of fifths between the mid-1950s and the late 1970s, including less predominance for the dominant function.
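To take a concrete example, in the key of C major a ii – V – I progression is realised as D minor – G major – C major, a cadential pattern pervasive in pop harmony.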
In the 1940s, improved microphone design allowed a more intimate singing style and ten or twenty years later, inexpensive and more durable 45 rpm records for singles "revolutionized the manner in which pop has been disseminated", which helped to move pop music to "a record/radio/film star system". Another technological change was the widespread availability of television in the 1950s; with televised performances, "pop stars had to have a visual presence". In the 1960s, the introduction of inexpensive, portable transistor radios meant that teenagers in the developed world could listen to music outside of the home. By the early 1980s, the promotion of pop music had been greatly affected by the rise of music television channels like MTV, which "favoured those artists such as Michael Jackson and Madonna who had a strong visual appeal".
Multi-track recording (from the 1960s) and digital sampling (from the 1980s) have also been utilized as methods for the creation and elaboration of pop music. During the mid-1960s, pop music made repeated forays into new sounds, styles, and techniques that inspired public discourse among its listeners. The word "progressive" was frequently used, and it was thought that every song and single was to be a "progression" from the last. Music critic Simon Reynolds writes that beginning with 1967, a divide would exist between "progressive" pop and "mass/chart" pop, a separation which was "also, broadly, one between boys and girls, middle-class and working-class."
The latter half of the 20th century included a large-scale trend in American culture in which the boundaries between art and pop music were increasingly blurred. Between 1950 and 1970, there was a debate over pop versus art. Since then, certain music publications have embraced the music's legitimacy, a trend referred to as "poptimism".
Throughout its development, pop music has absorbed influences from other genres of popular music. Early pop music drew on the sentimental ballad for its form, gained its use of vocal harmonies from gospel and soul music, instrumentation from jazz and rock music, orchestration from classical music, tempo from dance music, backing from electronic music, rhythmic elements from hip-hop music, and spoken passages from rap. In 2016, a "Scientific Reports" study that examined over 464,000 recordings of popular music recorded between 1955 and 2010 found that since the 1960s, pop music had exhibited less variety in pitch progressions, higher average loudness levels, less diverse instrumentation and recording techniques, and less timbral variety. "Scientific American"'s John Matson reported that this "seems to support the popular anecdotal observation that pop music of yore was 'better', or at least more varied, than today's top-40 stuff". However, he also noted that the study may not have been entirely representative of pop in each generation.
In the 1960s, the majority of mainstream pop music fell in two categories: guitar, drum and bass groups or singers backed by a traditional orchestra. Since early in the decade, it was common for pop producers, songwriters, and engineers to freely experiment with musical form, orchestration, unnatural reverb, and other sound effects. Some of the best known examples are Phil Spector's Wall of Sound and Joe Meek's use of homemade electronic sound effects for acts like the Tornados. At the same time, pop music on radio and in both American and British film moved away from refined Tin Pan Alley to more eccentric songwriting and incorporated reverb-drenched rock guitar, symphonic strings, and horns played by groups of properly arranged and rehearsed studio musicians. In a 2019 study conducted by New York University, in which 643 participants ranked how familiar a pop song was to them, songs from the 1960s turned out to be the most memorable, significantly more so than songs from the years 2000 to 2015.
Before the progressive pop of the late 1960s, performers were typically unable to decide on the artistic content of their music. Assisted by the mid-1960s economic boom, record labels began investing in artists, giving them the freedom to experiment, and offering them limited control over their content and marketing. This situation declined after the late 1970s and would not reemerge until the rise of Internet stars. Indie pop, which developed in the late 1970s, marked another departure from the glamour of contemporary pop music, with guitar bands formed on the then-novel premise that one could record and release their own music without having to procure a record contract from a major label.
Latin pop rose in popularity in the United States during the 1950s with early rock and roll success Ritchie Valens, though it truly rose to prominence during the 1970s and 1980s with the likes of Los Lobos. With later Hispanic and Latino Americans seeing success within pop music charts, 1990s pop successes stayed popular in both their original genres and in broader pop music. Artists like Selena saw large-scale pop music presence, and crossover appeal with fans of Tejano music trailblazers like Lydia Mendoza; likewise in other genres like Lorenzo Antonio and Sparx, who are recognized throughout Latin America as well as within the New Mexico music tradition of Al Hurricane and Antonia Apodaca. Musicians like Shakira, Ricky Martin, Selena Gomez, and Demi Lovato have seen lasting mass appeal within pop music circles. Latin pop hit singles, such as "Macarena" by Los del Río and "Despacito" by Luis Fonsi, have seen record-breaking success on worldwide pop music charts.
The 1980s are commonly remembered for an increase in the use of digital recording, associated with the usage of synthesizers, with synth-pop music and other electronic genres featuring non-traditional instruments increasing in popularity. By 2014, pop music worldwide had been permeated by electronic dance music. In 2018, researchers at the University of California, Irvine, concluded that pop music has become 'sadder' since the 1980s: elements of happiness and brightness have gradually been replaced by electronic beats, making pop music 'sad yet danceable'.
Pop music has been dominated by the American and (from the mid-1960s) British music industries, whose influence has made pop music something of an international monoculture, but most regions and countries have their own form of pop music, sometimes producing local versions of wider trends, and lending them local characteristics. Some of these trends (for example Europop) have had a significant impact on the development of the genre.
According to "Grove Music Online", "Western-derived pop styles, whether coexisting with or marginalizing distinctively local genres, have spread throughout the world and have come to constitute stylistic common denominators in global commercial music cultures". Some non-Western countries, such as Japan, have developed a thriving pop music industry, most of which is devoted to Western-style pop. Japan has for several years produced a greater quantity of music than everywhere except the US. The spread of Western-style pop music has been interpreted variously as representing processes of Americanization, homogenization, modernization, creative appropriation, cultural imperialism, or a more general process of globalization.
In Korea, pop music's influence has led to the birth of boy bands and girl groups which have gained overseas renown through both their music and aesthetics. Korean co-ed groups (mixed gender groups) have not been as successful. | https://en.wikipedia.org/wiki?curid=24624 |
Paul Wertico
Paul Wertico (born January 5, 1953 in Chicago, Illinois) is an American drummer. He gained recognition as a member of the Pat Metheny Group from 1983 until 2001, leaving the group to spend more time with his family and to pursue other musical interests.
After Pat Metheny heard the Simon and Bard Group with Wertico and bassist Steve Rodby, he invited both to join his band. During his time with Metheny, Wertico played on ten albums and four videos, appeared on television, and toured around the world. He won seven Grammy Awards (for "Best Jazz Fusion Performance," "Best Contemporary Jazz Performance," and "Best Rock Instrumental Performance"), magazine polls, and received several gold records.
He formed the Paul Wertico Trio with John Moulder and Eric Hochberg and collaborated with Larry Coryell, Kurt Elling, and Jeff Berlin. From 2000 to 2007, he was a member of SBB, the platinum-record-winning Polish progressive rock band. Wertico was a member of the Larry Coryell Power Trio until Coryell's death in 2017.
In 2009, Wertico became a member of the jazz-rock group Marbin with Israeli musicians Danny Markovitch and Dani Rabin. The group performed as Paul Wertico's Mid-East/Mid-West Alliance and recorded an album for the Chicago Sessions label that received accolades from the "Chicago Tribune", "DRUM!", and "Modern Drummer".
Wertico formed Wertico Cain & Gray with multi-instrumentalists David Cain and Larry Gray. Their debut album "Sound Portraits" (2013) won Best Live Performance Album in the 13th Annual Independent Music Awards, and their fourth album "Realization" (2015) was nominated for Best Live Performance Album and Best Long Form Video in the 15th Annual Independent Music Awards.
He has worked with Frank Catalano, Eddie Harris, Lee Konitz, Dave Liebman, Sam Rivers, Bob Mintzer, Terry Gibbs, Buddy DeFranco, Roscoe Mitchell, Evan Parker, Jay McShann, Herbie Mann, Randy Brecker, Jerry Goodman, and Ramsey Lewis.
He played drums on Paul Winter's 1990 Grammy-nominated album, "Earth: Voices of a Planet". He played on vocalist Kurt Elling's 1995 Grammy-nominated album, "Close Your Eyes", as well as Elling's 1997 Grammy-nominated album, "The Messenger", 1998 Grammy-nominated album, "This Time It's Love", and 2003 Grammy-nominated album, "Man in the Air".
He hosted his own radio show, "Paul Wertico's Wild World of Jazz", from 2010 to 2012. As Musical Director for the crowdsourced TV video series, "Inventing the Future", Wertico was nominated for a 2012-2013 Emmy Award in the “Outstanding Achievement In Interactivity” category by The National Academy of Television Arts & Sciences, Midwest Chapter.
He is the inventor of TUBZ, manufactured by Pro-Mark, the company that also makes the "Paul Wertico Signature Drum Stick".
Wertico is very active in the field of education. In addition to teaching drums privately for 45 years, he is an Associate Professor of Jazz Studies at the Chicago College of Performing Arts of Roosevelt University in Chicago, and he also headed the school’s Jazz & Contemporary Music Studies program for five years. He served on the faculty of the percussion and jazz-studies programs at the Bienen School of Music at Northwestern University in Evanston, Illinois for 16 years, and taught at the Bloom School of Jazz in Chicago for several years.
He has written educational articles for "Modern Drummer", "DRUM!", "Drums & Drumming", "Drum Tracks", and "DownBeat", and for Musician.com. He serves on the advisory board of "Modern Drummer", and is also one of their Pro-Panelists.
Wertico served five terms on the Board of Governors of The Recording Academy Chicago Chapter the National Academy of Recording Arts and Sciences (NARAS), as well as serving on both the Advisory Board and the Education Committee of The Jazz Institute of Chicago.
Wertico has performed numerous drum clinics and master classes at universities, high schools, and music stores in the U.S. and around the world, including Drummers Collective in NYC, Percussion Institute of Technology in LA, North Texas State University, and the University of Miami, as well as Musicians Institute in England, Drummers Institute in Germany, Università della Musica in Italy, Escuela de Música de Buenos Aires in Argentina, and the Rimon School of Jazz in Israel.
He has been a featured clinician/soloist at numerous international drum festivals including Canada’s Cape Breton International Drum Festival and the Montréal Drum Fest, Uruguay’s Montevideo Drum Festival, the CrossDrumming International Festival of Percussive Arts in Poland, the Mendoza International Drum Festival in Argentina, the Percussion Camp International Percussion Festival in Greece, and the First International Drummers Week in Venezuela, as well as at schools and festivals in New Zealand, Chile, Mexico, Russia, Hungary, France, Sweden, Ireland, and Spain.
He also performed at the 1994, 1999 & 2002 Percussive Arts Society International Conventions, the 1997 Modern Drummer Drum Festival (and appeared in videos of two of those events), and the 2005, 2013 & 2014 Chicago Drum Shows. Wertico is featured in the Drum Workshop videos, "The American Dream II", "The American Dream III" and "Masters of Resonance".
He has released two instructional videos: "Fine-Tuning Your Performance" and "Paul Wertico's Drum Philosophy"; the latter, also available on DVD, was named “One of the best drum videos of the last 25 years” by "Modern Drummer" magazine.
Wertico’s drum instructional book "TURN THE BEAT AROUND" (A Drummer’s Guide to Playing “Backbeats” on 1 & 3) was published by Alfred Music on July 7, 2017.
Wertico has also released numerous recordings as co-leader: a self-titled LP, "Earwax Control", and a live Earwax Control CD entitled, "Number 2 Live"; a self-titled LP, "Spontaneous Composition"; a drum/percussion duo CD (with Gregg Bendian) entitled "BANG!"; a double-guitar/double-drum three-CD set (with Derek Bailey, Pat Metheny and Bendian) entitled "The Sign Of 4"; and two piano/bass/drums trio CDs (with Laurence Hobgood and Brian Torff) entitled "Union" and "State of the Union".
Some of his latest releases include a DVD & CD by David Cain & Paul Wertico entitled "Feast for the Senses"; a CD by Paul Wertico & Frank Catalano entitled "Topics of Conversation"; a CD by Fabrizio Mocata, Gianmarco Scaglia & Paul Wertico entitled "Free the Opera!"; a DVD & CD by Wertico Cain & Gray entitled "Sound Portraits", winner of Best Live Performance Album in the 13th Annual Independent Music Awards (2014); Wertico Cain & Gray's second CD, "Out in SPACE"; Wertico Cain & Gray's second DVD & third CD, "Organic Architecture"; Wertico Cain & Gray's fourth CD & video release, "Realization"; Wertico Cain & Gray's fifth CD, "Short Cuts – 40 Improvisations"; Wertico Cain & Gray's sixth CD, "AfterLive"; and Wertico Cain & Gray's seventh CD & downloadable video release, "Without Compromise". The Paul Wertico Trio also recently released a CD celebrating the trio's 25th anniversary, entitled "First Date". Two further releases, a CD entitled "Dynamics In Meditation" by The Gianmarco Scaglia & Paul Wertico Quartet and a double-CD entitled "Live Under Italian Skies" by The Paul Wertico/John Helliwell Project, are due in 2020.
Wertico's debut CD as a leader, "The Yin and the Yout", received a four-star rating in "DownBeat". His 1998 trio CD, "Live in Warsaw!", received four and a half stars from "DownBeat" and featured guitarist John Moulder and bassist Eric Hochberg. The trio's 2000 studio recording, entitled "Don't Be Scared Anymore", received reviews of "This album is like the soundtrack to the world's coolest vacation" from "All About Jazz" and "Jazz-rock in the truest sense" from "Allmusic".
Wertico's 2004 CD, "StereoNucleosis", was released to extremely positive reviews. The "Chicago Tribune" wrote: "A brilliant release – Wertico shows a thrilling disregard for stylistic boundaries. "StereoNucleosis" is one of the most intelligent, creative and alluring percussion recordings of the past decade. Wertico reaffirms his position among the most restlessly inventive drummers working today." "Allmusic" reported: "Wertico and his players have done something wonderful and rare: they've actually created something not only different, but also truly new." "LA Weekly" wrote: "His recent records, such as 2000's "Don't Be Scared Anymore" and the new "StereoNucleosis" are stunning examples of the electronic, rhythmic and intellectual directions jazz could be going." Wertico's 2006 CD, "Another Side", was released on the audiophile Naim Label; it was described as "a brilliant collaborative effort between these three uniquely talented musicians."
His 2010 CD, "Impressions of a City", featuring his band, Paul Wertico's Mid-East/Mid-West Alliance, has been described in reviews as "One of the most impressively spontaneous albums you'll find on this planet – or any other"; "Haunting and memorable…an engaging musical experiment and one that is highly unique."; "This is musical narrative at its finest. A fanfare for the common (and mechanically exploited) 21st century man and woman."; "Sometimes beautiful, other times tense or just plain spooky, "Impressions of a City" ought to go some way toward correcting the dubious reputation of avant-garde music."; "A wildly unpredictable journey into one man's apparently inexhaustible sonic imagination."
"DownBeat" magazine awarded it four and a half stars, listed it as of the Best CDs of 2010, and wrote "What makes the music work is not only that Wertico is not content to just "play it straight" as a drummer but that his skills as a conceptualist/leader may even be greater. A heads-up for all budding drummers (check out Wertico's inventive pause of a solo on "My Side of the Story") who would like to hear and create music that goes beyond just keeping time."
This band also released a live in concert DVD, entitled "Live from SPACE", that has been reviewed by the "Chicago Examiner" as "More than setting tones, moods, and the stage for future, like-minded experimentation, these talented musicians have managed to also push the limits of what jazz can be, while entertaining a wider form of audience."; and thiszine.org wrote "For Wertico fans, this DVD is a must have, showcasing innovative, finely tuned jazz talent. For new fans of modern jazz, this would be a staple, and a great place to start before your journey backwards."
In 2007 Wertico and Brian Peters released their CD, "Ampersand", which Drummerszone.com called "Simply a musical masterpiece" and "Classic Drummer" described as "one of the most ambitious records ever released. Recorded over a period of four years, it documents a completely new approach to combine elements of both rock and jazz music while resulting in a very listenable and captivating final product." That same year he released "Jazz Impressions 1" with pianist Silvano Monasterios and bassist Mark Egan. "Chicago Jazz" wrote: "From the first note of "Jazz Impressions 1", you know you're in for something interesting and different. What these three do with that format, however, is nothing short of breathtaking."
Porsche 356
The Porsche 356 is a sports car first produced by Austrian company Porsche Konstruktionen GesmbH (1948–1949), and then by German company Dr. Ing. h. c. F. Porsche GmbH (1950–1965). It was Porsche's first production automobile. Earlier cars designed by the Austrian company include the Cisitalia Grand Prix race car, the Volkswagen Beetle, and Auto Union Grand Prix cars.
The 356 is a lightweight, nimble-handling, rear-engine, rear-wheel-drive, two-door sports car available in both hardtop coupé and open configurations. Engineering innovations continued during the years of manufacture, contributing to its motorsports success and popularity. Production started in 1948 at Gmünd, Austria, where approximately 50 cars were built. In 1950 the factory relocated to Zuffenhausen, Germany, and general production of the 356 continued until April 1965, well after the replacement model 911 made its autumn 1964 debut. Of the 76,000 originally produced, approximately half survive.
The original price in 1948 for the 356 coupe was US$3,750. The 356 cabriolet cost US$4,250.
Prior to World War II Porsche designed and built three Type 64 cars for a 1939 Berlin-to-Rome race that was cancelled. In 1948 the mid-engine, tubular chassis 356 prototype called "No. 1" was completed. This led to some debate as to the "first" Porsche automobile. Although the original Porsche 356 unit had a rear-mid engine placement, the rear-engined 356 is considered by Porsche to be its first production model.
The 356 was created by Ferdinand "Ferry" Porsche (son of Ferdinand Porsche, founder of the German company), who founded the Austrian company with his sister, Louise. Like its cousin, the Volkswagen Beetle (which Ferdinand Porsche Sr. had designed), the 356 is a four-cylinder, air-cooled, rear-engine, rear-wheel drive car with unitized pan and body construction. The chassis was a completely new design, as was the 356's body, which was designed by Porsche employee Erwin Komenda, while certain mechanical components including the engine case and some suspension components were based on and initially sourced from Volkswagen. Ferry Porsche described the thinking behind the development of the 356 in an interview with the editor of "Panorama", the PCA magazine, in September 1972: "... I had always driven very speedy cars. I had an Alfa Romeo, also a BMW and others. ... By the end of the war I had a Volkswagen Cabriolet with a supercharged engine and that was the basic idea. I saw that if you had enough power in a small car it is nicer to drive than if you have a big car which is also overpowered. And it is more fun. On this basic idea we started the first Porsche prototype. To make the car lighter, to have an engine with more horsepower ... that was the first two-seater that we built in Carinthia (Gmünd)".
The first 356 was road certified in Austria on June 8, 1948, and was entered in a race in Innsbruck, where it won its class. Porsche re-engineered and refined the car with a focus on performance. Fewer and fewer parts were shared between Volkswagen and Porsche as the 1950s progressed. The early 356 automobile bodies produced at Gmünd were handcrafted in aluminum, but when production moved to Zuffenhausen, Germany, in 1950, models produced there were steel-bodied. The aluminum-bodied cars from that very small company are now referred to as "prototypes". Porsche contracted Reutter to build the steel bodies and eventually bought the Reutter company in 1963. The Reutter company retained the seat manufacturing part of the business and changed its name to "Recaro".
Little noticed at its inception except by a small number of auto racing enthusiasts, the first 356s sold primarily in Austria and Germany. It took Porsche two years, starting with the first prototype in 1948, to manufacture the first 50 automobiles. By the early 1950s the 356 had gained some renown among enthusiasts on both sides of the Atlantic for its aerodynamics, handling, and excellent build quality. The class win at Le Mans in 1951 was a factor. It was common for owners to race their cars as well as drive them on the street. Porsche introduced the four-cam racing "Carrera" engine, a totally new design unique to its sports cars, in late 1954. Increasing success with its racing and road cars brought Porsche orders for over 10,000 units in 1964, and by the time 356 production ended in 1965 approximately 76,000 had been produced.
The 356 was built in four distinct series, the original ("pre-A"), followed by the 356 A, 356 B, and finally the 356 C. To distinguish among the major revisions of the model, 356s are generally classified into a few major groups. The 356 coupés and "cabriolets" (soft-tops) built through 1955 are readily identifiable by their split (1948 to 1952) or bent (centre-creased, 1953 to 1955) windscreens. In late 1955 the 356 A appeared, with a curved windshield. The A was the first road going Porsche to offer the Carrera four-cam engine as an option. In late 1959 the T5 356 B appeared; followed by the redesigned T6 series 356 B in 1962. The final version was the 356 C, little changed from the late T6 B cars but disc brakes replaced the drums.
Prior to completion of 356 production, Porsche had developed a higher-revving 616/36 version of the 356's four-cylinder pushrod engine for installation in a new 912 model that commenced production in April 1965. Although the 912 used numerous 356 components, Porsche did not intend for the 912 to replace the 356.
When the decision was made to replace the 356, the 901 (later 911) was the road car designed to carry the Porsche name forward. The 912 was developed as the "standard version" of the 911 at the price of a 356 1600 SC, while the complex but faster and heavier six-cylinder 911 was priced more than fifty percent higher. Customers purchased nearly 33,000 912 coupés and Targas powered by the Type 616 engine that had served Porsche so well during the 356 era.
From the earliest, 1,100 cc Gmünd beginnings, the overall shape of the 356 remained more or less set. In 1951, 1,300 and 1,500 cc engines with considerably more power were introduced. In late 1952 the split windscreen was replaced by a slightly V-shaped, single windshield, which fit into the same shape opening. In 1953, the 1300 S or "Super" was introduced, and the 1,100 cc engine was dropped.
In late-1954 Max Hoffman, the sole US importer of Porsches, convinced Porsche to build a stripped down roadster version with minimal equipment and a cut-down windscreen.
Towards the end of the original 356's time (in 1955, when the 356 A was about to be introduced) Hoffman, wanting a model name rather than just a number, got the factory to use the name "Continental" which was applied mostly to cars sold in the United States. Ford, makers of the Lincoln Continental, sued. This name was used only in 1955 and today this version is especially valued. For 1956, the equivalent version was briefly sold as the "European". Today all of the earliest Porsches are highly coveted by collectors and enthusiasts worldwide, based on their design, reliability and sporting performance.
In late 1955, with numerous small but significant changes, the 356 A was introduced. Its internal factory designation, "Type 1", gave rise to its nickname "T1" among enthusiasts. In the US 1,200 early 356s had been badged as the "Continental" and then a further 156 from autumn 1955 to January 1956 as an even rarer T1 “European” variant after which it reverted to its numerical 356 designation. In early 1957 a second revision of the 356 A was produced, known as Type 2 (or T2). Production of the Speedster peaked at 1,171 cars in 1957 and then started to decline. The four-cam "Carrera" engine, initially available only in the spyder race cars, became an available option starting with the 356 A.
Within the last 25 years, replicas of the 356 A have become very popular.
The most typical engine was an air-cooled, naturally aspirated four-cylinder boxer with a pushrod OHV valvetrain (two valves per cylinder) and dual downdraft Zenith carburetors, producing peak power at 4,500 rpm and maximum torque at 2,800 rpm.
In late 1959 significant styling and technical refinements gave rise to the 356 B (a T5 body type). The mid-1962 356 B model was changed to the T6 body type (twin engine lid grilles, an external fuel filler in the right front wing/fender and a larger rear window in the coupé). The Porsche factory did not call attention to these quite visible changes with a different model designation. However, when the T6 got disc brakes, with no other visible alterations, they called it the model C, or the SC when it had the optional extra powerful engine. A unique "Karmann hardtop" or "notchback" 356 B model was produced in 1961 and 1962. The 1961 production run (T5) was essentially a cabriolet body with the optional steel cabriolet hardtop welded in place. The 1962 line (T6 production) was a very different design in that the new T6 notchback coupé body did not start life as a cabriolet, but with its own production design—In essence, part cabriolet rear end design, part T6 coupé windshield frame, unique hard top. Both years of these models have taken the name "Karmann notchback".
The last revision of the 356 was the 356 C introduced for the 1964 model year. It featured disc brakes all around, as well as an option for the most powerful pushrod engine Porsche had ever produced, the "SC". Production of the 356 peaked at 14,151 cars in 1964, the year that its successor, the new 911, was introduced to the US market (it was introduced slightly earlier in Europe). The company continued to sell the 356 C in North America through 1965 as demand for the model remained quite strong in the early days of the heavier and more "civilized" 911. The last ten 356s (cabriolets) were assembled for the Dutch police force in March 1966 as 1965 models.
In 1953 Studebaker contacted Porsche to develop a new engine, but Porsche instead developed an entire car, a four-seat version of the 356. The prototype, called the Porsche 530, was rejected because Studebaker wanted a larger car with a larger, front-mounted engine. The revised prototype was called the Porsche 542, or Studebaker Z-87.
The 356 originated as a coupé only, from 1948 to 1955. Over time a variety of other styles appeared, including roadster, convertible, cabriolet, and a very rare split-roof version.
The basic design of the 356 remained the same throughout the end of its lifespan in 1965, with evolutionary, functional improvements rather than annual superficial styling changes.
The car was built with unibody construction, making restoration difficult for cars kept in rust-prone climates. One of the most desirable collector models is the 356 "Speedster", introduced in late 1954 after Max Hoffman advised the company that a lower-cost, somewhat spartan open-top version could sell well in the American market. With its low, raked windscreen (which could be removed for weekend racing), bucket seats, and minimal folding top, the Speedster was an instant hit, especially in Southern California. It was replaced in late 1958 by the "convertible D" model, which featured a taller, more practical windshield (allowing improved headroom with the top erected), roll-up glass side windows, and more comfortable seats. The following year the 356 B "roadster" convertible replaced the D model, but the sports car market's love affair with top-down motoring was fading; soft-top 356 sales declined significantly in the early 1960s.
Cabriolet models (convertibles with a full windshield and padded top) were offered from the start, and in the early 1950s sometimes comprised over 50% of total production. A unique "Karmann hardtop" or "notchback" 356 B model was produced in 1961 and 1962, essentially a cabriolet-style body with a permanent metal roof.
Porsche designers decided to build the 356's air-cooled pushrod OHV flat-four around the engine case they had originally designed for the Volkswagen Beetle. They added new cylinder heads, camshaft, crankshaft, intake and exhaust manifolds and used dual carburetors to more than double the VW's horsepower. While the first prototype 356 had a mid-engine layout, all subsequent 356 engines were rear-mounted. The four-cam "Carrera" engine appeared in late 1955 as an extra cost option on the 356 A, and remained available through the 356 model run.
The 356 has always been popular with the motor press. In 2004, "Sports Car International" ranked the 356 C tenth on their list of top sports cars of the 1960s. It remains a highly regarded collector car, regularly bringing between US$20,000 and well over US$100,000 at auction. The limited production Carrera Speedster (with its special DOHC racing engine), SC, Super 90 and Speedster models are among the most desirable. Multiple restored Carrera variants (of which only about 140 were made) have sold for values in excess of US$800,000, with the vast majority sold for more than US$300,000 at auction.
Thousands of owners worldwide maintain the 356 tradition, preserving their cars and driving them regularly. The U.S.-based 356 Registry's website calls it the "[w]orld's largest classic Porsche club."
The 356 Speedster is among the most frequently reproduced classic automobiles.
Several companies build near-exact replicas from the ground up, fabricating turn-key cars to the buyer's exact specifications.
The Porsche 356, close to stock or highly modified, has enjoyed much success in rallying and car racing events.
Several Porsche 356s were stripped down in weight, and were modified in order to have better performance and handling for these races. A few notable examples include the Porsche 356 SL, and the Porsche 356 A Carrera GT.
In the early 1960s Porsche collaborated with Abarth and built the Porsche 356 B Carrera GTL Abarth coupé, which enjoyed some success in motor sports.
Number 53456, the first 356 Carrera ever produced (a modified example dated 3 May 1955, whose first owner was Porsche engineer Reinhard Schmidt), was analyzed in February 2018 by "Quattroruote"'s subsidiary "Ruoteclassiche", which estimated its value at about €335,000.
Pedro Martínez
Pedro Jaime Martínez (born October 25, 1971) is a Dominican former professional baseball starting pitcher who played in Major League Baseball (MLB) from 1992 to 2009 for five teams—most notably the Boston Red Sox, from 1998 to 2004.
At the time of his retirement as an active player, his career record of 219 wins and 100 losses placed him fourth-highest in winning percentage in MLB history, the highest such mark by a right-hander since the start of the modern pitching era. Martínez ended his career with an earned run average (ERA) of 2.93, the sixth-lowest by a pitcher with at least 2,500 innings pitched since 1920. He reached the 3,000 strikeout mark in fewer innings than any pitcher except Randy Johnson, and is the only pitcher to compile over 3,000 career strikeouts with fewer than 3,000 innings pitched; Martínez's career strikeout rate of 10.04 per 9 innings trails only Johnson (10.61) among pitchers with over 1,500 innings.
An eight-time All-Star, Martínez was at his peak from 1997 to 2003, establishing himself as one of the most dominant pitchers in baseball history. He won three Cy Young Awards (1997, 1999, 2000) and was runner-up twice (1998, 2002), posting a cumulative record of 118–36 (.766) with a 2.20 ERA, while leading his league in ERA five times and in winning percentage and strikeouts three times each. In 1999, Martínez was runner-up for the American League (AL) Most Valuable Player Award, after winning the pitching Triple Crown with a 23–4 record, 2.07 ERA, and 313 strikeouts, and—along with Johnson—joined Gaylord Perry in the rare feat of winning the Cy Young Award in both the American and National Leagues (a feat since accomplished by Roger Clemens, Roy Halladay, and Max Scherzer). He is also the record holder for the lowest single-season WHIP in major league history (0.737, in 2000), and for the lowest single-season Fielding Independent Pitching (FIP) in the live-ball era (1.39, in 1999). Although his performance suffered a steep decline in 2004, Martínez ended the season memorably, helping the Red Sox win their first World Series title in 86 years.
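For readers unfamiliar with the rate statistics cited above, a minimal Python sketch of the standard definitions, checked against the career record stated in this article (the only input figures taken from the text are the 219 wins and 100 losses; everything else is just the conventional formula):

```python
# Standard pitching rate statistics, as conventionally defined.

def winning_percentage(wins: int, losses: int) -> float:
    """Wins divided by total decisions."""
    return wins / (wins + losses)

def whip(walks: int, hits: int, innings: float) -> float:
    """Walks plus hits per inning pitched."""
    return (walks + hits) / innings

def k_per_9(strikeouts: int, innings: float) -> float:
    """Strikeouts per nine innings pitched."""
    return 9 * strikeouts / innings

# Martinez's career record of 219-100, as stated above:
print(round(winning_percentage(219, 100), 3))  # 0.687

# Note: any K/9 rate above 9.0 implies more strikeouts than innings
# pitched, which is how a 10.04 career rate can produce 3,000+
# strikeouts in fewer than 3,000 innings.
```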
Officially listed at 5 ft 11 in (180 cm) and 170 lb (77 kg), Martínez was unusually small for a modern-day power pitcher, and is believed to have been somewhat smaller than his officially listed height and weight. In his early 30s, injuries began to keep him off the field to an increasing extent, and his appearances and success dropped off sharply in his final seasons. Modern sabermetric analysis has strongly highlighted Martínez's achievements: his WHIP is the lowest of any live-ball era starting pitcher, his adjusted ERA+ is the best of any starting pitcher in major league history, and he has the third-highest strikeout-to-walk ratio in modern history. He dominated while pitching most often in a hitter-friendly ballpark and facing some of the toughest competition of the steroid era, which is generally thought to have favored batters. His dominance, reflected by modern statistics, has led many to consider Martínez one of the greatest pitchers in MLB history. He was elected to the Baseball Hall of Fame in 2015, in his first year of eligibility, joining Juan Marichal as the second Dominican to be enshrined; his number (45) was retired by the Red Sox in a ceremony held two days after his Hall induction.
Martínez grew up in the Dominican Republic in the Santo Domingo suburb of Manoguayabo. He was the fifth of six siblings living in a palm wood house with a tin roof and dirt floors. His father, Pablo Jaime Abreu, worked odd jobs. His mother, Leopoldina Martínez, worked for traditionally wealthy families, washing their clothes. When Pedro was old enough to work, he held a job as a mechanic.
He did not have enough money to afford baseballs, so he improvised with oranges. His older brother, Ramón Martínez, was pitching at a Los Angeles Dodgers baseball camp in the Dominican Republic. As a young teenager, Martínez carried his brother's bags at the camp. One day at the camp, Ramón Martínez clocked his 14-year-old brother's pitches at between 78 and 80 miles per hour.
Martínez debuted professionally with the Tigres del Licey of the Dominican Winter League during the 1989-90 season. He then pitched briefly for the Azucareros del Este, before rejoining Licey in 1991-92 in a nine-player transaction that included George Bell, José Offerman and Julio Solano, among others.
Martínez was originally signed by the Dodgers as an amateur free agent in 1988. After pitching in the Dodgers farm system for several years, he made his MLB debut on September 24, 1992 for the Dodgers against the Cincinnati Reds, working two scoreless innings of relief. He made his first start for the Dodgers on September 30, taking the loss while giving up two runs in a 3–1 loss to the Reds.
Although Pedro's brother Ramón, then a star pitcher for the Dodgers, declared that his brother was an even better pitcher than he, the younger Martínez was thought by manager Tommy Lasorda too small to be an effective starting pitcher at the MLB level; Lasorda used Pedro Martínez almost exclusively as a relief pitcher. Lasorda was not the first to question Martínez's stature and durability; in the minor leagues, the then-135-pound pitcher was threatened with a $500 fine if he was caught running. Martínez turned in a strong 1993 season as the Dodgers' setup man, going 10–5 with a 2.61 ERA and 119 strikeouts, in 65 games; his 107 innings led all NL relievers. With the Dodgers in need of a second baseman after a contract dispute with Jody Reed, Martínez was traded to the Montreal Expos for Delino DeShields before the 1994 season.
It was with the Expos that Martínez developed into one of the top pitchers in baseball. Despite possessing a live fastball, he had difficulty maintaining control. It was during a bullpen session that manager Felipe Alou encouraged him to modify his primary grip on the fastball from two-seam to four-seam. The transformation was dramatic: the fastball − already among the fastest in the game − now was thrown with near-impeccable control and break that routinely overwhelmed hitters. On April 13, 1994, Martínez took a perfect game through 7 innings until throwing a brushback pitch at Reggie Sanders that led Sanders to immediately charge the mound, starting a bench-clearing brawl. Martínez ended up with a no-decision in the game, which the Expos eventually won 3–2.
On June 3, 1995, Martínez pitched nine perfect innings in a game against the San Diego Padres, before giving up a hit in the bottom of the 10th inning. He was immediately removed from the game, and was the winning pitcher in Montreal's 1–0 victory. [See "Memorable Games"]
In 1996, during a game against the Philadelphia Phillies, Mike Williams attempted to hit Martínez with retaliatory pitches for an earlier hit batter but failed with two consecutive attempts. After the second attempt, Martínez charged the mound, and started a bench-clearing fight.
In 1997, Martínez posted a 17–8 record for the Expos, and led the league in half a dozen pitching categories, including a 1.90 ERA, 305 strikeouts and 13 complete games pitched, while becoming the only Expo ever to win the National League Cy Young Award. The 13 complete games were tied for the second-highest single-season total in the modern era of baseball since Martínez's career began (Curt Schilling had 15 in 1998; Chuck Finley and Jack McDowell also reached 13 in a year). However, this 1997 total is by far the highest in Martínez's career, as he only completed more than 5 games in one other season (7, in 2000). Martínez was the first right-handed pitcher to reach 300 strikeouts with an ERA under 2.00 since Walter Johnson in 1912.
Approaching free agency, Martínez was traded to the Boston Red Sox in November 1997 for Carl Pavano and Tony Armas, Jr., and was soon signed to a six-year, $75 million contract (with an option for a seventh at $17 million) by Red Sox general manager Dan Duquette, at the time the largest ever awarded to a pitcher. Martínez paid immediate dividends in 1998, with a 19–7 record, and finishing second in the American League in ERA, WHIP, strikeouts, and the Cy Young voting.
In 1999, Martínez finished 23–4 with a 2.07 ERA and 313 strikeouts (earning the pitching Triple Crown) in 31 games (29 starts), pitching 213⅓ innings. He led the entire major leagues with K/9 and K/BB ratios of 13.20 and 8.46, and his Fielding Independent Pitching (FIP), a defense-independent statistic measuring a pitcher's effectiveness at limiting walks and home runs while accumulating strikeouts, was 1.39, the lowest in modern major league history and the third-lowest ever, behind Christy Mathewson in 1908 and Walter Johnson in 1910 (by comparison, the next-best FIP in baseball that season was Randy Johnson's 2.76, and no one else in the American League had an FIP below 3.25). Martínez unanimously won his second Cy Young Award (this time in the American League) and came in second in the Most Valuable Player (MVP) balloting. The MVP result was controversial, as Martínez received the most first-place votes of any player (8 of 28), but was omitted from the ballots of two sportswriters, New York's George King and Minneapolis' LaVelle Neal. The two writers argued that pitchers were not sufficiently all-around players to be considered. (However, King had given MVP votes to two pitchers just the season before, Rick Helling and David Wells; he was the only writer to cast a vote for Helling, who had gone 20–7 with a 4.41 ERA and 164 strikeouts.) MVP ballots have ten ranked slots, and sportswriters are traditionally asked to recuse themselves if they feel they cannot vote for a pitcher. "It really made us all look very dumb", said Buster Olney, then a sportswriter for "The New York Times". "People were operating under different rules. The question of eligibility is a very basic thing. People were determining eligibility for themselves." "The Times" does not permit its writers to participate in award voting. Martínez finished second to Texas Rangers catcher Iván Rodríguez by a margin of 252 points to 239; Rodríguez had been included on all 28 ballots. When asked about the result by WEEI-FM radio in January 2012, Martínez said, "I'm not afraid to say that the way that George King and Mr. LaVelle Neal III went about it was unprofessional."
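As a quick sanity check, the 1999 rate stats above can be reproduced from the raw totals quoted in the same paragraph. The walk total below is not stated in the text but is implied by the K/BB ratio, and the FIP constant is the conventional, season-dependent one rather than a figure from this article:

```python
# 1999 season totals as stated above: 313 strikeouts in 213-1/3 innings.
innings = 213 + 1/3
strikeouts = 313

# Strikeouts per nine innings:
print(round(9 * strikeouts / innings, 2))  # 13.2, the league-leading K/9 quoted above

# Walk total implied by the stated 8.46 K/BB ratio (an inference, not a quoted stat):
print(round(strikeouts / 8.46))  # 37

# FIP, mentioned above, is conventionally computed as
#     (13*HR + 3*(BB + HBP) - 2*K) / IP + C
# where C is a season-specific constant (roughly 3.1) chosen so that
# league-average FIP matches league-average ERA.
```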
In 1999, Martínez became just the 9th modern pitcher to record a second 300-strikeout season, along with Nolan Ryan (6 times), Randy Johnson (his third time in 1999, and three more times since), Sandy Koufax (3 times), Rube Waddell, Walter Johnson, Sam McDowell, J. R. Richard, Steve Carlton, and Curt Schilling; Schilling would later add a third 300-strikeout season. An anomaly in power-pitching annals, Martínez is the only 20th-century pitcher to notch 300 strikeouts in a season without being at least six feet tall. He was not afraid to pitch inside: on May 1, 1999, against Oakland, he came inside on Olmedo Saenz, hitting him, after Saenz had earlier hit a three-run homer off him. Questioned after the game, he said, "I have no reason to hit [Saenz], but believe me ... if you get fresh with me or do something to show me up, I'll drill your ass."
Between August 1999 and April 2000, Martínez had ten consecutive starts with 10 or more strikeouts. Only three pitchers have had as many as seven such starts in a row, and one of those was Martínez himself, in April–May 1999. He averaged more than 15 strikeouts per nine innings during his record 10-game streak. During the 1999 season, he set the record for most consecutive innings pitched with a strikeout, with 40. For his career, Martínez has compiled 15 or more strikeouts in a game ten times, which is tied with Roger Clemens for the third-most 15-K games in history. (Nolan Ryan had 27, and Randy Johnson had 29.)
Martínez was named the AL Pitcher of the Month in April, May, June and September 1999 – 4 times in a single season. Martínez punctuated his dominance in the 1999 All-Star Game start at Fenway Park, when he struck out Barry Larkin, Larry Walker, Sammy Sosa, Mark McGwire and Jeff Bagwell in two innings. It was the first time any pitcher struck out the side to start an All-Star Game, and the performance earned Martínez the All-Star Game MVP award. Martínez later said that the 1999 All-Star break was especially memorable for him because he was able to meet the members of the MLB All-Century Team and get an autograph from Ted Williams.
Martínez was a focal point of the 1999 playoffs against the Cleveland Indians. Starting the series opener, he was forced out of the game after 4 shutout innings due to a strained back with the Red Sox up 2–0. The Red Sox, however, lost the game 3–2. Boston won the next two games to tie the series, but Martínez was still too injured to start the fifth and final game. However, neither team's starters were effective, and the game became a slugfest, tied at 8–8 at the end of 3 innings. Martínez entered the game as an emergency relief option. Unexpectedly, Martínez neutralized the Cleveland lineup with six no-hit innings for the win. He struck out eight and walked three, despite not being able to throw either his fastball or changeup with any command. Relying totally on his curve, Martínez and the Red Sox won the deciding game 12–8.
In the American League Championship Series, Martínez pitched seven shutout innings to beat Red Sox nemesis Roger Clemens and the New York Yankees in Game 3, handing the World Champions their only loss of the 1999 postseason.
Following up on 1999, Martínez had perhaps his best year in 2000, posting an exceptional 1.74 ERA, the AL's lowest since 1978, while winning his third Cy Young Award. His ERA was about a third of the park-adjusted league ERA (4.97); no other single season by a starting pitcher has had such a large differential. Roger Clemens' 3.70 was the second-lowest ERA in the AL, but was still more than double that of Martínez. Martínez also set a record in the lesser-known sabermetric statistic of weighted runs allowed per 9 innings pitched (Wtd. RA/9), posting a remarkably low 1.55. He gave up only 128 hits in 217 innings, an average of just 5.31 hits allowed per 9 innings pitched: the third-lowest mark on record.
Martínez's record was 18–6, but could have been even better. In his six losses, Martínez had 60 strikeouts, 8 walks, and 30 hits allowed in 48 innings, with a 2.44 ERA and an 0.79 WHIP, while averaging 8 innings per start. Martínez's ERA in his losing games was less than the leading ERA total in the lower-scoring National League (Kevin Brown's 2.58). The Yankees' Andy Pettitte outdueled Martínez twice; Martínez's other four losses were each by one run. Martínez's first loss of the year was a 1–0 complete game in which he had 17 strikeouts and 1 walk. All of Martínez's losses were quality starts, and he pitched 8 or more innings in all but one of his losses. Martínez received 2 runs or fewer of run support in 10 of his starts (over one third of his starts), in which his ERA was a minuscule 1.25 with 4 complete games and 2 shutouts, but his win-loss record was 4–5.
Martínez's WHIP in 2000 was 0.74, breaking both the 87-year-old modern Major League record set by Walter Johnson, as well as Guy Hecker's mark of 0.77 in 1882. The American League slugged just .259 against him. Hitters also had a .167 batting average and .213 on-base percentage, setting two more modern era records. Martínez became the only starting pitcher in history to have more than twice as many strikeouts in a season (284) as hits allowed (128). Martínez also set an American League record in K/BB, with a ratio of 8.88, surpassing the previous record set by Martínez in 1999 of 8.46.
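The 2000 figures scattered across these paragraphs are mutually consistent, as a few lines of Python confirm; the walk total is again inferred from the stated strikeouts and K/BB ratio rather than quoted directly:

```python
# 2000 season totals as stated above.
innings = 217
hits = 128
strikeouts = 284

# Hits allowed per nine innings:
print(round(9 * hits / innings, 2))  # 5.31, as quoted earlier

# Walks implied by the record 8.88 K/BB ratio (an inference, not a quoted total):
walks = round(strikeouts / 8.88)  # 32

# WHIP = (walks + hits) / innings pitched:
print(round((walks + hits) / innings, 3))  # 0.737, the modern-record WHIP
```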
When opposing teams had runners in scoring position, however, Martínez was even stingier. There were 138 such plate appearances against Martínez in 2000, in which opponents batted .133 with a .188 on-base percentage. Martínez struck out 58 while walking six, and allowed 17 hits.
On May 6 of that 2000 season, Martínez struck out 17 Tampa Bay Devil Rays in a 1–0 loss. In his next start six days later, he struck out 15 Baltimore Orioles in a 9–0, two-hit victory. The 32 strikeouts tied Luis Tiant's 32-year American League record for most strikeouts over two games.
In the span of 1999 and 2000, Martínez allowed 288 hits and 69 walks in 430 innings, with 597 strikeouts, an 0.83 WHIP, and a 1.90 ERA. Some statisticians believe that in the circumstances — with hitter-friendly Fenway Park as his home field, in a league with a designated hitter, during the highest offensive period in baseball history — this performance represents the peak for any pitcher in baseball history.
Though he continued his dominance when healthy, carrying a sub-2.00 ERA to the midpoint of the following season, Martínez spent much of 2001 on the disabled list with a rotator cuff injury as the Red Sox slumped to a poor finish. Martínez finished with a 7–3 record, a 2.39 ERA, and 163 strikeouts, but only threw 116 innings.
Healthy in 2002, he rebounded to lead the league with a 2.26 ERA, 0.923 WHIP and 239 strikeouts, while going 20–4. However, that season's American League Cy Young Award narrowly went to 23-game winner Barry Zito of the Oakland A's, despite Zito's higher ERA, higher WHIP, fewer strikeouts, and lower winning percentage. Martínez became the first pitcher since the introduction of the Cy Young Award to lead his league in each of those four statistics, yet not win the award.
Martínez's record was 14–4 in 2003. He led the league in ERA for the fifth time with 2.22, also led in WHIP for the fifth time at 1.04, and finished second to league leader Esteban Loaiza by a single strikeout. Martínez came in third for the 2003 Cy Young Award, which went to Toronto's Roy Halladay.
Martínez went 16–9 in 2004, despite an uncharacteristic 3.90 ERA, as the Red Sox won the American League wild card berth. He pitched effectively in the playoffs, contributing to the team's first World Series win in 86 years. Martínez again finished second in AL strikeouts, and was fourth in that winter's Cy Young voting.
The seven-year contract he received from the Red Sox had been considered a huge risk in the 1997 off-season, but Martínez had rewarded the team's hopes with two Cy Young Awards, and six Top-4 finishes. Martínez finished his Red Sox career with a 117–37 record, the highest winning percentage any pitcher has had with any team in baseball history.
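That Red Sox winning percentage follows directly from the record quoted above (reusing the winning_percentage helper sketched earlier, or inline):

```python
# Red Sox career record of 117-37, as stated above:
print(round(117 / (117 + 37), 3))  # 0.76
```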
After Boston's World Series triumph in 2004, Martínez became a free agent and signed a 4-year, $53 million contract with the New York Mets. In 2005, his first season as a Met, Martínez posted a 15–8 record with a 2.82 ERA, 208 strikeouts, and a league-leading 0.95 WHIP. It was his sixth league WHIP title, and the fifth time that he led the Major Leagues in the category. Opponents batted .204 against him.
Martínez started the 2006 season at the top of his game. At the end of May, he was 5–1 with a 2.50 ERA, with 88 strikeouts and 17 walks and 44 hits allowed in 76 innings; Martínez's record was worse than it could have been, with the Mets bullpen costing him two victories. However, during his May 26 start against the Florida Marlins, Martínez was instructed by the umpires to change his undershirt. He slipped in the corridor, injuring his hip, and his promising season curdled. The effect was not immediately apparent; although Martínez lost the Marlins game, his following start was a scintillating 0–0 duel with Arizona's Brandon Webb. But after that, beginning on June 6, Martínez went 4–7 with a 7.10 ERA in a series of spotty starts interrupted twice by stays on the disabled list. A right calf injury plagued him for the last two months of the season. After Martínez was removed from an ineffective September 15 outing, television cameras found him in the Mets dugout, apparently crying. Subsequent MRI exams revealed a torn muscle in Martínez's left calf, and a torn rotator cuff. Martínez underwent surgery which sidelined him for most of the 2007 season.
On November 3, 2006, Martínez stated that if he could not return to full strength, he might end up retiring after the 2007 season. "It's getting better, and progress is above all what is hoped for", Martínez told the Associated Press. "To go back, I have to recover, I have to be healthy. But if God doesn't want that, then I would have to think about giving it all up." Martínez added, "It's going to be a bitter winter because I am going to have to do a lot of work. The pain I feel was one of the worst I have felt with any injury in my career." But by December 30, 2006, Martínez was more optimistic: "The progress has been excellent. I don't have problems anymore with my reach or flexibility, and so far everything is going very well. The problem has to do with the calcification of the bone that was broken with the tear, and that had to be operated on. You have to let it run its course." Martínez also reported bulking up as part of his recuperative regimen: "I've put on about 10 pounds of muscle, because that's one of our strategies."
On September 3, 2007, Martínez returned from the disabled list with his 207th career win, allowing two earned runs in five efficient innings and collecting his 3000th career strikeout, becoming the 15th pitcher to do so. "I thought I was going to have butterflies and like that", said Martínez, "but I guess I'm too old." Martínez's comeback was considered a great success, as the right-hander went 3–1 in five starts with a 2.57 ERA. But his last start was a crucial 3–0 loss to St. Louis in the final week of the 2007 Mets' historic collapse; Martínez provided a good pitching performance (7 IP, 2 ER, 7 H, 1 BB, 8 K) but his teammates failed to score.
Martínez became just the fourth pitcher to reach 3,000 strikeouts with fewer than 1,000 walks (in Martínez's case, 701). Ferguson Jenkins, Greg Maddux and Curt Schilling had previously done likewise. Martínez also joined Nolan Ryan and Randy Johnson to become the third 3,000-strikeout pitcher to have more strikeouts than innings pitched, and is also the first Latin American pitcher to have 3,000 strikeouts.
His unexpectedly strong finish in 2007 raised hopes, but 2008 was a lost season for Martínez. He was injured just four innings into his first game of the season, an April 1 no-decision against the Florida Marlins. He later told reporters he'd felt a "pop" in his left leg. Martínez was diagnosed with a strained hamstring and did not return to action for more than two months. Following his return, his fastball typically topped out in the 90–91 mph range, a lower velocity than he'd had during his prime but slightly higher than in recent seasons. Martínez finished the season on a low note, losing all three of his decisions in September en route to a 5–6 record, the first losing record of his career. (Martínez was 0–1 in two appearances in 1992.) His 5.61 ERA and 1.57 WHIP were also Martínez's worst ever, and for the first time in his career, he failed to strike out at least twice as many batters as he walked (87–44).
During his four-year Met contract, Martínez was 32–23 in 79 starts, with a 3.88 ERA and a 1.16 WHIP.
A free agent, Martínez did not sign with a major league team during the winter. In March, he joined the Dominican Republic's squad for the 2009 World Baseball Classic, in an attempt to showcase his arm. Martínez pitched six scoreless innings with 6 strikeouts and no walks, but the team was quickly eliminated from the tournament and no MLB contract was forthcoming. In July 2009, Phillies scouts evaluated Martínez in two simulated games against the Phillies DSL team, leading to a one-year, $1-million contract. Martínez told reporters, "I would just like to be the backup. If I could be the backup, that would be a great thing to have—a healthy Pedro behind everybody else, in case something happens. That would be a great feeling to have on a team, eh?"
Replacing Jamie Moyer as a starter in the Phillies rotation on August 12, Martínez won his 2009 debut. In his return to New York on August 23, Martínez's win against the Mets was preserved by a rare unassisted triple play by second baseman Eric Bruntlett in the bottom of the ninth inning. With his win on September 3—his third as a Philadelphia Phillie and his 100th as a National Leaguer—Martínez became the 10th pitcher in history to win at least 100 games in each league. On September 13, Martínez pitched eight innings to beat the Mets again, by a final score of 1–0. His 130 pitches were the most he had thrown in a game since the ALDS in October 2003. Philadelphia won each of Martínez's first seven starts, the first time in franchise history that this had occurred with any debuting Phillies pitcher. In the NLCS against the Los Angeles Dodgers, he pitched seven shutout innings while allowing just two hits, but the Philadelphia bullpen faltered in the following inning, costing Martínez the win.
Intense media interest preceded Martínez's "return to Yankee Stadium" in Game 2 of the World Series. At the pre-game press conference, he seemed to relish the attention, telling reporters, "When you have 60,000 people chanting your name, waiting for you to throw the ball, you have to consider yourself someone special, someone that really has a purpose out there." Martínez pitched effectively in his second-ever World Series start, but left the game in the 7th inning trailing, 2–1, and wound up taking the loss. Before his second start of the Series, Martínez called himself and opposing pitcher Andy Pettitte "old goats", and acknowledged that Red Sox fans were rooting for him: "I know that they don't like the Yankees to win, not even in Nintendo games." However, Martínez allowed 4 runs in 4 innings, falling to 0–2 as the Phillies lost the sixth game and the 2009 World Series to the New York Yankees.
Following the Series, Martínez announced that he had no intention of retiring, but the 2010 season came and went without his signing with a team. Media reports surfaced that the Phillies had been discussing a deal to bring Martínez back for another half-season, but Martínez's agent announced in July that he would not be pitching at all in 2010, while remaining interested in a 2011 return. In December 2010, Martínez told a reporter for "El Día" "I'm realizing what it is to be a normal person. ... It's most likely that I don't return to active baseball ... but honestly I don't know if I'll definitively announce my retirement." The pitcher received some initial inquiries during the winter, but did not sign with any team for 2011. On December 4, 2011, he officially announced his retirement.
In December 2009, "Sports Illustrated" named Martínez as one of the five pitchers in the starting rotation of its MLB All-Decade Team. In February 2011, the Smithsonian's National Portrait Gallery announced that it had acquired an oil painting of Martínez for its collection.
On January 24, 2013, Martínez joined the Boston Red Sox as a special assistant to general manager Ben Cherington.
Martínez was elected to the National Baseball Hall of Fame in January 2015 with 91.1% of the vote. His Hall of Fame plaque has him wearing a Boston Red Sox cap. "I cannot be any prouder to take Red Sox Nation to the Hall of Fame with the logo on my plaque", Martínez said in a statement. "I am extremely proud to represent Boston and all of New England with my Hall of Fame career. I'm grateful to all of the teams for which I played, and especially fans, for making this amazing honor come true."
In 2015, Martínez was hired by the MLB Network as a studio analyst and also released an autobiography, "Pedro", which he coauthored with Michael Silverman of the "Boston Herald". Reflecting on his career, he named Barry Bonds, Edgar Martínez, Derek Jeter, Kenny Lofton and Ichiro Suzuki as the most difficult hitters he had to face.
On June 22, 2015, it was announced that Martínez' number 45 would be retired by the Red Sox on July 28, two days after his Hall of Fame induction. Red Sox principal owner John Henry stated, "to be elected into the Baseball Hall of Fame upon his first year of eligibility speaks volumes regarding Pedro's outstanding career, and is a testament to the respect and admiration so many in baseball have for him."
On February 1, 2018, Martínez was announced as part of the 2018 induction class for the Canadian Baseball Hall of Fame.
Martínez is a studio analyst for MLB on TBS postseason coverage, alongside Gary Sheffield, Jimmy Rollins, and Casey Stern.
On April 13, 1994, in his second start as a Montreal Expo, Martínez lost a perfect game with one out in the eighth inning when he hit Cincinnati's Reggie Sanders with a pitch. An angered Sanders charged the mound, and threw Martínez to the ground, before both teams cleared the benches and broke up any potential fight. Sanders was later ridiculed in the press for assuming that a pitcher would abandon a perfect game in order to hit a batter intentionally. Martínez allowed a leadoff single in the ninth inning, breaking up his no-hitter, and was removed for reliever John Wetteland (who loaded the bases, then allowed two sacrifice flies, thus saddling Martínez with a no-decision). Three years later, in 1997, Martínez had a one-hitter against the Reds; the one hit came in the 5th inning.
On June 3, 1995, while pitching for Montreal, he retired the first 27 Padres hitters he faced. However, the score was still tied 0–0 at that point and the game went into extra innings. The Expos scored a run in the top of the 10th, but Martínez surrendered a double to the 28th batter he faced, Bip Roberts. Expos manager Felipe Alou then removed Martínez from the game, bringing in reliever Mel Rojas, who retired the next three batters. Martínez officially recorded neither a perfect game nor a no-hitter. Until 1991, the rules would have judged it differently; however, a rule clarification specified that perfect games, even beyond nine innings, must remain perfect until the game is completed for them to be considered perfect. This retroactively decertified many no-hit games, including Ernie Shore's perfect relief stint in 1917 and Harvey Haddix's legendary 12 perfect innings in 1959 (lost in the 13th).
Martínez was selected as the starting pitcher for the American League All-Star team in 1999. The game, on July 13, 1999, was at Fenway Park, Martínez's home field. Martínez struck out Barry Larkin, Larry Walker, and Sammy Sosa consecutively in the first inning. He then struck out Mark McGwire leading off the 2nd, becoming the first pitcher to begin an All-Star game by striking out the first four batters. (The National League's Brad Penny matched the feat in 2006.) The next batter, Matt Williams, managed to reach first base from an error by Roberto Alomar. Martínez then proceeded to strike out Jeff Bagwell while Williams was caught stealing.
Martínez again came close to a perfect game on September 10, 1999, when he beat the New York Yankees, 3–1. He faced just 28 batters while striking out 17 and walking none (Martínez hit the Yankees' first batter, Chuck Knoblauch, who was then caught stealing). Only a solo home run by Chili Davis separated Martínez from a no-hitter. The Davis home run came in the second inning, eliminating any suspense, but sportswriter Thomas Boswell called it the best game ever pitched at Yankee Stadium. Martínez not only retired the last 22 batters in a row, but against his final 11 batters he threw 53 consecutive pitches without allowing a base runner and without a single ball being put in play (nine strikeouts and two foul pop-fly outs).
On October 11, 1999, in Game 5 of the ALDS, Charles Nagy started for Cleveland and Bret Saberhagen for Boston, both on only three days' rest. Boston jumped out to a quick two-run lead in the top of the first inning, but Cleveland responded with three runs of their own in the bottom half of the inning. The hitting continued, knocking Saberhagen out of the game in the second inning after he had allowed five runs, and then Nagy out after just three innings, having allowed eight runs. Going into the fourth inning, manager Jimy Williams opted to replace Derek Lowe with the ailing Pedro Martínez, who had left Game 1 with a back injury. The decision proved wise, as Martínez threw six hitless innings in relief to win the game and clinch the ALDS.
Game 3 of the American League Championship Series was the long-anticipated matchup between Pedro Martínez and Roger Clemens. The Red Sox scored first: after a leadoff triple by Offerman, Valentin homered to put the Red Sox ahead 2–0. The onslaught continued as the Red Sox scored in all but two innings. Clemens was done in the third inning, and the Red Sox went on to win 13–1 and take a two-games-to-one series lead. When Clemens was knocked out, Red Sox fans chanted "Where is Roger?" followed by the response chant "In the Shower". Martínez struck out 12 Yankees over seven scoreless innings, allowing just two hits, and handed the defending World Champions their only loss of the 1999 postseason. Martínez finished 1999 with a streak of 17 scoreless innings in the playoffs.
On May 28, 2000, Martínez and Roger Clemens had a dramatic duel on ESPN's "Sunday Night Baseball" telecast. Both pitchers excelled, combining to allow only 9 hits and 1 walk while striking out 22. A scoreless game was finally broken up in the 9th inning by Trot Nixon's home run off Clemens. In the bottom of the ninth, the Yankees loaded the bases against a tiring Martínez, but New York could not score, as Martínez completed the shutout.
On August 29, 2000, Martínez took a no-hitter into the 9th inning against the Tampa Bay Devil Rays, losing it on a leadoff single by John Flaherty. Martínez had begun the night by hitting the leadoff batter, Gerald Williams, in the hand. Williams started towards first base before charging the mound and knocking down Martínez; in the scrum, Williams was tackled by Boston catcher Jason Varitek. Martínez then retired the next 24 hitters in a row until Flaherty's single, and finished with a one-hitter. He had 13 strikeouts and no walks in the game; had he not hit the leadoff batter, Flaherty's single would have broken up a perfect game. Martínez never threw an official no-hitter, but he professed a lack of interest in the matter: "I think my career is more interesting than one game."
In the testy Game 3 of the 2003 ALCS, after allowing single runs in the 2nd, 3rd, and 4th innings, Martínez hit Yankees right fielder Karim García near the shoulders with a pitch, sparking a shouting match between Martínez and the New York bench. Directing his attention at Yankees catcher Jorge Posada, Martínez jabbed a finger into the side of his own head, which some, including an enraged Yankees bench coach Don Zimmer, interpreted as a threatened beanball. Emotions remained high in the bottom of the inning, which was led off by Boston slugger Manny Ramírez. Ramírez became irate over a high pitch from Roger Clemens, and both benches cleared. During the ensuing commotion, the 72-year-old Zimmer ran onto the field and charged straight at Martínez; as he approached, Martínez threw him to the ground. Later, Martínez claimed that he was not indicating that he would hit Posada in the head, but that he would remember what Posada was saying to him. In 2009, Martínez stated that he regretted the incident but denied being at fault; Zimmer did not give much credence to his statements. Martínez wrote in 2015 that the altercation with Zimmer was the only regret of his entire career.
Martínez was also on the mound for Game 7 of the 2003 ALCS against the Yankees. With the Red Sox ahead 5–2 at the start of the 8th inning, a tiring Martínez pitched his way into trouble. He was visited on the mound by manager Grady Little but, in a controversial non-move, was left in to pitch. The Yankees tied the score against Martínez that inning on four successive hits, leading to a dramatic extra-inning, series-ending victory for New York. The loss cost Grady Little his job with the Red Sox, as his contract was not renewed.
After a comparatively lackluster season in 2004 (though still a solid season by general standards), Pedro Martínez got the win in Game 3 of the World Series. He shut out the St. Louis Cardinals through seven innings, recording his final 14 outs consecutively in what would turn out to be his last game for Boston.
With the Mets, on April 10, 2005, at Turner Field, Martínez outdueled John Smoltz, pitching a two-hit, one-run, complete game en route to his first Mets victory. On August 14, 2005, against the Dodgers, he pitched 7 hitless innings, but ended up losing the no-hitter and the game.
In June 2006, the Mets played an interleague series against the Red Sox, Martínez's first appearance at Fenway Park since leaving the team. The Red Sox gave their former ace a two-minute video tribute on June 27, but showed no courtesies to Martínez the following night. In his June 28, 2006 start, Martínez lasted only 3 innings and was rocked for 8 runs (6 earned) on 7 hits, losing his worst game as a Met just before going on the disabled list. It was Martínez's only career appearance against the Red Sox, the only Major League team against which he did not record a victory.
In both the 2004 ALCS and the 2009 World Series, Yankees fans greeted Martínez with the chant "Who's your daddy?" whenever he pitched, a reference to his statement earlier in 2004, after a loss to New York: "I mean what can I say? Just tip my hat and call the Yankees my daddies."
Martínez threw five pitches: a four-seam fastball, a power curveball, a cutter, a two-seam fastball, and a circle changeup, all well above average; combined with his historically excellent control, they proved to be an overpowering package. Martínez threw from a low three-quarters position (nearly sidearm) that hid the ball very well from batters, who remarked on the difficulty of picking up his delivery. His three fastballs behaved differently - a straight, high-velocity four-seamer that he used to overpower hitters, a two-seamer that ran to his throwing-arm side, and a cut fastball that ran away from his throwing-arm side - each thrown with the pinpoint control that defined him.
Early in his career, Martínez's fastball was consistently clocked in the 95–98 mph range. Using it in combination with his devastating changeup and occasionally mixing in his curveball, he was as dominant a pitcher as the game has ever seen. "Sports Illustrated"s Joe Posnanski wrote, "There has never been a pitcher in baseball history—not Walter Johnson, not Lefty Grove, not Sandy Koufax, not Tom Seaver, not Roger Clemens—who was more overwhelming than the young Pedro."
As injuries and the aging process took their toll, Martínez made the adjustment to rely more on finesse than power. His fastball settled into the 85–88 mph range, although he was occasionally able to reach 90–91 mph when the need arose. Martínez continued to use a curveball, a circle changeup, and an occasional slider. With his command of the strike zone, he remained an effective strikeout pitcher despite the drop in velocity. Baseball historian Bill James described Martínez as being substantially more effective than his pitching peers due to his variety of pitches, pitch speeds, pinpoint control, and numerous modes of deception.
Martínez is married to former ESPN Deportes sideline reporter Carolina Cruz de Martínez, who now runs his charitable organization, the Pedro Martínez and Brothers Foundation. He has four children. One son, Pedro Martínez Jr., signed with the Detroit Tigers as an international free agent in September 2017. Another son, Pedro Isaías Martínez, signed to play at Nova Southeastern University in Fort Lauderdale, Florida. His other son, Enyol Martínez, and his daughter, Nayla Martínez, are currently in college. | https://en.wikipedia.org/wiki?curid=24630 |
Picts
The Picts were a confederation of Celtic-speaking peoples who lived in what is today eastern and northern Scotland during the Late British Iron Age and Early Medieval periods. Where they lived and what their culture was like can be inferred from early medieval texts and Pictish stones. Their Latin name, "Picti", appears in written records from Late Antiquity to the 10th century. They lived to the north of the rivers Forth and Clyde. Early medieval sources report the existence of a distinct Pictish language, which today is believed to have been an Insular Celtic language, closely related to the Brittonic spoken by the Britons who lived to the south.
Picts are assumed to have been the descendants of the Caledonii and other Iron Age tribes that were mentioned by Roman historians or on the world map of Ptolemy. Pictland, also called Pictavia by some sources, achieved a large degree of political unity in the late 7th and early 8th centuries through the expanding kingdom of Fortriu, the Iron Age Verturiones. By the year 900, the resulting Pictish over-kingdom had merged with the Gaelic kingdom of Dál Riata to form the Kingdom of Alba (Scotland); and by the 13th century Alba had expanded to include the formerly Brittonic kingdom of Strathclyde, Northumbrian Lothian, Galloway and the Western Isles.
Pictish society was typical of many Iron Age societies in northern Europe, having "wide connections and parallels" with neighbouring groups. Archaeology gives some impression of the society of the Picts. While very little in the way of Pictish writing has survived, Pictish history since the late 6th century is known from a variety of sources, including Bede's "Historia ecclesiastica gentis Anglorum", saints' lives such as that of Columba by Adomnán, and various Irish annals.
The term "Pict" is thought to have originated as a generic exonym used by the Romans in relation to people living north of the Forth–Clyde isthmus. The Latin word "Picti" first occurs in a panegyric written by Eumenius in AD 297 and is taken to mean "painted or tattooed people" (from Latin "pingere" "to paint"; "pictus", "painted", cf. Greek "πυκτίς" "pyktis", "picture").
"Pict" is "Pettr" in Old Norse, "Peohta" in Old English, "Pecht" in Scots and "Peithwyr" ("pict-men") in Welsh. Some think these words suggest an original Pictish root, instead of a Latin coinage. In writings from Ireland, the name "Cruthin", "Cruthini", "Cruthni", "Cruithni" or "Cruithini" (Modern Irish: "Cruithne") was used to refer both to the Picts and to another group of people who lived alongside the Ulaid in eastern Ulster. It is generally accepted that this is derived from "*Qritani", which is the Goidelic/Q-Celtic version of the Britonnic/P-Celtic "*Pritani". From this came "Britanni", the Roman name for those now called the Britons.
What the Picts called themselves is unknown. It has been proposed that they called themselves "Albidosi", a name found in the Chronicle of the Kings of Alba during the reign of Máel Coluim mac Domnaill, but this idea has been disputed. A unified "Pictish" identity may have consolidated with the Verturian hegemony established following the Battle of Dun Nechtain in 685 AD.
A Pictish confederation was formed in Late Antiquity from a number of tribes—how and why is not known. Some scholars have speculated that it was partly in response to the growth of the Roman Empire. The Pictish Chronicle, the Anglo-Saxon Chronicle and early historiographers such as Bede, Geoffrey of Monmouth and Holinshed all present the Picts as conquerors of Alba from Scythia; however, no credence is now given to that view.
Pictland had previously been described by Roman writers and geographers as the home of the "Caledonii". These Romans also used other names to refer to tribes living in that area, including "Verturiones", "Taexali" and "Venicones". But they may have heard these other names only second- or third-hand, from speakers of Brittonic or Gaulish languages, who may have used different names for the same group or groups.
Pictish recorded history begins in the Early Middle Ages. At that time, the Gaels of Dál Riata controlled what is now Argyll, as part of a kingdom straddling the sea between Britain and Ireland. The Angles of Bernicia, which merged with Deira to form Northumbria, overwhelmed the adjacent British kingdoms, and for much of the 7th century Northumbria was the most powerful kingdom in Britain. The Picts were probably tributary to Northumbria until the reign of Bridei mac Beli, when, in 685, the Anglians suffered a defeat at the Battle of Dun Nechtain that halted their northward expansion. The Northumbrians continued to dominate southern Scotland for the remainder of the Pictish period.
Dál Riata was subject to the Pictish king Óengus mac Fergusa during his reign (729–761), and though it had its own kings from the 760s, it does not appear to have recovered its political independence from the Picts. A later Pictish king, Caustantín mac Fergusa (793–820), placed his son Domnall on the throne of Dál Riata (811–835). Pictish attempts to achieve a similar dominance over the Britons of Alt Clut (Dumbarton) were not successful.
The Viking Age brought great changes in Britain and Ireland, no less in Scotland than elsewhere, with the Vikings conquering and settling the islands and various mainland areas, including Caithness, Sutherland and Galloway. In the middle of the 9th century Ketil Flatnose is said to have founded the Kingdom of the Isles, governing many of these territories, and by the end of that century the Vikings had destroyed the Kingdom of Northumbria, greatly weakened the Kingdom of Strathclyde, and founded the Kingdom of York. In a major battle in 839, the Vikings killed the King of Fortriu, Eógan mac Óengusa, the King of Dál Riata Áed mac Boanta, and many others. In the aftermath, in the 840s, Cínaed mac Ailpín (Kenneth MacAlpin) became king of the Picts.
During the reign of Cínaed's grandson, Caustantín mac Áeda (900–943), outsiders began to refer to the region as the Kingdom of Alba rather than the Kingdom of the Picts, but it is not known whether this was because a new kingdom was established or because Alba was simply a closer approximation of the Pictish name for the Picts. However, though the Pictish language did not disappear suddenly, a process of Gaelicisation (which may have begun generations earlier) was clearly underway during the reigns of Caustantín and his successors. At some point, probably during the 11th century, all the inhabitants of northern Alba had become fully Gaelicised Scots, and Pictish identity was forgotten. Later, the idea of Picts as a tribe was revived in myth and legend.
The early history of Pictland is unclear. In later periods multiple kings existed, ruling over separate kingdoms, with one king, sometimes two, more or less dominating their lesser neighbours. "De Situ Albanie", a late document, the Pictish Chronicle, the "Duan Albanach" and Irish legends have been used to argue the existence of seven Pictish kingdoms, several of which are known to have had kings or are otherwise attested in the Pictish period.
More small kingdoms may have existed. Some evidence suggests that a Pictish kingdom also existed in Orkney. "De Situ Albanie" is not the most reliable of sources, and the number of kingdoms, one for each of the seven sons of "Cruithne", the eponymous founder of the Picts, may well be grounds enough for disbelief. Regardless of the exact number of kingdoms and their names, the Pictish nation was not a united one.
For most of Pictish recorded history the kingdom of Fortriu appears dominant, so much so that "king of Fortriu" and "king of the Picts" may mean one and the same thing in the annals. This was previously thought to lie in the area around Perth and southern Strathearn; however, recent work has convinced those working in the field that Moray (a name referring to a very much larger area in the High Middle Ages than the county of Moray) was the core of Fortriu.
The Picts are often said to have practised matrilineal succession to the kingship, on the basis of Irish legends and a statement in Bede's history. The kings of the Picts when Bede was writing were Bridei and Nechtan, sons of Der Ilei, who indeed claimed the throne through their mother, the daughter of an earlier Pictish king.
In Ireland, kings were expected to come from among those who had a great-grandfather who had been king. Kingly fathers were not frequently succeeded by their sons, not because the Picts practised matrilineal succession, but because they were usually followed by their own brothers or cousins, more likely to be experienced men with the authority and the support necessary to be king. This was similar to tanistry.
The nature of kingship changed considerably during the centuries of Pictish history. While earlier kings had to be successful war leaders to maintain their authority, kingship became rather less personalised and more institutionalised during this time. Bureaucratic kingship was still far in the future when Pictland became Alba, but the support of the church, and the apparent ability of a small number of families to control the kingship for much of the period from the later 7th century onwards, provided a considerable degree of continuity. In much the same period, the Picts' neighbours in Dál Riata and Northumbria faced considerable difficulties, as the stability of succession and rule that previously benefited them ended.
The later Mormaers are thought to have originated in Pictish times, and to have been copied from, or inspired by, Northumbrian usages. It is unclear whether the Mormaers were originally former kings, royal officials, or local nobles, or some combination of these. Likewise, the Pictish shires and thanages, traces of which are found in later times, are thought to have been adopted from their southern neighbours.
The archaeological record provides evidence of the material culture of the Picts. It tells of a society not readily distinguishable from its British, Gaelic, or Anglo-Saxon neighbours. Although analogy and knowledge of other so-called 'Celtic' societies (a term they never used for themselves) may be a useful guide, these extended across a very large area. Relying on knowledge of pre-Roman Gaul, or of 13th-century Ireland, as a guide to the Picts of the 6th century may be misleading if analogy is pursued too far.
As with most peoples in the north of Europe in Late Antiquity, the Picts were farmers living in small communities. Cattle and horses were an obvious sign of wealth and prestige, sheep and pigs were kept in large numbers, and place names suggest that transhumance was common. Animals were small by later standards, although horses from Britain were imported into Ireland as breed-stock to enlarge native horses. From Irish sources it appears that the elite engaged in competitive cattle-breeding for size, and this may have been the case in Pictland also. Carvings show hunting with dogs, and also, unlike in Ireland, with falcons. Cereal crops included wheat, barley, oats and rye. Vegetables included kale, cabbage, onions and leeks, peas and beans and turnips, and some types no longer common, such as skirret. Plants such as wild garlic, nettles and watercress may have been gathered in the wild. The pastoral economy meant that hides and leather were readily available. Wool was the main source of fibres for clothing, and flax was also common, although it is not clear if they grew it for fibres, for oil, or as a foodstuff. Fish, shellfish, seals, and whales were exploited along coasts and rivers. The importance of domesticated animals suggests that meat and milk products were a major part of the diet of ordinary people, while the elite would have eaten a diet rich in meat from farming and hunting.
No Pictish counterparts to the areas of denser settlement around important fortresses in Gaul and southern Britain, or any other significant urban settlements, are known. Larger, but not large, settlements existed around royal forts, such as at Burghead Fort, or associated with religious foundations. No towns are known in Scotland until the 12th century.
The technology of everyday life is not well recorded, but archaeological evidence shows it to have been similar to that in Ireland and Anglo-Saxon England. Recently evidence has been found of watermills in Pictland. Kilns were used for drying kernels of wheat or barley, not otherwise easy in the changeable, temperate climate.
The early Picts are associated with piracy and raiding along the coasts of Roman Britain. Even in the Late Middle Ages, the line between traders and pirates was unclear, so that Pictish pirates were probably merchants on other occasions. It is generally assumed that trade collapsed with the Roman Empire, but this is to overstate the case. There is only limited evidence of long-distance trade with Pictland, but tableware and storage vessels from Gaul, probably transported up the Irish Sea, have been found. This trade may have been controlled from Dunadd in Dál Riata, where such goods appear to have been common. While long-distance travel was unusual in Pictish times, it was far from unknown as stories of missionaries, travelling clerics and exiles show.
Brochs are popularly associated with the Picts. Although these were built earlier in the Iron Age, with construction ending around 100 AD, they remained in use into and beyond the Pictish period. Crannogs, which may originate in Neolithic Scotland, may have been rebuilt, and some were still in use in the time of the Picts. The most common sort of buildings would have been roundhouses and rectangular timbered halls. While many churches were built in wood, from the early 8th century, if not earlier, some were built in stone.
The Picts are often said to have tattooed themselves, but evidence for this is limited. Naturalistic depictions of Pictish nobles, hunters and warriors, male and female, without obvious tattoos, are found on monumental stones. These stones include inscriptions in Latin and ogham script, not all of which have been deciphered. The well-known Pictish symbols found on standing stones and other artifacts have defied attempts at translation over the centuries. Pictish art can be classed as "Celtic" (a term not coined till the 1850s), and later as Insular. Irish poets portrayed their Pictish counterparts as very much like themselves.
Early Pictish religion is presumed to have resembled Celtic polytheism in general, although only place names remain from the pre-Christian era. When the Pictish elite converted to Christianity is uncertain, but traditions place Saint Palladius in Pictland after he left Ireland, and link Abernethy with Saint Brigid of Kildare. Saint Patrick refers to "apostate Picts", while the poem "Y Gododdin" does not remark on the Picts as pagans. Bede wrote that Saint Ninian (confused by some with Saint Finnian of Moville, who died c. 589), had converted the southern Picts. Recent archaeological work at Portmahomack places the foundation of the monastery there, an area once assumed to be among the last converted, in the late 6th century. This is contemporary with Bridei mac Maelchon and Columba, but the process of establishing Christianity throughout Pictland will have extended over a much longer period.
Pictland was not solely influenced by Iona and Ireland. It also had ties to churches in Northumbria, as seen in the reign of Nechtan mac Der Ilei. The reported expulsion of Ionan monks and clergy by Nechtan in 717 may have been related to the controversy over the dating of Easter, and the manner of tonsure, where Nechtan appears to have supported the Roman usages, but may equally have been intended to increase royal power over the church. Nonetheless, the evidence of place names suggests a wide area of Ionan influence in Pictland. Likewise, the "Cáin Adomnáin" (Law of Adomnán, "Lex Innocentium") counts Nechtan's brother Bridei among its guarantors.
The importance of monastic centres in Pictland was not, perhaps, as great as in Ireland. In areas that have been studied, such as Strathspey and Perthshire, it appears that the parochial structure of the High Middle Ages existed in early medieval times. Among the major religious sites of eastern Pictland were Portmahomack, Cennrígmonaid (later St Andrews), Dunkeld, Abernethy and Rosemarkie. It appears that these are associated with Pictish kings, which argues for a considerable degree of royal patronage and control of the church. Portmahomack in particular has been the subject of recent excavation and research, published by Martin Carver.
The cult of Saints was, as throughout Christian lands, of great importance in later Pictland. While kings might patronise great Saints, such as Saint Peter in the case of Nechtan, and perhaps Saint Andrew in the case of the second Óengus mac Fergusa, many lesser Saints, some now obscure, were important. The Pictish Saint Drostan appears to have had a wide following in the north in earlier times, although he was all but forgotten by the 12th century. Saint Serf of Culross was associated with Nechtan's brother Bridei. It appears, as is well known in later times, that noble kin groups had their own patron saints, and their own churches or abbeys.
Pictish art appears on stones, metalwork and small objects of stone and bone. It uses a distinctive form of the general Celtic Early Medieval development of La Tène style with increasing influences from the Insular art of 7th and 8th century Ireland and Northumbria, and then Anglo-Saxon and Irish art as the Early Medieval period continues. The most conspicuous survivals are the many Pictish stones that are located all over Pictland, from Inverness to Lanarkshire. An illustrated catalogue of these stones was produced by J. Romilly Allen as part of "The Early Christian Monuments of Scotland", with lists of their symbols and patterns. The symbols and patterns consist of animals including the Pictish Beast, the "rectangle", the "mirror and comb", "double-disc and Z-rod" and the "crescent and V-rod", among many others. There are also bosses and lenses with pelta and spiral designs. The patterns are curvilinear with hatchings. The cross-slabs are carved with Pictish symbols, Insular-derived interlace and Christian imagery, though interpretation is often difficult due to wear and obscurity. Several of the Christian images carved on various stones, such as David the harpist, Daniel and the lion, or scenes of St Paul and St Anthony meeting in the desert, have been influenced by the Insular manuscript tradition.
Pictish metalwork is found throughout Pictland (modern-day Scotland) and also further south; the Picts appear to have had a considerable amount of silver available, probably from raiding further south or from payments of subsidy to keep them from doing so. The very large hoard of late Roman hacksilver found at Traprain Law may have originated in either way. The largest hoard of early Pictish metalwork was found in 1819 at Norrie's Law in Fife, but unfortunately much was dispersed and melted down (Scots law on treasure finds has always been unhelpful to preservation). Two famous 7th-century silver and enamel plaques from the hoard have a "Z-rod", one of the Pictish symbols, in a particularly well-preserved and elegant form; unfortunately, few comparable pieces have survived. Over ten heavy silver chains, some over 0.5 m long, have been found from this period; the double-linked Whitecleuch Chain is one of only two that have a penannular linking piece for the ends, with symbol decoration including enamel, which shows how these were probably used as "choker" necklaces.
In the 8th and 9th centuries, after Christianization, the Pictish elite adopted a particular form of the Celtic brooch from Ireland, preferring true penannular brooches with lobed terminals. Some older Irish pseudo-penannular brooches were adapted to the Pictish style, for example the Breadalbane Brooch (British Museum). The St Ninian's Isle Treasure contains the best collection of Pictish forms. Other characteristics of Pictish metalwork are dotted backgrounds or designs and animal forms influenced by Insular art. The 8th century Monymusk Reliquary has elements of Pictish and Irish style.
The Pictish language is extinct. Evidence is limited to place names, the names of people found on monuments, and the contemporary records in other languages. The evidence of place-names and personal names argues strongly that the Picts spoke Insular Celtic languages related to the more southerly Brittonic languages. A number of Ogham inscriptions have been argued to be unidentifiable as Celtic, and on this basis, it has been suggested that non-Celtic languages were also in use.
The absence of surviving written material in Pictish—if the ambiguous "Pictish inscriptions" in the Ogham script are discounted—does not indicate a pre-literate society. The church certainly required literacy in Latin, and could not function without copyists to produce liturgical documents. Pictish iconography shows books being read, and carried, and its naturalistic style gives every reason to suppose that such images were of real life. Literacy was not widespread, but among the senior clergy, and in monasteries, it would have been common enough.
Toponymic evidence demonstrates the existence of a Brittonic language in the Pictish region. Those names containing elements paralleled in Welsh, such as "pant" ("hollow"; Panbride), "aber" ("estuary"; Abernethy) and "maen" ("stone"; Methven) are claimed to indicate regions inhabited by Picts in the past. Some naming elements, such as "pit" ("portion, share"), may have been borrowed into Gaelic, and may refer to previous "shires" or "thanages".
The evidence of place-names may also reveal the advance of Gaelic into Pictland. Atholl, a name meaning "New Ireland", is attested in the early 8th century and may be an indication of this advance. Fortriu also contains place-names suggesting Gaelic settlement or Gaelic influences. A pre-Gaelic interpretation of the name as Athfocla, meaning 'north pass' or 'north way' (as in gateway to Moray), would instead suggest that the Gaelic form Athfotla arose from a misreading of the minuscule c as t.
Medieval Welsh tradition credited the founding of Gwynedd to the Picts and traced their principal royal families—the Houses of Aberffraw and Dinefwr—to Cunedda Wledig, said to have invaded northern Wales from Lothian. | https://en.wikipedia.org/wiki?curid=24632 |
Permutation group
In mathematics, a permutation group is a group "G" whose elements are permutations of a given set "M" and whose group operation is the composition of permutations in "G" (which are thought of as bijective functions from the set "M" to itself). The group of "all" permutations of a set "M" is the symmetric group of "M", often written as Sym("M"). The term "permutation group" thus means a subgroup of the symmetric group. If "M" = {1, 2, ..., "n"}, then Sym("M"), the "symmetric group on n letters", is usually denoted by S"n".
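For example, if "M" = {1, 2, 3}, then Sym("M") = S"3" has 3! = 6 elements: the identity, the three transpositions (1 2), (1 3) and (2 3), and the two 3-cycles (1 2 3) and (1 3 2), written here in the cycle notation introduced below.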
By Cayley's theorem, every group is isomorphic to some permutation group.
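For example, the cyclic group of order 3, with elements "e", "a" and "a"², acts on itself by left multiplication, each element permuting the three group elements; sending "a" to the 3-cycle (1 2 3) realises this group as the permutation group {"e", (1 2 3), (1 3 2)}.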
The way in which the elements of a permutation group permute the elements of the set is called its group action. Group actions have applications in the study of symmetries, combinatorics and many other branches of mathematics, physics and chemistry.
Since a permutation group is a subgroup of a symmetric group, all that is necessary for a set of permutations to satisfy the group axioms and be a permutation group is that it contain the identity permutation, the inverse of each permutation it contains, and be closed under composition of its permutations. A general property of finite groups implies that a finite nonempty subset of a symmetric group is again a group if and only if it is closed under the group operation.
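This criterion can be checked directly. The following minimal sketch in Python (the function names and the tuple encoding, with p[x] as the image of x, are illustrative choices, not a standard interface) tests whether a finite nonempty set of permutations of {0, ..., n-1} is closed under composition, and hence a permutation group:

    def compose(s, t):
        # (s*t)(x) = s(t(x)): the rightmost permutation acts first
        return tuple(s[t[x]] for x in range(len(t)))

    def is_permutation_group(perms):
        # For a finite nonempty set of permutations, closure under
        # composition suffices: identity and inverses then follow.
        perms = set(perms)
        return bool(perms) and all(
            compose(s, t) in perms for s in perms for t in perms)

    # The four rotations of {0, 1, 2, 3} form a cyclic permutation group:
    rotations = {(0, 1, 2, 3), (1, 2, 3, 0), (2, 3, 0, 1), (3, 0, 1, 2)}
    print(is_permutation_group(rotations))        # True
    # A lone transposition is not closed: composed with itself it gives
    # the identity, which is missing from the set.
    print(is_permutation_group({(1, 0, 2, 3)}))   # False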
The degree of a group of permutations of a finite set is the number of elements in the set. The order of a group (of any type) is the number of elements (cardinality) in the group. By Lagrange's theorem, the order of any finite permutation group of degree "n" must divide "n"! since "n"-factorial is the order of the symmetric group "S""n".
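For example, the cyclic group generated by the 4-cycle (1 2 3 4) is a permutation group of degree 4 and order 4; in line with Lagrange's theorem, 4 divides 4! = 24, the order of S"4".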
Since permutations are bijections of a set, they can be represented by Cauchy's "two-line notation". This notation lists each of the elements of "M" in the first row, and for each element, its image under the permutation below it in the second row. If "σ" is a permutation of the set "M" = {"x"1, "x"2, ..., "x""n"} then

\sigma = \begin{pmatrix} x_1 & x_2 & x_3 & \cdots & x_n \\ \sigma(x_1) & \sigma(x_2) & \sigma(x_3) & \cdots & \sigma(x_n) \end{pmatrix}.

For instance, a particular permutation of the set {1,2,3,4,5} can be written as

\sigma = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 2 & 5 & 4 & 3 & 1 \end{pmatrix};

this means that "σ" satisfies "σ"(1)=2, "σ"(2)=5, "σ"(3)=4, "σ"(4)=3, and "σ"(5)=1. The elements of "M" need not appear in any special order in the first row, so this permutation could also be written as

\sigma = \begin{pmatrix} 3 & 2 & 5 & 1 & 4 \\ 4 & 5 & 1 & 2 & 3 \end{pmatrix}.

Permutations are also often written in cyclic notation ("cyclic form"), so that given the set "M" = {1,2,3,4}, a permutation "g" of "M" with "g"(1) = 2, "g"(2) = 4, "g"(4) = 1 and "g"(3) = 3 will be written as (1,2,4)(3), or more commonly, (1,2,4) since 3 is left unchanged; if the objects are denoted by single letters or digits, commas and spaces can also be dispensed with, and we have a notation such as (124). The permutation written above in 2-line notation would be written in cyclic notation as (125)(34).
The product of two permutations is defined as their composition as functions, so "σ"·"π" is the function that maps any element "x" of the set to "σ"("π"("x")). Note that the rightmost permutation is applied to the argument first, because of the way function application is written. Some authors prefer the leftmost factor acting first, in which case permutations are written to the right of their argument.
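To make the right-to-left convention concrete, here is a small illustration in Python (the tuple encoding below, with p[x] as the image of x, is an illustrative choice rather than a standard library interface):

    def compose(s, t):
        # (s*t)(x) = s(t(x)): t is applied first, then s
        return tuple(s[t[x]] for x in range(len(t)))

    sigma = (1, 4, 3, 2, 0)   # one-line notation on {0, ..., 4}
    tau = (4, 1, 2, 0, 3)

    print(compose(sigma, tau))   # (0, 4, 3, 1, 2)
    print(compose(tau, sigma))   # (1, 3, 0, 2, 4): composition is not commutative
| https://en.wikipedia.org/wiki?curid=24634 |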
Protein kinase
A protein kinase is a kinase enzyme that modifies other proteins by chemically adding phosphate groups to them (phosphorylation). Phosphorylation usually results in a functional change of the target protein (substrate) by changing enzyme activity, cellular location, or association with other proteins. The human genome contains about 500 protein kinase genes and they constitute about 2% of all human genes. Protein kinases are also found in bacteria and plants. Up to 30% of all human proteins may be modified by kinase activity, and kinases are known to regulate the majority of cellular pathways, especially those involved in signal transduction.
The chemical activity of a kinase involves removing a phosphate group from ATP and covalently attaching it to one of three amino acids that have a free hydroxyl group. Most kinases act on both serine and threonine, others act on tyrosine, and a number (dual-specificity kinases) act on all three. There are also protein kinases that phosphorylate other amino acids, including histidine kinases that phosphorylate histidine residues.
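As a deliberately naive sketch of the substrate side of this chemistry (it assumes standard one-letter amino acid codes and ignores the sequence context and kinase specificity that real phosphosite prediction depends on), a few lines of Python can list the hydroxyl-bearing residues that could in principle accept a phosphate group:

    # Hydroxyl-bearing residues: serine (S), threonine (T), tyrosine (Y)
    HYDROXYL = {"S": "serine", "T": "threonine", "Y": "tyrosine"}

    def candidate_phosphosites(sequence):
        # Return 1-based (position, residue) pairs for the S/T/Y residues,
        # the three amino acids that protein kinases can phosphorylate.
        return [(i + 1, aa) for i, aa in enumerate(sequence.upper())
                if aa in HYDROXYL]

    print(candidate_phosphosites("ASGKTLPYA"))   # [(2, 'S'), (5, 'T'), (8, 'Y')]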
Because protein kinases have profound effects on a cell, their activity is highly regulated. Kinases are turned on or off by phosphorylation (sometimes by the kinase itself - "cis"-phosphorylation/autophosphorylation), by binding of activator proteins or inhibitor proteins, or small molecules, or by controlling their location in the cell relative to their substrates.
The catalytic subunits of many protein kinases are highly conserved, and several structures have been solved.
Eukaryotic protein kinases are enzymes that belong to a very extensive family of proteins that share a conserved catalytic core. There are a number of conserved regions in the catalytic domain of protein kinases. In the N-terminal extremity of the catalytic domain there is a glycine-rich stretch of residues in the vicinity of a lysine amino acid, which has been shown to be involved in ATP binding. In the central part of the catalytic domain, there is a conserved aspartic acid, which is important for the catalytic activity of the enzyme.
Serine/threonine protein kinases phosphorylate the OH group of serine or threonine (which have similar side-chains). Activity of these protein kinases can be regulated by specific events (e.g., DNA damage), as well as numerous chemical signals, including cAMP/cGMP, diacylglycerol, and Ca2+/calmodulin.
One very important group of protein kinases are the MAP kinases (acronym from: "mitogen-activated protein kinases"). Important subgroups are the kinases of the ERK subfamily, typically activated by mitogenic signals, and the stress-activated protein kinases JNK and p38.
While MAP kinases are serine/threonine-specific, they are activated by combined phosphorylation on serine/threonine and tyrosine residues. Activity of MAP kinases is restricted by a number of protein phosphatases, which remove the phosphate groups that are added to specific serine or threonine residues of the kinase and are required to maintain the kinase in an active conformation.
Two major factors influence activity of MAP kinases:
a) signals that activate transmembrane receptors (either natural ligands or crosslinking agents) and proteins associated with them (mutations that simulate active state)
b) signals that inactivate the phosphatases that restrict a given MAP kinase. Such signals include oxidant stress.
Tyrosine-specific protein kinases phosphorylate tyrosine amino acid residues, and like serine/threonine-specific kinases are used in signal transduction. They act primarily as growth factor receptors and in downstream signaling from growth factors.
These kinases consist of a transmembrane receptor with a tyrosine kinase domain protruding into the cytoplasm. They play an important role in regulating cell division, cellular differentiation, and morphogenesis. More than 50 receptor tyrosine kinases are known in mammals.
The extracellular domain serves as the ligand-binding part of the molecule. It can be a separate unit that is attached to the rest of the receptor by a disulfide bond. The same mechanism can be used to bind two receptors together to form a homo- or heterodimer. The transmembrane element is a single α helix. The intracellular or cytoplasmic domain is responsible for the (highly conserved) kinase activity, as well as several regulatory functions.
Ligand binding causes two reactions: a) dimerization of two monomeric receptor kinases, or stabilization of a loose dimer, and b) autophosphorylation, in which each kinase of the pair phosphorylates the other.
The autophosphorylation causes the two subdomains of the intrinsic kinase to shift, opening the kinase domain for ATP binding. In the inactive form, the kinase subdomains are aligned so that ATP cannot reach the catalytic center of the kinase. When several amino acids suitable for phosphorylation are present in the kinase domain (e.g., the insulin-like growth factor receptor), the activity of the kinase can increase with the number of phosphorylated amino acids; in this case, the first phosphorylation is said to be a "cis"-autophosphorylation, switching the kinase from "off" to "standby".
The active tyrosine kinase phosphorylates specific target proteins, which are often enzymes themselves. An important target is the ras protein signal-transduction chain.
Tyrosine kinases recruited to a receptor following hormone binding are receptor-associated tyrosine kinases and are involved in a number of signaling cascades, in particular those involved in cytokine signaling (but also others, including growth hormone). One such receptor-associated tyrosine kinase is Janus kinase (JAK), many of whose effects are mediated by STAT proteins. ("See JAK-STAT pathway.")
Histidine kinases are structurally distinct from most other protein kinases and are found mostly in prokaryotes as part of two-component signal transduction mechanisms. A phosphate group from ATP is first added to a histidine residue within the kinase, and later transferred to an aspartate residue on a 'receiver domain' on a different protein, or sometimes on the kinase itself. The aspartyl phosphate residue is then active in signaling.
Histidine kinases are found widely in prokaryotes, as well as in plants, fungi and other eukaryotes. The pyruvate dehydrogenase family of kinases in animals is structurally related to histidine kinases, but instead phosphorylates serine residues, and probably does not use a phospho-histidine intermediate.
Some kinases have mixed kinase activities. For example, MEK (MAPKK), which is involved in the MAP kinase cascade, is a mixed serine/threonine and tyrosine kinase and, hence, is a dual-specificity kinase.
Deregulated kinase activity is a frequent cause of disease, in particular cancer, wherein kinases regulate many aspects that control cell growth, movement and death. Drugs that inhibit specific kinases are being developed to treat several diseases, and some are currently in clinical use, including Gleevec (imatinib) and Iressa (gefitinib).
Drug development for kinase inhibitors starts with kinase assays; the lead compounds are usually profiled for specificity before moving into further tests. Many profiling services are available, ranging from fluorescence-based assays to radioisotope-based detection and competition binding assays. | https://en.wikipedia.org/wiki?curid=24635 |
Pisa
Pisa is a city and "comune" in Tuscany, central Italy, straddling the Arno just before it empties into the Ligurian Sea. It is the capital city of the Province of Pisa. Although Pisa is known worldwide for its leaning tower (the bell tower of the city's cathedral), the city of about 91,104 residents (around 200,000 with the metropolitan area) contains more than 20 other historic churches, several medieval palaces, and various bridges across the Arno. Much of the city's architecture was financed from its history as one of the Italian maritime republics.
The city is also home to the University of Pisa, which has a history going back to the 12th century, as well as to the Scuola Normale Superiore di Pisa, founded by Napoleon in 1810, and its offshoot, the Sant'Anna School of Advanced Studies, which are among the most prestigious Superior Graduate Schools in Italy.
The origin of the name, Pisa, is a mystery. While the origin of the city had remained unknown for centuries, the Pelasgi, the Greeks, the Etruscans, and the Ligurians had variously been proposed as founders of the city (for example, as a colony of the ancient city of Pisa, Greece). Archaeological remains from the fifth century BC confirmed the existence of a city at the sea, trading with Greeks and Gauls. The presence of an Etruscan necropolis, discovered during excavations in 1991, confirmed its Etruscan origins.
Ancient Roman authors referred to Pisa as an old city. Strabo attributed Pisa's origins to the mythical Nestor, king of Pylos, after the fall of Troy. Virgil, in his "Aeneid", states that Pisa was already a great center by the times described; the settlers from the Alpheus coast have been credited with the founding of the city in the 'Etruscan lands'. The Virgilian commentator Servius wrote that the Teuti, or Pelops, king of the Pisaeans, founded the town 13 centuries before the start of the common era.
Pisa's maritime role must already have been prominent, since ancient authorities ascribed to it the invention of the naval ram. Pisa took advantage of being the only port along the western coast between Genoa (then a small village) and Ostia. Pisa served as a base for Roman naval expeditions against Ligurians, Gauls, and Carthaginians. In 180 BC, it became a Roman colony under Roman law. In 89 BC, it became a "municipium". Emperor Augustus fortified the colony into an important port and changed its name.
Pisa supposedly was founded on the shore, but due to the alluvial sediments from the Arno and the Serchio, whose mouth lies to the north of the Arno's, the shore has since moved west. Strabo states that even in his day the city stood at some distance from the coast, and it now lies farther inland still. However, it was a maritime city, with ships sailing up the Arno. In the 90s AD, a baths complex was built in the city.
During the last years of the Western Roman Empire, Pisa did not decline as much as the other cities of Italy, probably due to the complexity of its river system and its consequent ease of defence. In the seventh century, Pisa helped Pope Gregory I by supplying numerous ships in his military expedition against the Byzantines of Ravenna: Pisa was the sole Byzantine centre of Tuscia to fall peacefully in Lombard hands, through assimilation with the neighbouring region where their trading interests were prevalent. Pisa began in this way its rise to the role of main port of the Upper Tyrrhenian Sea and became the main trading centre between Tuscany and Corsica, Sardinia, and the southern coasts of France and Spain.
After Charlemagne had defeated the Lombards under the command of Desiderius in 774, Pisa went through a crisis, but soon recovered. Politically, it became part of the duchy of Lucca. In 860, Pisa was captured by Vikings led by Björn Ironside. In 930, Pisa became the county centre (a status it maintained until the arrival of Otto I) within the mark of Tuscia. Lucca was the capital, but Pisa was the most important city: in the middle of the 10th century Liutprand, bishop of Cremona, called Pisa the "capital of the province of Tuscia", and a century later the marquis of Tuscia was commonly referred to as the "marquis of Pisa". In 1003, Pisa was the protagonist of the first communal war in Italy, against Lucca. From the naval point of view, since the 9th century, the emergence of Saracen pirates urged the city to expand its fleet; in the following years, this fleet gave the town an opportunity for more expansion. In 828, Pisan ships assaulted the coast of North Africa. In 871, they took part in the defence of Salerno from the Saracens. In 970, they gave strong support to Otto I's expedition, defeating a Byzantine fleet off the Calabrian coasts.
The power of Pisa as a maritime nation began to grow and reached its apex in the 11th century, when it acquired traditional fame as one of the four main historical maritime republics of Italy.
At that time, the city was a very important commercial centre and controlled a significant Mediterranean merchant fleet and navy. It expanded its powers in 1005 through the sack of a city in the south of Italy. Pisa was in continuous conflict with the 'Saracens' - a medieval term for Arab Muslims - who had their bases in Corsica, for control of the Mediterranean. In 1017, the Sardinian Giudicati were militarily supported by Pisa, in alliance with Genoa, to defeat the Saracen King Mugahid, who had settled a logistic base in the north of Sardinia the year before. This victory gave Pisa supremacy in the Tyrrhenian Sea. When the Pisans subsequently ousted the Genoese from Sardinia, a new conflict and rivalry was born between these mighty marine republics. Between 1030 and 1035, Pisa went on to defeat several rival towns in Sicily and conquer Carthage in North Africa. In 1051–1052, the admiral Jacopo Ciurini conquered Corsica, provoking more resentment from the Genoese. In 1063, Admiral Giovanni Orlandi, coming to the aid of the Norman Roger I, took Palermo from the Saracen pirates. The gold treasure taken from the Saracens in Palermo allowed the Pisans to start building their cathedral and the other monuments which constitute the famous Piazza dei Miracoli.
In 1060, Pisa engaged in its first battle with Genoa. The Pisan victory helped to consolidate its position in the Mediterranean. Pope Gregory VII recognised in 1077 the new "Laws and customs of the sea" instituted by the Pisans, and Emperor Henry IV granted them the right to name their own consuls, advised by a council of elders. This was simply a confirmation of the existing situation, because in those years the marquis had already been excluded from power. In 1092, Pope Urban II awarded Pisa supremacy over Corsica and Sardinia, at the same time raising the town to the rank of archbishopric.
Pisa sacked the Tunisian city of Mahdia in 1088. Four years later, Pisan and Genoese ships helped Alfonso VI of Castile to push El Cid out of Valencia. A Pisan fleet of 120 ships also took part in the First Crusade, and the Pisans were instrumental in the taking of Jerusalem in 1099. On their way to the Holy Land, the ships did not miss the occasion to sack some Byzantine islands; the Pisan crusaders were led by their archbishop Daibert, the future patriarch of Jerusalem. Pisa and the other maritime republics took advantage of the crusade to establish trading posts and colonies in the Eastern coastal cities of the Levant. In particular, the Pisans founded colonies in Antiochia, Acre, Jaffa, Tripoli, Tyre, Latakia, and Accone. They also had other possessions in Jerusalem and Caesarea, plus smaller colonies (with lesser autonomy) in Cairo, Alexandria, and of course Constantinople, where the Byzantine Emperor Alexius I Comnenus granted them special mooring and trading rights. In all these cities, the Pisans were granted privileges and immunity from taxation, but had to contribute to the defence in case of attack. In the 12th century, the Pisan quarter in the eastern part of Constantinople had grown to 1,000 people. For some years of that century, Pisa was the most prominent merchant and military ally of the Byzantine Empire, overcoming Venice itself.
In 1113, Pisa and Pope Paschal II, together with the count of Barcelona and other contingents from Provence and Italy (the Genoese excluded), launched a war to free the Balearic Islands from the Moors; the queen and the king of Majorca were brought in chains to Tuscany. Though the Almoravids soon reconquered the islands, the booty taken helped the Pisans in their magnificent programme of buildings, especially the cathedral, and Pisa gained a role of pre-eminence in the Western Mediterranean.
In the following years, the mighty Pisan fleet, led by archbishop Pietro Moriconi, drove away the Saracens after ferocious combats. Though short-lived, this success of Pisa in Spain increased the rivalry with Genoa. Pisa's trade with the Languedoc and Provence (Noli, Savona, Fréjus, and Montpellier) was an obstacle to the Genoese interests in cities such as Hyères, Fos, Antibes, and Marseille.
The war began in 1119 when the Genoese attacked several galleys on their way to the motherland, and lasted until 1133. The two cities fought each other on land and at sea, but hostilities were limited to raids and pirate-like assaults.
In June 1135, Bernard of Clairvaux took a leading part in the Council of Pisa, asserting the claims of Pope Innocent II against those of Pope Anacletus II, who had been elected pope in 1130 with Norman support but was not recognised outside Rome. Innocent II resolved the conflict with Genoa, establishing the spheres of influence of Pisa and Genoa. Pisa could then, unhindered by Genoa, participate in the conflict of Innocent II against King Roger II of Sicily. Amalfi, one of the maritime republics (though already declining under Norman rule), was conquered on August 6, 1136; the Pisans destroyed the ships in the port, assaulted the castles in the surrounding areas, and drove back an army sent by Roger from Aversa. This victory brought Pisa to the peak of its power and to a standing equal to Venice. Two years later, its soldiers sacked Salerno.
In the following years, Pisa was one of the staunchest supporters of the Ghibelline party. This was much appreciated by Frederick I, who issued two important documents in 1162 and 1165. Apart from jurisdiction over the Pisan countryside, the Pisans were granted freedom of trade in the whole empire, the coast from Civitavecchia to Portovenere, a half of Palermo, Messina, Salerno and Naples, the whole of Gaeta, Mazara, and Trapani, and a street with houses for its merchants in every city of the Kingdom of Sicily. Some of these grants were later confirmed by Henry VI, Otto IV, and Frederick II. They marked the apex of Pisa's power, but also spurred the resentment of cities such as Lucca, Massa, Volterra, and Florence, which saw their aim to expand towards the sea thwarted. The clash with Lucca also concerned the possession of the castle of Montignoso and, mainly, control of the Via Francigena, the main trade route between Rome and France. Last but not least, such a sudden and large increase of power by Pisa could only lead to another war with Genoa.
Genoa had acquired a largely dominant position in the markets of southern France. The war began presumably in 1165 on the Rhône, when an attack on a convoy, directed to some Pisan trade centres on the river, by the Genoese and their ally, the count of Toulouse, failed. Pisa, though, was allied to Provence. The war continued until 1175 without significant victories. Another point of attrition was Sicily, where both the cities had privileges granted by Henry VI. In 1192, Pisa managed to conquer Messina. This episode was followed by a series of battles culminating in the Genoese conquest of Syracuse in 1204. Later, the trading posts in Sicily were lost when the new Pope Innocent III, though removing the excommunication cast over Pisa by his predecessor Celestine III, allied himself with the Guelph League of Tuscany, led by Florence. Soon, he stipulated a pact with Genoa, too, further weakening the Pisan presence in southern Italy.
To counter the Genoese predominance in the southern Tyrrhenian Sea, Pisa strengthened its relationship with its traditional Spanish and French bases (Marseille, Narbonne, Barcelona, etc.) and tried to defy the Venetian rule of the Adriatic Sea. In 1180, the two cities agreed to a nonaggression treaty in the Tyrrhenian and the Adriatic, but the death of Emperor Manuel Comnenus in Constantinople changed the situation. Soon, attacks on Venetian convoys were made. Pisa signed trade and political pacts with Ancona, Pula, Zara, Split, and Brindisi; in 1195, a Pisan fleet reached Pola to defend its independence from Venice, but the Serenissima soon managed to reconquer the rebel sea town.
One year later, the two cities signed a peace treaty, which resulted in favourable conditions for Pisa, but in 1199, the Pisans violated it by blockading the port of Brindisi in Apulia. In the following naval battle, they were defeated by the Venetians. The war that followed ended in 1206 with a treaty in which Pisa gave up all its hopes to expand in the Adriatic, though it maintained the trading posts it had established in the area. From that point on, the two cities were united against the rising power of Genoa and sometimes collaborated to increase the trading benefits in Constantinople.
In 1209, two councils were held in Lerici for a final resolution of the rivalry with Genoa. A 20-year peace treaty was signed, but when in 1220 the emperor Frederick II confirmed his supremacy over the Tyrrhenian coast from Civitavecchia to Portovenere, Genoese and Tuscan resentment against Pisa grew again. In the following years, Pisa clashed with Lucca in Garfagnana and was defeated by the Florentines at Castel del Bosco. Pisa's strong Ghibelline position set the town diametrically against the Pope, who was in a strong dispute with the Empire; indeed, the pope tried to deprive the town of its dominions in northern Sardinia.
In 1238, Pope Gregory IX formed an alliance between Genoa and Venice against the empire, and consequently against Pisa, too. One year later, he excommunicated Frederick II and called for an anti-Empire council to be held in Rome in 1241. On May 3, 1241, a combined fleet of Pisan and Sicilian ships, led by the emperor's son Enzo, attacked a Genoese convoy carrying prelates from northern Italy and France, next to the isle of Giglio (Battle of Giglio), in front of Tuscany; the Genoese lost 25 ships, while about a thousand sailors, two cardinals, and one bishop were taken prisoner. After this outstanding victory, the council in Rome failed, but Pisa was excommunicated. This extreme measure was only removed in 1257. Anyway, the Tuscan city tried to take advantage of the favourable situation to conquer the Corsican city of Aleria and even lay siege to Genoa itself in 1243.
The Ligurian republic of Genoa, however, recovered fast from this blow and won back Lerici, conquered by the Pisans some years earlier, in 1256.
The great expansion in the Mediterranean and the prominence of the merchant class urged a modification in the city's institutions. The system with consuls was abandoned, and in 1230, the new city rulers named a "capitano del popolo" ("people's chieftain") as civil and military leader. In spite of these reforms, the conquered lands and the city itself were harassed by the rivalry between the two families of Della Gherardesca and Visconti. In 1237, the archbishop and the Emperor Frederick II intervened to reconcile the two rivals, but the strains did not cease. In 1254, the people rebelled and imposed 12 "Anziani del Popolo" ("People's Elders") as their political representatives in the commune. They also supplemented the legislative councils, formed of noblemen, with new People's Councils, composed of the main guilds and the chiefs of the People's Companies. These had the power to ratify the laws of the Major General Council and the Senate.
The decline is said to have begun on August 6, 1284, when the numerically superior fleet of Pisa, under the command of Albertino Morosini, was defeated by the brilliant tactics of the Genoese fleet, under the command of Benedetto Zaccaria and Oberto Doria, in the dramatic naval Battle of Meloria. This defeat ended the maritime power of Pisa and the town never fully recovered; in 1290, the Genoese destroyed forever the Porto Pisano (Pisa's port), and covered the land with salt. The region around Pisa did not permit the city to recover from the loss of thousands of sailors from the Meloria, while Liguria guaranteed enough sailors to Genoa. Goods, however, continued to be traded, albeit in reduced quantity, but the end came when the Arno started to change course, preventing the galleys from reaching the city's port up the river. The nearby area also likely became infested with malaria. The true end came in 1324, when Sardinia was entirely lost in favour of the Aragonese.
Always Ghibelline, Pisa tried to build up its power in the course of the 14th century, and even managed to defeat Florence in the Battle of Montecatini (1315), under the command of Uguccione della Faggiuola. Eventually, however, after a long siege, Pisa was occupied by the Florentines in 1405. The Florentines corrupted the "capitano del popolo" ("people's chieftain"), Giovanni Gambacorta, who opened the city gate of San Marco by night; Pisa was thus never conquered by force of arms. In 1409, Pisa was the seat of a council that tried to settle the question of the Great Schism. In the 15th century, access to the sea became more difficult, as the port was silting up and was cut off from the sea. When Charles VIII of France invaded the Italian states in 1494 to claim the Kingdom of Naples, Pisa reclaimed its independence as the Second Pisan Republic.
The new freedom did not last long: 15 years of battles and sieges followed, but the Florentine troops led by Antonio da Filicaja, Averardo Salviati and Niccolò Capponi never managed to conquer the city. Vitellozzo Vitelli and his brother Paolo were the only commanders who actually managed to break Pisa's strong defences, making a breach in the Stampace bastion in the south-western part of the walls, but they did not enter the city. For that, they were suspected of treachery and Paolo was put to death. However, Pisa's resources were running low, and in the end the city was sold to the Visconti family of Milan and eventually to Florence again. Its role as the major port of Tuscany passed to Livorno. Pisa acquired a mainly cultural role, spurred by the presence of the University of Pisa, created in 1343, and later reinforced by the Scuola Normale Superiore di Pisa (1810) and the Sant'Anna School of Advanced Studies (1987).
Pisa was the birthplace of the important early physicist Galileo Galilei. It is still the seat of an archbishopric. Besides its educational institutions, it has become a light industrial centre and a railway hub. It suffered repeated destruction during World War II.
Since the early 1950s, the US Army has maintained Camp Darby just outside Pisa, which is used by many US military personnel as a base for vacations in the area.
Pisa experiences a Mediterranean climate (Köppen climate classification "Csa"). The city is characterized by cool to mild winters and hot summers. This transitional climate denies Pisa the rain-free summers typical of central and southern Italy: even summer, the driest season, sees occasional rain showers. Rainfall peaks in the autumn.
While the bell tower of the cathedral, known as "the leaning Tower of Pisa", is the most famous image of the city, it is one of many works of art and architecture in the city's Piazza del Duomo, also known since the 20th century as the Piazza dei Miracoli (Square of Miracles), to the north of the old town centre. The square also houses the Duomo (the Cathedral), the Baptistry and the Campo Santo (the monumental cemetery). The medieval complex includes these four sacred buildings, the hospital and a few palaces. The whole complex is kept by the "Opera (fabrica ecclesiae) della Primaziale Pisana", an old non-profit foundation that has operated since the building of the Cathedral (1063) to maintain the sacred buildings. The area is framed by medieval walls kept by the municipal administration.
Other sights include:
San Pietro in Vinculis. Known as "San Pierino", it is an 11th-century church with a crypt and a cosmatesque mosaic on the floor of the main nave.
Pisa hosts the University of Pisa, especially renowned in the fields of Physics, Mathematics, Engineering and Computer Science. The Scuola Normale Superiore di Pisa and the Sant'Anna School of Advanced Studies, Italy's élite academic institutions, are noted mostly for research and the education of graduate students.
Construction of a new leaning tower of glass and steel, 57 meters tall and containing offices and apartments, was scheduled to start in summer 2004 and to take four years. Designed by Dante Oscar Benini, the project drew criticism.
Located at: Scuola Normale Superiore di Pisa – Piazza dei Cavalieri, 7 – 56126 Pisa (Italia)
Located at: Scuola Superiore Sant'Anna, P.zza Martiri della Libertà, 33 – 56127 – Pisa (Italia)
Located at: Università di Pisa – Lungarno Pacinotti, 43 – 56126 Pisa (Italia)
For people born in Pisa, see the corresponding list; among notable non-natives long resident in the city:
Pisa is a one-hour drive from Florence. One can also take a train directly to Florence from Pisa's central railway station (Pisa Centrale). Local buses connect Pisa with all the neighbouring cities (travel to Pontedera, then take a bus to Volterra, San Miniato, etc.). Taxis are available on request at Pisa International Airport and at the central station.
Pisa has an international airport, known as Pisa International Airport, located in the San Giusto neighbourhood of the city. The airport has a people mover system called "Pisamover", opened in March 2017, which connects the airport with the Pisa Centrale railway station. It is based on a driverless "horizontal funicular" that covers the distance in 5 minutes, running at a 5-minute frequency, with an intermediate stop at the San Giusto & Aurelia parking station.
The city is served by two railway stations available for passengers: Pisa Centrale and Pisa San Rossore.
"Pisa Centrale" is the main railway station and is located along the Tyrrhenic railway line. It connects Pisa directly with several other important Italian cities such as Rome, Florence, Genoa, Turin, Naples, Livorno, and Grosseto.
"Pisa San Rossore" links the city with Lucca (20 minutes from Pisa) and Viareggio and is also reachable from "Pisa Centrale". It is a minor railway station located near the Leaning Tower zone.
There was another station, Pisa Aeroporto, situated next to the airport, with services to Pisa Centrale and Florence. It was closed on 15 December 2013 to allow the construction of the people mover.
Pisa has two exits on the A11 Florence-Pisa road and on the A12 Genoa-Livorno road, Pisa Nord and Pisa Centro-aeroporto.
Pisa Centro leads visitors to the city centre.
Parking: Pratale (San Jacopo), Pietrasantina (Via Pietrasantina), Piazza Carrara, Lungarni.
Football is the main sport in Pisa; the local team, A.C. Pisa, currently plays in the Lega Pro (the third highest football division in Italy), and has had a top flight history throughout the 1980s and the 1990s, featuring several world-class players such as Diego Simeone, Christian Vieri and Dunga during this time. The club play at the Arena Garibaldi – Stadio Romeo Anconetani, opened in 1919 and with a capacity of 25,000.
Shooting was one of the first sports to have its own association in Pisa. The Società del Tiro a Segno di Pisa was founded on July 9, 1862. In 1885, it acquired its own training field. The shooting range was almost completely destroyed during World War II.
In Pisa there was a festival and game, the "Gioco del Ponte" (Game of the Bridge), which was celebrated (in some form) in Pisa from perhaps the 1200s down to 1807. From the end of the 1400s the game took the form of a mock battle fought upon Pisa's central bridge ("Ponte di Mezzo"). The participants wore quilted armor and the only offensive weapon allowed was the "targone", a shield-shaped, stout board with precisely specified dimensions. Hitting below the belt was not allowed. Two opposing teams started at opposite ends of the bridge. The object of each team was to penetrate, drive back, and disperse the opponents' ranks and thereby drive them backwards off the bridge. The struggle was limited to forty-five minutes. Victory or defeat was immensely important to the team players and their partisans, but sometimes the game was fought to a draw and both sides celebrated. In 1927 the tradition was revived by college students as an elaborate costume parade. In 1935 Vittorio Emanuele III and the royal family witnessed the first revival of a modern version of the game, which has been pursued in the 20th and 21st centuries with some interruptions and varying degrees of enthusiasm by Pisans and their civic institutions.
Pisa is twinned with: | https://en.wikipedia.org/wiki?curid=24636 |
Pentium FDIV bug
The Pentium FDIV bug is a hardware bug affecting the floating-point unit (FPU) of the early Intel Pentium processors. Because of the bug, the processor might return incorrect binary floating-point results when dividing certain pairs of numbers. The bug was discovered in 1994 by Professor Thomas R. Nicely at Lynchburg College. Intel attributed the error to missing entries in the lookup table used by the floating-point division circuitry.
The severity of the FDIV bug is debated. Though rarely encountered by most users ("Byte" magazine estimated that 1 in 9 billion floating point divides with random parameters would produce inaccurate results), both the flaw and Intel's initial handling of the matter were heavily criticized by the tech community.
In December 1994, Intel recalled the defective processors. In January 1995, Intel announced "a pre-tax charge of $475 million against earnings, ostensibly the total cost associated with replacement of the flawed processors."
The affected Pentium chips use the Sweeney, Robertson, and Tocher (SRT) division algorithm. It is implemented as a programmable logic array with 2,048 cells, of which 1,066 cells should have been populated with one of five values: −2, −1, 0, +1, +2. On the buggy chips, five cells that should have contained the value +2 were missing, instead returning 0.
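To illustrate the failure mode, the following toy model runs a radix-4 SRT-style division with quotient digits in {−2, −1, 0, +1, +2}. It is only a sketch, not Intel's actual circuit: the real chip indexed its 2,048-cell lookup array by a few high-order bits of the partial remainder and divisor, whereas here the hypothetical "missing cells" are modelled as a simple predicate on the exact digit-selection ratio. The point it demonstrates is real, though: once a selection that should return +2 returns 0 instead, later digits cannot absorb the error, because +2 is already the largest digit available.

```python
# Toy model of radix-4 SRT division (quotient digits in {-2, -1, 0, +1, +2}).
# Not Intel's actual circuit: the faulty lookup cells are approximated here
# by a predicate on the exact ratio 4*p/d rather than on truncated bits.
from fractions import Fraction

def srt_divide(n, d, steps=30, buggy=False):
    """Approximate n/d using `steps` radix-4 SRT iterations."""
    p, d = Fraction(n), Fraction(d)        # partial remainder and divisor
    q, scale = Fraction(0), Fraction(1)
    # Normalise so |p| <= (2/3)*d, the invariant radix-4 SRT maintains.
    while p > Fraction(2, 3) * d:
        p /= 4
        scale *= 4
    for _ in range(steps):
        ratio = 4 * p / d
        digit = max(-2, min(2, round(ratio)))   # ideal digit selection
        if buggy and digit == 2 and ratio > Fraction(17, 10):
            digit = 0                           # "missing cell" yields 0
        p = 4 * p - digit * d                   # remainder recurrence
        scale /= 4
        q += digit * scale
    return q

good = srt_divide(4195835, 3145727)
bad = srt_divide(4195835, 3145727, buggy=True)
print(float(good))   # ~1.333820449136241 (correct)
print(float(bad))    # ~1.33268...        (wrong from the third decimal place)
```

In the flawless run the partial remainder stays within its invariant bound and the quotient converges on the true ratio; in the buggy run the single wrong digit leaves an error that the remaining digits, capped at +2, can never repay.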
Thomas Nicely, a professor of mathematics at Lynchburg College, had written code to enumerate primes, twin primes, prime triplets, and prime quadruplets. Nicely noticed some inconsistencies in the calculations on June 13, 1994, shortly after adding a Pentium system to his group of computers, but was unable to eliminate other factors (such as programming errors, motherboard chipsets, etc.) until October 19, 1994. On October 24, 1994, he reported the issue to Intel. According to Nicely, his contact person at Intel later admitted that Intel had been aware of the problem since May 1994, when the flaw was discovered by Tom Kraljevic, a Purdue University co-op student working for Intel in Hillsboro, Oregon, during testing of the FPU for its new P6 core, first used in the Pentium Pro.
On October 30, 1994, Nicely sent an email describing the error he had discovered in the Pentium floating point unit to various contacts, requesting reports of testing for the flaw on 486-DX4s, Pentiums and Pentium clones.
This flaw in the Pentium FPU was quickly verified by other people around the Internet, and became known as the Pentium FDIV bug (FDIV is the x86 assembly language mnemonic for floating-point division). One example was found where the division result returned by the Pentium was off by about 61 parts per million.
The story first appeared in the press on November 7, 1994, in an article in "Electronic Engineering Times", "Intel fixes a Pentium FPU glitch" by Alexander Wolfe.
The story was subsequently picked up by CNN in a segment aired on November 21, 1994. This brought it into widespread public prominence.
Publicly, Intel acknowledged the floating-point flaw, but claimed that it was not serious and would not affect most users. Intel offered to replace the processors of users who could prove that they were affected. However, although most independent estimates found the bug to be of little importance, with a negligible effect on most users, it caused a great public outcry. Companies like IBM (whose IBM 5x86C microprocessor competed at that time with the Intel Pentium line) joined the condemnation.
On December 20, 1994, in response to mounting public pressure, Intel offered to replace all flawed Pentium processors on request. Although it turned out that only a small fraction of Pentium owners bothered to get their chips replaced, the financial impact on the company was significant. On January 17, 1995, Intel announced "a pre-tax charge of $475 million against earnings, ostensibly the total cost associated with replacement of the flawed processors." Some of the defective chips were later turned into key rings by Intel.
A 1995 article in "Science" describes the value of number theory problems in discovering computer bugs and gives the mathematical background and history of Brun's constant, the problem Nicely was working on when he discovered the bug.
The problem occurred only in some models of the original Pentium family, those with a clock speed of less than 120 MHz. The Intel Processor Frequency ID Utility checks for the presence of this bug on affected models.
The ten affected processors are listed below. The 39 S-specs of those processors are not listed in the Intel processor specification finder web page.
Some Intel 80486 OverDrive and Pentium Overdrive models have also been known to exhibit the FDIV bug, as well as the F00F bug.
The presence of the bug can be checked manually by performing the following calculation in any application that uses native floating point numbers, including the Windows Calculator or Microsoft Excel in Windows 95/98.
The correct value is 4,195,835 / 3,145,727 = 1.333820449136241002.
When converted to the hexadecimal values used by the processor, 4,195,835 = 0x4005FB and 3,145,727 = 0x2FFFFF. The '5' in 0x4005FB triggers the fault in the FPU control logic. As a result, the value returned by a flawed Pentium processor in this case is incorrect at or beyond four significant digits: a flawed chip returns 4,195,835 / 3,145,727 = 1.333739068902037589.
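As a minimal sketch (assuming any Python interpreter, which performs the division on the host FPU), the classic check can be automated as follows; on a correctly functioning processor the two values agree to double precision:

```python
# Classic FDIV test division: on a flawed Pentium the hardware quotient
# diverges from the correct value around the fourth significant digit.
EXPECTED = 1.333820449136241          # correct quotient to double precision

result = 4195835.0 / 3145727.0
if abs(result - EXPECTED) > 1e-9:
    print("FDIV bug detected: got", result)
else:
    print("Division correct:", result)
```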
Users can check whether their processor has the issue using Device Manager. Once in Device Manager, users should expand "System devices", locate and click on "Numeric data processor", then click the Properties button. Once the new Properties window appears, click the Settings tab.
If the processor does not have the FDIV issue, the following message will be seen:
"Your computer's numeric data processor has passed all diagnostic tests and appears to be working properly.",
Otherwise, the following message will appear:
"The numeric processor in this computer can sometimes compute inaccurate results when dividing large numbers"
Options are then provided at the bottom of the Settings tab to "Always use", "Use only if [it] passes all diagnostics" or "Never use".
Users can run the pentnt command included with Windows NT 3.51, NT 4.0, 2000, XP, and Server 2003. The computer needs to be restarted for changes to take effect. The pentnt utility is deprecated and not included in current versions of Windows.
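The utility took three switches; according to Microsoft's documentation of the utility (a paraphrase from memory, so the exact behaviour should be verified there), "-c" enabled conditional emulation, in which floating-point emulation is used only if the flawed FPU is detected, "-f" forced emulation on unconditionally, and "-o" turned forced emulation off again, re-enabling the hardware FPU.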
The command-syntax is:
pentnt [-c] [-f] [-o] | https://en.wikipedia.org/wiki?curid=24637 |
Percussion instrument
A percussion instrument is a musical instrument that is sounded by being struck or scraped by a beater (including attached or enclosed beaters or rattles); struck, scraped or rubbed by hand; or struck against another similar instrument. The percussion family is believed to include the oldest musical instruments, following the human voice.
The percussion section of an orchestra most commonly contains instruments such as the timpani, snare drum, bass drum, cymbals, triangle and tambourine. However, the section can "also" contain non-percussive instruments, such as whistles and sirens, or a blown conch shell. Percussive techniques can even be applied to the human body itself, as in body percussion. On the other hand, keyboard instruments, such as the celesta, are not normally part of the percussion section, but keyboard percussion instruments such as the glockenspiel and xylophone (which do not have piano keyboards) are included.
Percussion instruments are most commonly divided into two classes: pitched percussion instruments, which produce notes with an identifiable pitch, and unpitched percussion instruments, which produce notes or sounds of indefinite pitch.
Percussion instruments may play not only rhythm, but also melody and harmony.
Percussion is commonly referred to as "the backbone" or "the heartbeat" of a musical ensemble, often working in close collaboration with bass instruments, when present. In jazz and other popular music ensembles, the pianist, bassist, drummer and sometimes the guitarist are referred to as the rhythm section. Most classical pieces written for full orchestra since the time of Haydn and Mozart are orchestrated to place emphasis on the strings, woodwinds, and brass. However, often at least one pair of timpani is included, though they rarely play continuously. Rather, they serve to provide additional accents when needed. In the 18th and 19th centuries, other percussion instruments (like the triangle or cymbals) were used, again generally sparingly. The use of percussion instruments became more frequent in 20th-century classical music.
In almost every style of music, percussion plays a pivotal role. In military marching bands and pipes and drums, it is the beat of the bass drum that keeps the soldiers in step and at a regular speed, and it is the snare that provides that crisp, decisive air to the tune of a regiment. In classic jazz, one almost immediately thinks of the distinctive rhythm of the hi-hats or the ride cymbal when the word "swing" is spoken. In more recent popular-music culture, it is almost impossible to name three or four rock, hip-hop, rap, funk or even soul charts or songs that do not have some sort of percussive beat keeping the tune in time.
Because of the diversity of percussive instruments, it is not uncommon to find large musical ensembles composed entirely of percussion. Rhythm, melody, and harmony are all represented in these ensembles.
Music for pitched percussion instruments can be notated on a staff with the same treble and bass clefs used by many non-percussive instruments. Music for percussive instruments without a definite pitch can be notated with a specialist rhythm or percussion clef. The guitar also has a special "tab" staff. More often, a bass clef is substituted for the rhythm clef.
Percussion instruments are classified by various criteria sometimes depending on their construction, ethnic origin, function within musical theory and orchestration, or their relative prevalence in common knowledge.
The word "percussion" derives from the Latin terms "percussio" (to beat or strike, in the musical sense) and "percussus" (a noun meaning "a beating"). As a noun in contemporary English, Wiktionary describes it as the collision of two bodies to produce a sound. The term is not unique to music, but has application in medicine and weaponry, as in percussion cap. However, all known uses of "percussion" appear to share a similar lineage beginning with the original Latin "percussus". In a musical context, then, the term "percussion instruments" may have been coined originally to describe a family of musical instruments including drums, rattles, metal plates, or blocks that musicians beat or struck to produce sound.
The Hornbostel–Sachs system has no high-level section for "percussion". Most percussion instruments, as the term is normally understood, are classified as idiophones and membranophones. However, the term "percussion" is used at lower levels of the Hornbostel–Sachs hierarchy, including (among other uses) to identify instruments struck with a non-sonorous object (hand, stick, striker) or struck against a non-sonorous object (the human body, the ground). This is opposed to "concussion", which refers to instruments with two or more complementary sonorous parts that strike against each other. For example:
111.1 "Concussion idiophones or clappers", played in pairs and beaten against each other, such as zills and clapsticks.
111.2 "Percussion idiophones", includes many percussion instruments played with the hand or by a percussion mallet, such as the hang, gongs and the xylophone, but not drums and only some cymbals.
21 "Struck drums", includes most types of drum, such as the timpani, snare drum, and tom-tom.
412.12 "Percussion reeds", a class of wind instrument unrelated to "percussion" in the more common sense
There are many instruments that have some claim to being percussion, but are classified otherwise:
Percussion instruments are sometimes classified as pitched or unpitched. While valid, this classification is widely seen as inadequate. Rather, it may be more informative to describe percussion instruments in regards to one or more of the following four paradigms:
Many texts, including "Teaching Percussion" by Gary Cook of the University of Arizona, begin by studying the physical characteristics of instruments and the methods by which they can produce sound. This is perhaps the most scientifically pleasing assignment of nomenclature whereas the other paradigms are more dependent on historical or social circumstances. Based on observation and experimentation, one can determine how an instrument produces sound and then assign the instrument to one of the following four categories:
"Idiophones produce sounds through the vibration of their entire body." Examples of idiophones:
Most objects commonly known as drums are membranophones. Membranophones produce sound when the membrane or head is struck with a hand, mallet, stick, beater, or improvised tool.
Examples of membranophones:
Most instruments known as chordophones are defined as string instruments, wherein the sound is derived from the vibration of a string, but some, such as the following examples, "also" fall under percussion instruments.
Most instruments known as aerophones are defined as wind instruments, such as the saxophone, whereby sound is produced by a stream of air being blown through the object. Although most aerophones are played by specialist players who are trained for that specific instrument, in a traditional ensemble setting aerophones are played by a percussionist, generally due to the instrument's unconventional nature. Examples of aerophones played by percussionists:
When classifying instruments by function it is useful to note if a percussion instrument makes a definite pitch or indefinite pitch.
For example, some percussion instruments such as the marimba and timpani produce an obvious fundamental pitch and can therefore play melody and serve harmonic functions in music. Other instruments such as crash cymbals and snare drums produce sounds with such complex overtones and a wide range of prominent frequencies that no pitch is discernible.
Percussion instruments in this group are sometimes referred to as pitched or tuned.
Examples of percussion instruments with definite pitch:
Instruments in this group are sometimes referred to as non-pitched, unpitched, or untuned. Traditionally these instruments are thought of as making a sound that contains such complex frequencies that no discernible pitch can be heard.
In fact many traditionally unpitched instruments, such as triangles and even cymbals, have also been produced as tuned sets.
Examples of percussion instruments with indefinite pitch:
It is difficult to define what is common knowledge but there are instruments percussionists and composers use in contemporary music that most people wouldn't consider musical instruments. It is worthwhile to "try" to distinguish between instruments based on their acceptance or consideration by a general audience.
For example, most people would not consider an anvil, a brake drum (on a vehicle with drum brakes, the circular hub the brake shoes press against), or a fifty-five-gallon oil barrel to be musical instruments, yet composers and percussionists use these objects.
Percussion instruments generally fall into the following categories:
One pre-20th-century example of found percussion is the use of cannon, usually loaded with blank charges, in Tchaikovsky's "1812 Overture". John Cage, Harry Partch, Edgard Varèse, and Peter Schickele, all noted composers, created entire pieces of music using unconventional instruments. Beginning in the early 20th century, perhaps with "Ionisation" by Edgard Varèse, which used air-raid sirens among other things, composers began to require that percussionists invent or find objects to produce desired sounds and textures. Another example is the use of a hammer and saw in Penderecki's "De Natura Sonoris No. 2". By the late 20th century, such instruments were common in modern percussion ensemble music and popular productions, such as the off-Broadway show "Stomp". The rock band Aerosmith used a number of unconventional instruments in their song "Sweet Emotion", including shotguns, brooms, and a sugar bag. The metal band Slipknot is well known for playing unusual percussion items, having two percussionists in the band. Along with deep-sounding drums, their sound includes hitting baseball bats and other objects on beer kegs to create a distinctive sound.
It is not uncommon to discuss percussion instruments in relation to their cultural origin. This led to a division between instruments considered common or modern, and folk instruments with significant history or purpose within a geographic region or culture.
This category includes instruments that are widely available and popular throughout the world:
The percussionist uses various objects to strike a percussion instrument to produce sound.
The general term for a musician who plays percussion instruments is "percussionist" but the terms listed below often describe specialties: | https://en.wikipedia.org/wiki?curid=24638 |
Press Gang
Press Gang is a British children's television comedy-drama consisting of 43 episodes across five series that were broadcast from 1989 to 1993. It was produced by Richmond Film & Television for Central, and screened on the ITV network in its regular weekday afternoon children's strand, Children's ITV, typically in a 4:45 pm slot (days varied over the course of the run).
Aimed at older children and teenagers, the programme was based on the activities of a children's newspaper, the "Junior Gazette", produced by pupils from the local comprehensive school. In later series it was depicted as a commercial venture. The show interspersed comedic elements with the dramatic. As well as addressing interpersonal relationships (particularly in the Lynda-Spike story arc), the show tackled issues such as solvent abuse, child abuse and firearms control.
Written by ex-teacher Steven Moffat, more than half the episodes were directed by Bob Spiers, a noted British comedy director who had previously worked on classics such as "Fawlty Towers". Critical reception was very positive, particularly for the quality of the writing, and the series has attracted a cult following with a wide age range.
Famous journalist Matt Kerr (Clive Wood) arrives from Fleet Street to edit the local newspaper. He sets up a junior version of the paper, "The Junior Gazette", to be produced by pupils from the local comprehensive school before and after school hours.
Some of the team are "star pupils", but others have reputations for delinquency. One such pupil, Spike Thompson (Dexter Fletcher), is forced to work on the paper rather than being expelled from school. He is immediately attracted to editor Lynda Day (Julia Sawalha), but they bicker, throwing one-liners at each other. Their relationship develops and they have an on-off relationship. They regularly discuss their feelings, especially in the concluding episodes of each series. In the final episode of the third series, "Holding On", Spike unwittingly expresses his strong feelings to Lynda while being taped. Jealous of his American girlfriend, Zoe, Lynda puts the cassette on Zoe's personal stereo, ruining their relationship. The on-screen chemistry between the two leads was reflected off-screen as they became an item for several years.
Although the Lynda and Spike story arc runs throughout the series, most episodes feature self-contained stories and sub-plots. Amongst lighter stories, such as one about Colin accidentally attending a funeral dressed as a pink rabbit, the show tackled many serious issues. Jeff Evans, writing in the "Guinness Television Encyclopedia", writes that the series adopts a "far more adult approach" than "previous efforts in the same vein" such as "A Bunch of Fives." Some critics also compared it with "Hill Street Blues", "Lou Grant" "and other thoughtful US dramas, thanks to its realism and its level-headed treatment of touchy subjects." The first series approached solvent abuse in "How To Make A Killing", and the NSPCC assisted in the production of the "Something Terrible" episodes about child abuse. The team were held hostage by a gun enthusiast in series three's "The Last Word", while the final episode approaches drug abuse. The issue-led episodes served to develop the main characters, so that "Something Terrible" is more "about Colin's redemption [from selfish capitalist], rather than Cindy's abuse."
According to the British Film Institute, ""Press Gang" managed to be perhaps the funniest children's series ever made and at the same time the most painfully raw and emotionally honest. The tone could change effortlessly and sensitively from farce to tragedy in the space of an episode." Although the series is sometimes referred to as a comedy, Moffat insists that it is a drama with jokes in it. The writer recalls "a long running argument with Geoff Hogg (film editor on "Press Gang") about whether "Press Gang" was comedy. He insisted that it was and I said it wasn't – it was just funny." Some innuendo leads Moffat to claim that it "had the dirtiest jokes in history; we got away with tons of stuff ... We nearly got away with a joke about anal sex, but they spotted it at the last minute." In one episode Lynda says she's going to "butter him up", and, when asked (while on a date in a hotel's restaurant) if he was staying at the hotel, Colin replies "I shouldn't think so: it's only the first date."
Jeff Evans also comments that the series was filmed cinematically, dabbling in "dream sequences, flashbacks, fantasies and, on one occasion, a "Moonlighting"-esque parody of the film "It's a Wonderful Life"." The show had a strong awareness of continuity, with some stories, incidents and minor characters referred to throughout the series. Actors who played short-term characters in the first two series were invited back to reprise their roles in future episodes. David Jefford (Alex Crockett) was resurrected from 1989's "Monday – Tuesday" to appear in the final episode "There Are Crocodiles", while the same actress (Aisling Flitton) who played a wrong number in "Love and the Junior Gazette" was invited to reprise her character for the third series episode "Chance is a Fine Thing." "Attention to detail" such as this is, according to Paul Cornell, "one of the numerous ways that the series respects the intelligence of its viewers."
After the team leaves school, the paper gains financial independence and runs commercially. Assistant editor Kenny (Lee Ross) leaves at the end of series three to be replaced by Julie (Lucy Benjamin), who was the head of the graphics team in series one.
Bill Moffat, a headmaster from Glasgow, had an idea for a children's television programme called "The Norbridge Files". He showed it to a producer who visited his school, Thorn Primary School in Johnstone, Renfrewshire, when it was used as the location for an episode of Harry Secombe's "Highway". Producer Sandra C. Hastie liked the idea and showed it to her future husband Bill Ward, co-owner of her company Richmond Films and Television. When she requested a script, Moffat suggested that his 25-year-old son Steven, an English teacher, should write it. Hastie said that it was "the best ever first script" that she had read.
All 43 episodes were written by Steven Moffat. During production of series two, he was having an unhappy personal life after the break-up of his first marriage. His wife's new lover was represented in the episode "The Big Finish?" by the character Brian Magboy (Simon Schatzberger), a name inspired by Brian: Maggie's boy. Moffat brought in the character so that all sorts of unfortunate things would happen to him, such as having a typewriter dropped on his foot. This period in Moffat's life would also be reflected in his sitcom "Joking Apart".
Central Independent Television had confidence in the project, so rather than the show being shot at their studios in Nottingham as planned, they granted Richmond a £2 million budget. This enabled it to be shot on 16 mm film, rather than the regular, less expensive videotape, and on location, making it very expensive compared with most children's television. These high production costs almost led to its cancellation at the end of the second series, by which time Central executive Lewis Rudd was unable to commission programmes by himself.
More than half of the episodes were directed by Bob Spiers, a noted British comedy director who had previously worked on "Fawlty Towers" amongst many other programmes. He would work again with Moffat on his sitcom "Joking Apart" and "Murder Most Horrid", and with Sawalha on "Absolutely Fabulous". According to Moffat, Spiers was the "principal director" taking an interest in the other episodes and setting the visual style of the show. Spiers particularly used tracking shots, sometimes requiring more dialogue to be written to accommodate the length of the shot. The other directors would come in and "do a Spiers". All of the directors were encouraged to attend the others' shoots so that the visual style would be consistent.
The first two episodes were directed by Colin Nutley. However, he was unhappy with the final edit and requested that his name be removed from the credits. Lorne Magory directed many episodes, notably the two-part stories "How To Make A Killing" and "The Last Word." One of the founders of Richmond Films and Television, Bill Ward, directed three episodes, and Bren Simson directed some of series two. The show's cinematographer James Devis took the directorial reins for "Windfall", the penultimate episode.
Whilst the show was set in the fictional town of Norbridge, it was mostly filmed in Uxbridge, in the west of Greater London. Many of the scenes were shot at Haydon School in Pinner. The first series was filmed entirely on location, but after the demolition of the building used as the original newspaper office, interior shots were filmed in Pinewood Studios for the second series, and the exterior of the building was not seen beyond that series. Subsequent series were filmed at Lee International Studios at Shepperton (series three and four) and Twickenham Studios (series five).
The theme music was composed by Peter Davis, John Mealing and John G. Perry; after the second series, Davis alone composed the series' music as principal composer. The opening titles show the main characters striking a pose, with the name of the respective actor in a typewriter-style typeface. Steven Moffat and Julia Sawalha were not very impressed with the opening titles when discussing them for a DVD commentary in 2004. They were re-recorded for series three, in the same style, to address the actors' ages and alterations to the set.
Many of the closing titles in the first two series were accompanied by dialogue from two characters. Episodes that ended on a particularly sombre tone, such as "Monday-Tuesday" and "Yesterday's News", used only appropriately sombre music to accompany the end credits. After an emphatic climax, "At Last a Dragon" used an enhanced version of the main theme with more extravagant use of electric guitar. Moffat felt that the voiceovers worked well in the first series, but that they were not as good in the second. Hastie recalls that Moffat was "extremely angry" that "Drop the Dead Donkey" had adopted the style. They were dropped after the second series. The cast, according to Moffat, were "grumpy with having to turn up to a recording studio to record them."
Lynda Day (Julia Sawalha) is the editor of the "Junior Gazette". She is strong and opinionated, and is feared by many of her team. Moffat has said that the character was partly based on the show's "ball-breaking" producer, Sandra C. Hastie. Although she appears very tough, she occasionally exposes her feelings. She quits the paper at the end of "Monday-Tuesday", and in "Day Dreams" laments "Why do I get everything in my whole stupid life wrong?" Intimidated by socialising, she hiccups at the idea. She is so nervous at a cocktail party, in "At Last a Dragon", that she attempts to leave on several occasions. The mixture of Lynda's sensitive side and her self-sufficient attitude is illustrated in the series' final episode "There Are Crocodiles." Reprimanding the ghost of Gary (Mark Sayers), who died after taking a drug overdose, she says:
Look, I'm sorry you're dead, okay? I "do" care. But to be perfectly honest with you, I don't care a lot. You had a choice, you took the drugs, you died. Are you seriously claiming no one warned you it was dangerous? ... I mean, have you had a look at the world lately? ... There's plenty of stuff going on that kills you and you don't get warned at all. So sticking your head in a crocodile you were told about is not calculated to get my sympathy.
James "Spike" Thomson (Dexter Fletcher) is an American delinquent, forced to work on the paper rather than being excluded from school. He is immediately attracted to Lynda, and he establishes himself as an important member of the reporting team having been responsible for getting their first lead story. He usually has a range of one-liners, though is often criticised, particularly by Lynda, for excessive joking. However, Spike often consciously uses humour to lighten the tone, such as in "Monday-Tuesday" when he tries to cheer up Lynda after she feels responsible for David's suicide.
The character was originally written as English, until producer Hastie felt that an American character would enhance the chance of overseas sales. This meant that English-born Fletcher had to act in an American accent for all five years. Moffat says that he isn't "sure [that] lumbering Dexter with that accent was a smart move." The American accent had some fans surprised to learn that Fletcher is actually English.
Kenny Phillips (Lee Ross) is one of Lynda's (few) long-term friends and is her assistant editor in the first three series. Kenny is much calmer than Lynda, though is still dominated by her. Despite this, he is one of the few people able to stand up to Lynda, in his own quiet way. Although he identifies himself as "sweet", he is unlucky in love: Jenny (Sadie Frost), the girlfriend he meets in "How to Make a Killing", dumps him because he is too understanding. His secret passion for writing music is revealed at the end of series two, which was influenced by Ross' interests. Colin organizes and markets a concert for him, and the second series ends with Kenny performing "You Don't Feel For Me" (written by Ross himself). Lee Ross was only able to commit to the first six episodes of the 12-episode series three and four filming block because he was expecting a film role. Thus, by series four, Kenny has left for Australia.
Colin Mathews (Paul Reynolds) is the Thatcherite in charge of the paper's finances and advertising. He often wears loud shirts, and his various schemes have included marketing defective half-ping-pong balls (as 'pings'), exam revision kits and soda that leaves facial stains. Rosie Marcel and Claire Hearnden appear throughout the second series as Sophie and Laura, Colin's mischievous young helpers.
Julie Craig (Lucy Benjamin) is the head of the graphics team in series one. Moffat was impressed with Benjamin's performance, and expanded her character for the second series. However she had committed herself to roles in the LWT sitcom "Close to Home" and "Jupiter Moon", so the character was replaced by Sam. The character returns in the opening episode of series four as researcher on the Saturday morning show "Crazy Stuff". She arranges for Lynda and Spike to be reunited on live television, but the subsequent complaints about the violence (face slapping) results in Julie's firing. After giving Lynda some home truths, Julie replaces Kenny as the assistant editor for the final two series. She is a flirt, and, according to Lynda, was the "official pin-up at the last prison riot."
Sarah Jackson (Kelda Holmes) is the paper's lead writer. Although she is intelligent she gets stressed, such as during her interview for editorship of the "Junior Gazette". Her final episode, "Friendly Fire", shows the development of her friendship with Lynda, and how the latter saw her as a challenge when she first arrived to Norbridge High. Together they had established the underground school magazine: "Damn Magazine". Her first attempt to leave the newspaper to attend a writing course at the local college is thwarted by Lynda, but she eventually leaves in series five to attend university (mirroring the reason for Holmes' departure).
Frazer "Frazz" Davis (Mmoloki Chrystie) is one of Spike's co-delinquents forced into working on the paper, his initial main task writing the horoscopes. Frazz is initially portrayed as "intellectually challenged", such as not understanding the synonymous relationship between "the astrology column" and the horoscopes. Later episodes, however, show him to be devious, such as in "The Last Word: Part 2" when he stuns the gunman using a large array of flashguns.
Sam Black (Gabrielle Anwar) replaced Julie as the head of the graphics team in the second series. Sam is very fashion conscious and a flirt, and is surprised when an actor rejects her advances in favour of Sarah. Anwar had auditioned for the role of Lynda. (Many actors who unsuccessfully auditioned for main characters were invited back later for guest roles.) Moffat had expanded the role of Julie after the first series, but Lucy Benjamin was unavailable for series two. Sam, therefore, was basically the character of Julie under a different name, especially in her earlier episodes.
Charlie Creed-Miles, who played Danny McColl, the paper's photographer, became disenchanted with his minor role and left after the second series.
Toni "Tiddler" Tildesley (Joanna Dukes) is the junior member of the team, responsible for the junior section, "Junior Junior Gazette".
Billy Homer (Andy Crowe) was also a recurring character. A tetraplegic, he is very competent with computer networks, sometimes hacking into the school's database. His storylines are some of the first representations of the Internet on British television. Moffat felt that he was unable to sustain the character, and he appears only sporadically after the first series.
The main adults are deputy headmaster Bill Sullivan (Nick Stringer), maverick editor Matt Kerr (Clive Wood) and experienced "Gazette" reporter Chrissie Stewart (Angela Bruce).
Critical reaction was good, the show being particularly praised for the high quality and sophistication of the writing. The first episode was highly rated by "The Daily Telegraph", "The Guardian" and the "Times Educational Supplement". In his emphatic review, Paul Cornell writes that:
"Press Gang" has proved to be a series that can transport you back to how you felt as a teenager, sharper that the world but with as much angst as acute wit ... Never again can a show get away with talking down to children or writing sloppily for them. "Press Gang", possibly the best show in the world.
"Time Out" said that "this is quality entertainment: the kids are sharp, the scripts are clever and the jokes are good." The BBC's William Gallagher called it "pretty flawless", with "The Guardian" retrospectively commending the series. Others, such as Popmatters, have also commented upon how "the show is renowned ... for doing something kid television at the time didn't do (and, arguably, still doesn't): it refused to treat its audience like children." Comedian Richard Herring recalls watching the show as a recent graduate, commenting that it "was subtle, sophisticated and much too good for kids." According to Moffat, ""Press Gang" had gone over very, very well in the industry and I was being touted and romanced all the time." "Press Gang"'s complicated plots and structure would become a hallmark of Moffat's work, such as "Joking Apart" and "Coupling".
The series received a Royal Television Society award and a BAFTA in 1991 for "Best Children's Programme (Entertainment/Drama)". It was also nominated for two Writers' Guild of Great Britain awards, one "Prix Jeunesse" and the 1992 BAFTA for "Best Children's Programme (Fiction)". Julia Sawalha won the Royal Television Society Television Award for "Best Actor – Female" in 1993.
The show gained an even wider adult audience in an early evening slot when repeated on Sundays on Channel 4 in 1991. This crossover is reflected in the BBC's review for one of the DVDs when they say that ""Press Gang" is one of the best series ever made for kids. Or adults."
Nickelodeon showed nearly all of the episodes in a weekday slot in 1997. The final three episodes of the third series, however, were not repeated on the children's channel because of their content: "The Last Word" double episode with the gun siege, and "Holding On" with the repetition of the phrase "divorce the bitch". On the first transmission of the latter on 11 June 1991, continuity announcer Tommy Boyd warned viewers that it contained stronger than usual language. In 2007, itv.com made the first series, with the exception of "Page One", available to be viewed on its website free of charge.
Two episodes were broadcast on the CITV Channel on 5 and 6 January 2013, as part of a weekend of archive programmes to celebrate CITV's 30th anniversary.
"Press Gang" has attracted a cult following. A fanzine, "Breakfast at Czars", was produced in the 1990s. Edited by Stephen O'Brien, it contained a range of interviews with the cast and crew (notably with producer Hastie), theatre reviews and fanfiction. The first edition was included as a PDF file on the series two DVD, while the next three were on the series five disc. An e-mail discussion list has been operational since February 1997. Scholar Miles Booy observes that as Steven Moffat was himself a fan of "Doctor Who", he was able to ingrate the elements that TV fans appreciated, such as:
series finales with big cliff-hangers, rigorous continuity and a slew of running jokes and references which paid those who watched and rewatched the text to pull out its minutia. At the end of the second series, it is remarked that the news team have been following the Spike/Lynda romance 'since page one', and only the fans remembered – or discovered on reviewing – that "Page One" was the title of the first episode.
Booy points out that Chris Carter and Joss Whedon would be acclaimed for these elements in the 1990s (in the shows "The X-Files" and "Buffy the Vampire Slayer"), but "Moffat got there first, and ... in a children's TV slot. His was the first show to arrive with a British fan's sensibility to formal possibilities."
Two conventions were held in the mid-1990s in Liverpool. The events, in aid of the NSPCC, were each titled "Both Sides of the Paper" and were attended by Steven Moffat, Sandra Hastie, Dexter Fletcher, Paul Reynolds, Kelda Holmes and Nick Stringer. There were screenings of extended rough cuts of "A Quarter to Midnight" and "There Are Crocodiles", along with auctions of wardrobe and props. When Virgin Publishing prevented Paul Cornell from writing an episode guide, the "Press Gang Programme Guide", edited by Jim Sangster, was published by Leomac Publishing in 1995. Sangster, O'Brien and Adrian Petford collaborated with Network DVD on the extra features for the DVD releases.
Big Finish Productions, which produces audio plays based on sci-fi properties, particularly "Doctor Who", was named after the title of the final episode of the second series. Moffat himself is an ardent "Doctor Who" fan, and became the programme's lead writer and executive producer in 2009.
Moffat has integrated many references to secondary characters and locations in "Press Gang" in his later work. His 1997 sitcom "Chalk" refers to a neighbouring school as Norbridge High, run by Mr Sullivan, and to the characters Dr Clipstone ("UneXpected"), Malcolm Bullivant ("Something Terrible") and David Jefford ("Monday-Tuesday"/"There are Crocodiles"), a pupil who Mr Slatt (David Bamber) reprimands for masturbating. The name "Talwinning" appears as the name of streets in "A Quarter to Midnight" and "Joking Apart", and as the surname of the protagonist in "Dying Live", an episode of "Murder Most Horrid" written by Moffat, as well as the name of a librarian in his "Doctor Who" prose short story, "Continuity Errors", which was published in the 1996 Virgin Books anthology "Decalog 3: Consequences". The name "Inspector Hibbert", from "The Last Word", is given to the character played by Nick Stringer in "Elvis, Jesus and Jack", Moffat's final "Murder Most Horrid" contribution. Most recently, in the first episode of Moffat's "Jekyll", Mr Hyde (James Nesbitt) whistled the same tune as Lynda in "Going Back to Jasper Street".
A television film called "Deadline" was planned. It was set a few years after the series and aimed at a more adult audience. At one stage in 1992, series 4 was intended to be the last, and the movie was proposed as a follow-up. However, making of the film fell through when a fifth series was commissioned instead. The idea of the follow-up film was reconsidered several times during the 1990s, but every time fell through for various reasons.
In June 2007, "The Stage" reported that Moffat and Sawalha were interested in reviving "Press Gang". He said: "I would revive that like a shot. I would love to do a reunion episode—a grown-up version. I know Julia Sawalha is interested—every time I see her she asks me when we are going to do it. Maybe it will happen—I would like it to." "The Guardian" advocated the show's revival, arguing that "a revamped "Press Gang" with Moffat at the helm could turn the show from a cult into a national institution - a petri dish for young acting and writing talent to thrive. It's part of our TV heritage and definitely worthy of resuscitation."
At the Edinburgh International Television Festival in August 2008, Moffat told how he got drunk after the wrap party for "Jekyll" and pitched the idea of a "Press Gang" reunion special to the Head of Drama at the BBC, John Yorke. Despite Yorke's approval, the writer said that he was too busy with his work on "Doctor Who" to pursue the idea.
Several products have been released, specifically four novelisations, a video and the complete collection on DVD.
Four novelisations were written by Bill Moffat and published by Hippo Books/Scholastic in 1989 and 1990 based on the first two series. "First Edition" was based on the first three episodes, with "Public Exposure" covering "Interface" and "How to Make a Killing." The third book, "Checkmate", covered "Breakfast at Czar's", "Picking Up the Pieces" and "Going Back to Jasper Street", and reveals that Julie left the graphics department to go to art college. The fourth and final book, "The Date", is a novelisation of "Money, Love and Birdseed", "Love and the Junior Gazette" and "At Last a Dragon." Each book featured an eight-page photographic insert.
VCI Home Video, with Central Video, released one volume on VHS in 1990 featuring the first four episodes: "Page One", "Photo Finish", "One Easy Lesson" and "Deadline." The complete series of "Press Gang" is available on DVD (Region 2, UK) from Network DVD and in Australia (Region 4) from Force Entertainment. Four episodes of the second series DVD features an audio commentary by Julia Sawalha and Steven Moffat, in which the actress claims to remember very little about the show. Shooting scripts and extracts from Jim Sangster's programme guide (published by Leomac Publishing) are included in PDF format from series two onwards. The second series DVD set also contains the only existing copy, in offline edit form, of an unaired documentary filmed during production of series two. | https://en.wikipedia.org/wiki?curid=24639 |
Pope Innocent VII
Pope Innocent VII (1339 – 6 November 1406), born Cosimo de' Migliorati, was the Roman claimant to the headship of the Catholic Church from 17 October 1404 to his death. He was pope during the period of the Western Schism (1378–1417) and was opposed by Benedict XIII at Avignon. Despite good intentions, he did little to end the schism, owing to the troubled state of affairs in Rome and to his distrust of the sincerity of both Benedict XIII and King Ladislaus of Naples.
Migliorati was born to a simple family of Sulmona in the Abruzzi. He distinguished himself by his learning in both civil and Canon Law, which he taught for a time at Perugia and Padua. His teacher Giovanni da Legnano sponsored him at Rome, where Pope Urban VI (1378–89) took him into the Curia, sent him for ten years as papal collector to England, made him Bishop of Bologna in 1386 at a time of strife in that city, and Archbishop of Ravenna in 1387.
Pope Boniface IX made him cardinal-priest of S. Croce in Gerusalemme (1389) and sent him as legate to Lombardy and Tuscany in 1390. When Boniface IX died, there were present in Rome delegates from the rival pope at Avignon, Benedict XIII. The Roman cardinals asked these delegates whether their master would abdicate if the cardinals refrained from holding an election. When they were bluntly told that Benedict XIII would never abdicate (indeed he never did), the cardinals proceeded to an election. First, however, they each undertook a solemn oath to leave nothing undone, and, if need be, lay down the tiara to end the schism.
Migliorati was unanimously chosen – by eight cardinals – on 17 October 1404 and took the name of Innocent VII. There was a general riot by the Ghibelline party in Rome when news of his election got out, but peace was maintained with the aid of King Ladislaus of Naples, who hastened to Rome with a band of soldiers to assist the Pope in suppressing the insurrection. For his services the king extorted various concessions from Innocent VII, among them the promise that Ladislaus' claim to Naples, which had until very recently been challenged by Louis II of Anjou, would not be compromised. That suited Innocent VII, who had no intention of reaching an agreement with Avignon that would compromise his claims to the Papal States. Innocent VII was thus laid under embarrassing obligations, from which he later freed himself.
Innocent VII had made the great mistake of elevating his highly unsuitable nephew Ludovico Migliorati – a colorful condottiero formerly in the pay of Giangaleazzo Visconti of Milan – to be Captain of the Papal Militia, an act of nepotism that cost him dearly. Innocent further named him rector of Todi in April 1405. In August 1405, Ludovico Migliorati, using his power as head of the militia, seized eleven members of the obstreperous Roman partisans on their return from a conference with the Pope, had them murdered in his own house, and had their bodies thrown from the windows of the hospital of Santo Spirito into the street. There was an uproar. Pope, court and cardinals, together with the Migliorati faction, fled towards Viterbo. Ludovico took the occasion to drive off cattle that were grazing outside the walls, and the Papal party was pursued by furious Romans, losing thirty members, whose bodies were abandoned in the flight, including the Abbot of Perugia, struck down under the eyes of the Pope.
Innocent's protector Ladislaus sent a squad of troops to quell the riots, and by January 1406 the Romans again acknowledged Papal temporal authority, and Innocent VII felt able to return. But Ladislaus, not content with the former concessions, desired to extend his authority in Rome and the Papal States. To attain his end he aided the Ghibelline faction in Rome in their revolutionary attempts in 1405. A squad of troops which King Ladislaus had sent to the aid of the Colonna faction was still occupying the Castle of Sant'Angelo, ostensibly protecting the Vatican but making frequent sorties upon Rome and the neighbouring territory. Only after Ladislaus was excommunicated did he yield to the demands of the Pope and withdraw his troops.
Shortly after his accession in 1404, Innocent VII had taken steps to keep his oath by proclaiming a council to resolve the Western Schism. King Charles VI of France, theologians at the University of Paris such as Pierre d'Ailly and Jean Gerson, and King Rupert of Germany were all urging such a meeting. However, the troubles of 1405 furnished him with a pretext for postponing the meeting: he claimed that he could not guarantee safe passage to his rival Benedict XIII if he came to the council in Rome. Benedict, however, made it appear that the only obstacle to the end of the Schism was the unwillingness of Innocent VII. Innocent VII was unreceptive to the proposal that he as well as Benedict XIII should resign in the interests of peace.
Innocent died in Rome on 6 November 1406. It is said that Innocent VII planned the restoration of the Roman University, but his death brought an end to such talk. | https://en.wikipedia.org/wiki?curid=24642 |
Pope Innocent VIII
Pope Innocent VIII (1432 – 25 July 1492), born Giovanni Battista Cybo (or Cibo), was head of the Catholic Church and ruler of the Papal States from 29 August 1484 to his death. Son of the viceroy of Naples, Battista spent his early years at the Neapolitan court. He became a priest in the retinue of Cardinal Calandrini, half-brother to Pope Nicholas V (1447–55); Bishop of Savona under Pope Paul II; and, with the support of Cardinal Giuliano Della Rovere, a cardinal under Pope Sixtus IV. After intense politicking by Della Rovere, Cybo was elected pope in 1484. King Ferdinand I of Naples had supported Cybo's competitor, Rodrigo Borgia. The following year, Pope Innocent supported the barons in their failed revolt.
In March 1489, Cem, the captive brother of Bayezid II, the Sultan of the Ottoman Empire, came into Innocent's custody. Viewing his brother as a rival, the Sultan paid the pope not to set him free. Whenever the Sultan threatened war against the Christian Balkans, Innocent threatened to release the brother, who later died while accompanying a military expedition of King Charles VIII of France against Naples.
Giovanni Battista Cybo (or Cibo) was born in Genoa of Greek ancestry, the son of Arano Cybo or Cibo (c. 1375 – c. 1455) and his wife Teodorina de Mari (c. 1380 – ?), of an old Genoese family. Arano Cybo was viceroy of Naples and then a senator in Rome under Pope Calixtus III (1455–58). Giovanni Battista's early years were spent at the Neapolitan court. While in Naples he was appointed a Canon of the Cathedral of Capua, and was given the Priory of S. Maria d'Arba in Genoa. After the death of King Alfonso, friction between Giovanni Battista and the Archbishop of Genoa led him to resign his Canonry and to go to Padua, and then to Rome, for his education.
In Rome he became a priest in the retinue of cardinal Calandrini, half-brother to Pope Nicholas V (1447–55). In 1467, he was made Bishop of Savona by Pope Paul II, but exchanged this see in 1472 for that of Molfetta in south-eastern Italy. In 1473, with the support of Giuliano Della Rovere, later Pope Julius II, he was made cardinal by Pope Sixtus IV, whom he succeeded on 29 August 1484 as Pope Innocent VIII.
The papal conclave of 1484 was rife with factions, while gangs rioted in the streets. In order to prevent the election of the Venetian Cardinal Barbo, Camerlengo of the Sacred College of Cardinals, the Dean of the College of Cardinals, Cardinal Giuliano della Rovere (nephew of the late Pope), and the Vice-Chancellor, Cardinal Borgia, visited a number of cardinals on the evening before the election, after the cardinals had retired for the night, and secured their votes with the promise of various benefices.
It was claimed that Cardinal della Rovere met secretly with Cardinal Marco Barbo, offering to secure him the additional votes needed to become pope in exchange for the promise of a residence, but Barbo refused for fear that such a bargain would render the conclave invalid through simony. Cardinal della Rovere then met with Borgia, who disliked Barbo and wished to block his election, and offered to turn their votes over to Cibò, promising benefits to the cardinals who did so.
Shortly after his coronation Innocent VIII addressed a fruitless summons to Christendom to unite in a crusade against the Turks. A protracted conflict with King Ferdinand I of Naples was the principal obstacle. Ferdinand's oppressive government led in 1485 to a rebellion of the aristocracy, known as the Conspiracy of the Barons, which included Francesco Coppola and Antonello Sanseverino of Salerno and was supported by Pope Innocent VIII. Innocent excommunicated him in 1489 and invited King Charles VIII of France to come to Italy with an army and take possession of the Kingdom of Naples, a disastrous political event for the Italian peninsula as a whole. The immediate conflict was not ended until 1494, after Innocent VIII's death.
Bayezid II ruled as Sultan of the Ottoman Empire from 1481 to 1512. His rule was contested by his brother Cem, who sought the support of the Mamluks of Egypt. Defeated by his brother's armies, Cem sought protection from the Knights of St. John in Rhodes. Prince Cem offered perpetual peace between the Ottoman Empire and Christendom. However, the sultan paid the Knights a large amount to keep Cem captive. Cem was later sent to the castle of Pierre d'Aubusson in France. Sultan Bayezid sent a messenger to France and requested Cem to be kept there; he agreed to make an annual payment in gold for his brother's expenses.
In March 1489, Cem was transferred to the custody of Innocent VIII. Cem's presence in Rome was useful because whenever Bayezid intended to launch a military campaign against the Christian nations of the Balkans, the Pope would threaten to release his brother. In exchange for maintaining the custody of Cem, Bayezid paid Innocent VIII 120,000 crowns, a relic of the Holy Lance and an annual fee of 45,000 ducats. Cem died in Capua on 25 February 1495 on a military expedition under the command of King Charles VIII of France to conquer Naples.
At the request of the German inquisitor Heinrich Kramer, Innocent VIII issued the papal bull "Summis desiderantes" (5 December 1484), which supported Kramer's investigations against magicians and witches.
The bull was written in response to the request of Dominican Heinrich Kramer for explicit authority to prosecute witchcraft in Germany, after he was refused assistance by the local ecclesiastical authorities, who disputed his authority to work in their dioceses. Some scholars view the bull as "clearly political", motivated by jurisdictional disputes between the local German Catholic priests and clerics from the Office of the Inquisition who answered more directly to the pope.
Nonetheless, the bull failed to ensure that Kramer obtained the support he had hoped for, causing him to retire and to compile his views on witchcraft into his book "Malleus Maleficarum", which was published in 1487. Kramer would later claim that witchcraft was to blame for bad weather. Both the papal letter appended to the work and the supposed endorsement of Cologne University for it are problematic. The letter of Innocent VIII is not an approval of the book to which it was appended, but rather a charge to inquisitors to investigate diabolical sorcery and a warning to those who might impede them in their duty, that is, a papal letter in the by then conventional tradition established by John XXII and other popes through Eugenius IV and Nicholas V (1447–55).
In 1487, Innocent confirmed Tomas de Torquemada as Grand Inquisitor of Spain. Also in 1487, Innocent issued a bull denouncing the views of the Waldensians (Vaudois), offering plenary indulgence to all who should engage in a Crusade against them. Alberto de' Capitanei, archdeacon of Cremona, responded to the bull by organizing a crusade to fulfill its order and launched an offensive in the provinces of Dauphiné and Piedmont. Charles I, Duke of Savoy eventually interfered to save his territories from further confusion and promised the Vaudois peace, but not before the offensive had devastated the area and many of the Vaudois fled to Provence and south to Italy.
The noted Franciscan theologian Angelo Carletti di Chivasso, whom Innocent in 1491 appointed as Apostolic Nuncio and Commissary, conjointly with the Bishop of Mauriana, was involved in reaching the peaceful agreement between Catholics and Waldensians.
In 1486, Innocent VIII was persuaded that at least thirteen of the 900 theses of Giovanni Pico della Mirandola were heretical, and the book containing the theses was interdicted.
In Rome, he ordered the Belvedere of the Vatican to be built, intended for summer use, on an unarticulated slope above the Vatican Palace. His successor would later turn the building into the Cortile del Belvedere. In season, he hunted at Castello della Magliana, which he enlarged. Constantly confronted with a depleted treasury, he resorted to the objectionable expedient of creating new offices and granting them to the highest bidders. The fall of Granada in January 1492 was celebrated in the Vatican, and Innocent granted Ferdinand II of Aragon the epithet "Catholic Majesty".
Minnich (2005) notes that the attitude of Renaissance popes towards slavery, a common institution in contemporary cultures, varied. Minnich states that those who allowed the slave trade did so in the hope of gaining converts to Christianity. In the case of Innocent he permitted trade with Barbary merchants in which foodstuffs would be given in exchange for slaves who could then be converted to Christianity.
King Ferdinand of Aragon gave Innocent 100 Moorish slaves, who were shared out among favoured cardinals. The slaves of Innocent were called "moro", meaning "dark-skinned man", in contrast to black slaves, who were called "moro nero".
The pope named two saints during his pontificate: Catherine of Vadstena (1484) and Leopold III (1485).
Innocent VIII named eight cardinals in a single consistory, held on 9 March 1489. Three of them were created "in pectore", one being Giovanni de' Medici, the future Pope Leo X; the names of two were released after the pope's death to ensure that they could vote in the 1492 conclave.
By July 1492, Innocent had become severely emaciated. To Valori, he had become 'an inert mass of flesh, incapable of assimilating any nourishment but a few drops of milk from a young woman's breast'.
He then developed a fever. What happened between then and his death on 25 July is unknown, although antisemites falsely accused his physician, Giacomo di San Genesio, of having performed a pseudo blood transfusion that contributed to his death.
A mysterious inscription on his tomb in Saint Peter's in Rome states: “Nel tempo del suo Pontificato, la gloria della scoperta di un nuovo mondo” (transl. "During his Pontificate, the glory of the discovery of a new world."). He died just over a week before the departure of Christopher Columbus on his first documented voyage across the Atlantic, which has raised speculation that Columbus actually travelled earlier and reached the Americas before the accepted date of 12 October 1492. The Italian journalist and writer Ruggero Marino argues this thesis in his book "Cristoforo Colombo e il Papa tradito" (transl. "Christopher Columbus and the betrayed Pope"), having studied Columbus's papers for over 25 years.
Innocent had two illegitimate children born before he entered the clergy "towards whom his nepotism had been as lavish as it was shameless". In 1487 he married his elder son Franceschetto Cybo (d. 1519) to Maddalena de' Medici (1473–1528), the daughter of Lorenzo de' Medici, who in return obtained the cardinal's hat for his thirteen-year-old son Giovanni, later Pope Leo X. His daughter Teodorina Cybo married Gerardo Usodimare and had a daughter. Savonarola chastised him for his worldly ambitions.
His grandnephew was Bindo Altoviti, one of the most influential bankers of his time and patron of the arts, being friends with Raphael and Michelangelo. | https://en.wikipedia.org/wiki?curid=24643 |
Pope Innocent IX
Pope Innocent IX (20 July 1519 – 30 December 1591), born Giovanni Antonio Facchinetti, was head of the Catholic Church and ruler of the Papal States from 29 October to 30 December 1591.
Prior to his short papacy, he had been a canon lawyer, diplomat, and chief administrator during the reign of Pope Gregory XIV (r. 1590–1591).
Giovanni Antonio Facchinetti, whose family came from Crodo, in the diocese of Novara, northern Italy, was born in Bologna on 20 July 1519. He was the son of Antonio Facchinetti and Francesca Cini.
He studied at the University of Bologna, which was pre-eminent in jurisprudence, and obtained a doctorate there in both civil and canon law in 1544. He was ordained to the priesthood on 11 March 1544 and was appointed a canon of the church of Saints Gervasio and Protasio of Domodossola in 1547.
He travelled to Rome, where he became secretary to Cardinal Nicolò Ardinghelli before entering the service of Cardinal Alessandro Farnese, brother of the Duke of Parma, grandson of Pope Paul III (1534–1549), and one of the great patrons of the time. The cardinal, who was Archbishop of Avignon, sent Facchinetti there as his ecclesiastical representative and subsequently recalled him to the management of his affairs at Parma, where he was acting governor of the city from 1556 to 1558. He was also made Referendary of the Apostolic Signatura in 1559 and held that post for a year.
In 1560, Facchinetti was named as the Bishop of Nicastro, in Calabria, and in 1562 was present at the Council of Trent. He was the first bishop to actually reside in the diocese in three decades. Pope Pius V (1566–1572) sent him as papal nuncio to Venice in 1566 to further the papal alliance with Spain and Venice against the Turks, which ultimately resulted in the victory of Lepanto in 1571. He was recalled from Venice in 1572 and was made the Prior Commendatario of S. Andrea di Carmignano in the diocese of Padua from 1576 to 1587.
He was named Titular Latin Patriarch of Jerusalem in 1572, and in 1575 relinquished his see in order to pursue his career in Rome and for reasons of health. He occupied the patriarchate until he was made a cardinal.
Pope Gregory XIII made him a cardinal on 12 December 1583 as the Cardinal-Priest of Santi Quattro Coronati and he was to receive the red hat and title on 9 January 1584. Pope Gregory XIV made him the Prefect of the Apostolic Signatura in 1591.
Even before Pope Gregory XIV died, Spanish and anti-Spanish factions were electioneering for the next pope. The high-handed interference of Philip II of Spain (r. 1556–1598) at the previous conclave, where he had barred all but seven cardinals, was not forgotten. This time the Spanish party in the College of Cardinals did not go so far, but they still controlled a majority, and after a quick conclave they raised Facchinetti to the papal chair as Pope Innocent IX. It took three ballots to elect him: Facchinetti received 24 votes on 28 October, short of election; a second ballot on 29 October gave him 28 votes; and the third saw him prevail.
The cardinal protodeacon Andreas von Austria crowned Innocent IX as pontiff on 3 November 1591. He elevated two cardinals to the cardinalate in the only papal consistory of his papacy on 18 December 1591.
Mindful of the origin of his success, Innocent IX supported, during his two months' pontificate, the cause of Philip II and the Catholic League against Henry IV of France (r. 1589–1610) in the French Wars of Religion (1562–1598), where a papal army was in the field. His death, however, prevented the realisation of Innocent IX's schemes.
His grandnephew Giovanni Antonio Cardinal Facchinetti de Nuce, Jr., was one of two cardinals appointed during the weeks of Innocent IX's pontificate. A later member of the Cardinalate was his great-grandnephew Cesare Facchinetti (made a Cardinal in 1643).
On 18 December, despite being ill, the pope made a pilgrimage to Rome's seven pilgrimage churches and caught a cold as a result. The cold developed into a heavy cough, combined with a fever, which led to his death.
Innocent IX died in the early morning of 30 December 1591. He was buried in the Vatican grottoes in a simple tomb. | https://en.wikipedia.org/wiki?curid=24644 |
Pope Innocent X
Pope Innocent X (6 May 1574 – 7 January 1655), born Giovanni Battista Pamphilj (or Pamphili), was head of the Catholic Church and ruler of the Papal States from 15 September 1644 to his death in 1655.
Born in Rome of a family from Gubbio in Umbria who had come to Rome during the pontificate of Pope Innocent IX, Pamphili was trained as a lawyer and graduated from the Collegio Romano. He followed a conventional "cursus honorum", following his uncle Girolamo Pamphili as auditor of the Rota, and like him, attaining the position of cardinal-priest of Sant'Eusebio, in 1629. Before becoming pope, Pamphili served as a papal diplomat to Naples, France, and Spain.
Pamphili succeeded Pope Urban VIII (1623–44) on 15 September 1644 as Pope Innocent X, after a contentious papal conclave that featured a rivalry between French and Spanish factions.
Innocent X was one of the most politically shrewd pontiffs of the era, greatly increasing the temporal power of the Holy See. Major political events in which he was involved included the English Civil War, conflicts with French church officials over financial fraud issues, and hostilities with the Duchy of Parma related to the First War of Castro. In terms of theological events, Innocent X issued a papal bull condemning the beliefs of Jansenism.
Giovanni Battista Pamphili was born in Rome on 6 May 1574, the son of Camillo Pamphili, of the Roman Pamphili family. The family, originally from Gubbio, was directly descended from Pope Alexander VI.
In 1594 he graduated from the Roman College and followed a conventional path through the ranks of the Catholic Church. He served as a Consistorial lawyer in 1601, and in 1604 succeeded his uncle, Cardinal Girolamo Pamphili, as auditor of the Roman Rota, the ecclesiastical appellate tribunal. He was also a canonist of the Sacred Apostolic Penitentiary, a second tribunal.
In 1623 Pope Gregory XV sent him as apostolic nuncio (ecclesiastical diplomat) to the court of the Kingdom of Naples. In 1625 Pope Urban VIII sent him to accompany his nephew, Francesco Barberini, whom he had accredited as nuncio, first to France and then Spain. In January 1626, Pamphili was appointed titular Latin Patriarch of Antioch.
In reward for his labors, in May 1626 Giovanni Battista was made nuncio to the court of Philip IV of Spain. The position led to a lifelong association with the Spaniards which was of great use during the papal conclave of 1644. He was created Cardinal "in pectore" in 1627 and published in 1629.
The 1644 conclave for the election of a successor to Pope Urban VIII was long and contentious, lasting from 9 August to 15 September. A large French faction led by Urban VIII's nephews objected to the Spanish candidate as an enemy of Cardinal Mazarin, who guided French policy. They put up their own candidate, Giulio Cesare Sacchetti, but could not establish enough support for him, and agreed to Cardinal Pamphili as an acceptable compromise even though he had served as legate to Spain. Mazarin, bearing the French veto of Pamphili, arrived too late; the election was already accomplished.
Pamphili chose to be called Innocent X, and soon after his accession he initiated legal action against the Barberini for misappropriation of public funds. The brothers Francesco Barberini, Antonio Barberini and Taddeo Barberini fled to Paris, where they found a powerful protector in Cardinal Mazarin. Innocent X confiscated their property, and on 19 February 1646, issued a papal bull decreeing that all cardinals who might leave the Papal States for six months without express papal permission would be deprived of their benefices and eventually of their cardinalate itself. The French parliament declared the papal ordinance void in France, but Innocent X did not yield until Mazarin prepared to send troops to Italy. Henceforth the papal policy towards France became more friendly, and somewhat later the Barberini were rehabilitated when the son of Taddeo Barberini, Maffeo Barberini, married Olimpia Giustiniani, a niece of Innocent X.
In 1653, Innocent X, with the "Cum occasione" papal bull, condemned five propositions of Jansenius's "Augustinus", inspired by St. Augustine, as heretical and close to Lutheranism. This led to the formulary controversy, Blaise Pascal's writing of the "Lettres Provinciales", and finally to the razing of the Jansenist convent of Port-Royal and the subsequent dissolving of its community.
The death of Pope Urban VIII is said to have been hastened by his chagrin at the result of the First War of Castro, a war he had undertaken against Odoardo Farnese, the duke of Parma. Hostilities between the papacy and the Duchy of Parma resumed in 1649, and forces loyal to Pope Innocent X destroyed the city of Castro on 2 September 1649.
Innocent X objected to the conclusion of the Peace of Westphalia, which his nuncio, Fabio Chigi, protested in vain. In 1650 Innocent X issued the brief "Zelo Domus Dei" against the Peace of Westphalia, and backdated it to 1648 in order to preserve potential claims for confiscated land and property. The protests were ignored by the European powers.
During the Civil War (1642–49) in England and Ireland, Innocent X strongly supported the independent Confederate Ireland, over the objections of Mazarin and the former English Queen and at that time Queen Mother, Henrietta Maria, exiled in Paris. The pope sent Giovanni Battista Rinuccini, archbishop of Fermo, as a special nuncio to Ireland. He arrived at Kilkenny with a large quantity of arms including 20,000 pounds of gunpowder, and a very large sum of money. Rinuccini hoped he could discourage the Confederates from allying with Charles I and the Royalists in the English Civil War and instead encourage them towards the foundation of an independent Catholic-ruled Ireland.
At Kilkenny, Rinuccini was received with great honours. He asserted in his Latin declaration that the object of his mission was to sustain the king but, above all, to rescue the Catholic people of Ireland from pains and penalties by securing the free and public exercise of the Catholic religion and the restoration of the churches and church property. In the end, Oliver Cromwell reconquered Ireland for the Parliamentarian side, and Rinuccini returned to Rome in 1649, after four fruitless years.
Olimpia Maidalchini was married to Innocent X's late brother, and was believed to be his mistress because of her influence over him in matters of promotion and politics. This state of affairs was alluded to in the ninth edition of the "Encyclopædia Britannica" (1880).
During the papacy of Pope Urban VIII, the future Innocent X was the pope's most significant rival among the College of Cardinals. Antonio Barberini, Urban VIII's brother, was a cardinal who had begun his career with the Capuchin brothers. About 1635, at the height of the Thirty Years' War in Germany, in which the Papacy was intricately involved, Cardinal Antonio commissioned Guido Reni's painting of the Archangel Michael, trampling Satan, who bears the recognizable features of Innocent X. This bold political artwork still hangs in a side chapel of the Capuchin friars' Church of the Conception (Santa Maria della Concezione) in Rome. A legend related to the painting is that the dashing and high-living artist, Guido Reni, had been insulted by rumours he thought were circulated by Cardinal Pamphili.
When, a few years later, Pamphili was raised to the papacy, other Barberini relatives fled to France on embezzlement accusations. Despite this, the Capuchins held fast to their chapel altarpiece.
Innocent was responsible for raising the Colegio de Santo Tomás de Nuestra Señora del Santísimo Rosario to the rank of a university. It is now the University of Santo Tomás in Manila, the oldest existing university in Asia.
In 1650, Innocent X celebrated a Jubilee. He embellished Rome with inlaid floors and bas-relief in Saint Peter's, erected Bernini's "Fontana dei Quattro Fiumi" in Piazza Navona, the Pamphili stronghold in Rome, and ordered the construction of Palazzo Nuovo at the Campidoglio.
Innocent X is also the subject of "Portrait of Innocent X", a famous painting by Diego Velázquez housed in the family gallery of Palazzo Doria (Galleria Doria Pamphili). This portrait inspired the "Screaming Pope" paintings by 20th-century painter Francis Bacon, the most famous of which is Bacon's "Study after Velázquez's Portrait of Pope Innocent X".
Innocent X died on 7 January 1655, and the following April was succeeded by Pope Alexander VII. | https://en.wikipedia.org/wiki?curid=24645
Property law
Property law is the area of law that governs the various forms of ownership in real property (land) and personal property. Property refers to legally protected claims to resources, such as land and personal property, including intellectual property. Property can be exchanged through contract law, and if property rights are violated, one may sue under tort law to protect them.
The concept, idea or philosophy of property underlies all property law. In some jurisdictions, historically all property was owned by the monarch and it devolved through feudal land tenure or other feudal systems of loyalty and fealty.
Though the Napoleonic code was among the first government acts of modern times to introduce the notion of absolute ownership into statute, protection of personal property rights was present in medieval Islamic law and jurisprudence, and in more feudalist forms in the common law courts of medieval and early modern England.
The word "property", in everyday usage, refers to an object (or objects) owned by a person—a car, a book, or a cellphone—and the relationship the person has to it. In law, the concept acquires a more nuanced rendering. Factors to consider include the nature of the object, the relationship between the person and the object, the relationship between a number of people in relation to the object, and how the object is regarded within the prevailing political system. Most broadly and concisely, property in the legal sense refers to the rights of people in or over certain objects or things.
Non-legally recognized or documented property rights are known as informal property rights. These informal property rights are not codified or documented, but are recognized among local residents to varying degrees.
In capitalist societies with market economies, much property is owned privately by persons or associations rather than by the government. Several general justifications have been given for private property rights, and arguments in favor of limiting those rights have also been raised.
In his "Second Treatise on Government", English philosopher John Locke asserted the right of an individual to own one part of the world, when, according to the Bible, God gave the world to all humanity in common. He claimed that although persons belong to God, they own the fruits of their labor. When a person works, that labor enters into the object. Thus, the object becomes the property of that person. However, Locke conditioned property on the Lockean proviso, that is, "there is enough, and as good, left in common for others".
U.S. Supreme Court Justice James Wilson undertook a survey of the philosophical grounds of American property law in 1790 and 1791. He proceeds from two premises: “Every crime includes an injury: every injury includes a violation of a right.” (Lectures, III, ii.) The government's role in protecting property depends upon an idea of right. Wilson believes that "man has a natural right to his property, to his character, to liberty, and to safety.” He also indicates that “the primary and principal object in the institution of government... was... to acquire a new security for the possession or the recovery of those rights”.
Wilson states that: “Property is the right or lawful power, which a person has to a thing.” He then divides the right into three degrees: possession, the lowest; possession and use; and possession, use, and disposition, the highest. Further, he states: “Useful and skillful industry is the soul of an active life. But industry should have her just reward. That reward is property, for of useful and active industry, property is the natural result.” From this simple reasoning he presents the conclusion that exclusive property, as opposed to communal property, is to be preferred. Wilson does, however, give a survey of communal property arrangements in history, not only in colonial Virginia but also in ancient Sparta.
There are two main views on the right to property: the traditional view and the bundle of rights view. Traditionalists believe that there is a core, inherent meaning in the concept of property, while the bundle of rights view holds that the property owner has only a bundle of permissible uses of the property. The two views exist on a spectrum, and the difference may be a matter of focus and emphasis.
William Blackstone, in his "Commentaries on the Laws of England," wrote that the essential core of property is the right to exclude. That is, the owner of property must be able to exclude others from the thing in question, even though the right to exclude is subject to limitations. By implication, the owner can use the thing, unless another restriction, such as zoning law, prevents it. Other traditionalists argue that three main rights define property: the right to exclusion, use and transfer.
An alternative view of property, favored by legal realists, is that property simply denotes a bundle of rights defined by law and social policy. Which rights are included in the bundle known as property rights, and which bundles are preferred to which others, is simply a matter of policy. Therefore, a government can prevent the building of a factory on a piece of land, through zoning law or criminal law, without damaging the concept of property. The "bundle of rights" view was prominent in academia in the 20th century and remains influential today in American law.
Different parties may claim a competing interest in the same property by mistake or by fraud, with the claims being inconsistent with each other. For example, the party creating or transferring an interest may have a valid title, but may intentionally or negligently create several interests wholly or partially inconsistent with each other. A court resolves the dispute by adjudicating the priorities of the interests.
Property rights are rights over things enforceable against all other persons. By contrast, contractual rights are rights enforceable against particular persons. Property rights may, however, arise from a contract; the two systems of rights overlap. In relation to the sale of land, for example, two sets of legal relationships exist alongside one another: the contractual right to sue for damages, and the property right exercisable over the land. More minor property rights may be created by contract, as in the case of easements, covenants, and equitable servitudes.
A separate distinction is evident where the rights granted are insufficiently substantial to confer on the nonowner a definable interest or right in the thing. The clearest example of these rights is the license. In general, even if licenses are created by a binding contract, they do not give rise to property interests.
Property rights are also distinguished from personal rights. Practically all contemporary societies acknowledge this basic ontological and ethical distinction. In the past, groups lacking political power have often been disqualified from the benefits of property. In an extreme form, this has meant that people have become "objects" of property—legally "things" or chattels (see slavery). More commonly, marginalized groups have been denied legal rights to own property. These include Jews in England and married women in Western societies until the late 19th century.
The dividing line between personal rights and property rights is not always easy to draw. For instance, is one's reputation property that can be commercially exploited by affording property rights to it? The question of the proprietary character of personal rights is particularly relevant in the case of rights over human tissue, organs and other body parts.
Women's rights to control their own bodies have at some times and in some places been subordinated to other people's control over the fetus; for example, governments have intervened in the conditions of birthing by prohibiting or requiring caesarean sections. Whether and how a woman becomes pregnant or carries a pregnancy to term is also subject to laws mandating or forbidding abortion, or restricting access to birth control. A woman's right to control her body during pregnancy or possible pregnancy – what work she does, what food or substances she ingests, what other activities she engages in – has also frequently been subject to restrictions by many other parties; in response, a number of countries have passed laws banning pregnancy discrimination. English judges have recently made the point that such women lack the right to exclusive control over their own bodies, formerly considered a fundamental common-law right.
In the United States, a "quasi-property" interest has been explicitly declared in the dead body. Also in the United States, it has been recognised that people have an alienable proprietary "right of publicity" over their "persona". The patent/patenting of biotechnological processes and products based on human genetic material may be characterised as creating property in human life.
A particularly difficult question is whether people have rights to intellectual property developed by others from their body parts. In the pioneering case on this issue, the Supreme Court of California held in "Moore v. Regents of the University of California" (1990) that individuals do not have such a property right.
Property law is characterised by a great deal of historical continuity and technical terminology. The basic distinction in common law systems is between real property (land) and personal property (chattels).
Before the mid-19th century, the principles governing the transfer of real property and personal property on an intestacy were quite different. Though this dichotomy does not have the same significance anymore, the distinction is still fundamental because of the essential differences between the two categories. An obvious example is the fact that land is immovable, and thus the rules that govern its use must differ. A further reason for the distinction is that legislation is often drafted employing the traditional terminology.
The division of land and chattels has been criticised as an unsatisfactory basis for categorising the principles of property law, since it concentrates attention not on the proprietary interests themselves but on the objects of those interests. Moreover, in the case of fixtures, chattels which are affixed to or placed on land may become part of the land.
Real property is generally sub-classified into corporeal hereditaments (tangible interests in land, such as estates) and incorporeal hereditaments (intangible interests in land, such as easements).
Although a tenancy involves rights to real property, a leasehold estate is typically considered personal property, being derived from contract law. In the civil law system, the distinction is between movable and immovable property, with movable property roughly corresponding to personal property and immovable property corresponding to real estate or real property, along with the associated rights and obligations thereon.
The concept of possession developed from a legal system whose principal concern was to avoid civil disorder. The general principle is that a person in possession of land or goods, even as a wrongdoer, is entitled to take action against anyone interfering with the possession unless the person interfering is able to demonstrate a superior right to do so.
In England, the Torts (Interference with Goods) Act 1977 has significantly amended the law relating to wrongful interference with goods and abolished some longstanding remedies and doctrines.
The term "transfer of property" generally means an act by which a living person conveys property, in present or in future, to one or more other living persons, or to himself and one or more other living persons. To transfer property is to perform such an act.
The most common method of acquiring an interest in property is as the result of a consensual transaction with the previous owner, for example, a sale or a gift. Dispositions by will may also be regarded as consensual transactions, since the effect of a will is to provide for the distribution of the deceased person's property to nominated beneficiaries. A person may also obtain an interest in property under a trust established for his or her benefit by the owner of the property.
It is also possible for property to pass from one person to another independently of the consent of the property owner. For example, this occurs when a person dies intestate, goes bankrupt, or has the property taken in execution of a court judgment.
Historically, leases served many purposes, and the regulation varied according to intended purposes and the economic conditions of the time. Leaseholds, for example, were mainly granted for agriculture until the late eighteenth century and early nineteenth century, when the growth of cities made the leasehold an important form of landholding in urban areas.
The modern law of landlord and tenant in common law jurisdictions retains the influence of the common law and, particularly, the "laissez-faire" philosophy that dominated the law of contract and the law of property in the 19th century. With the growth of consumerism, the law of consumer protection recognised that common law principles assuming equal bargaining power between parties may cause unfairness. Consequently, reformers have emphasised the need to assess residential tenancy laws in terms of protection they provide to tenants. Legislation to protect tenants is now common. | https://en.wikipedia.org/wiki?curid=24647 |
Plea
In legal terms, a plea is simply an answer to a claim made by someone in a criminal case under common law using the adversarial system. Colloquially, a plea has come to mean the assertion by a defendant at arraignment, or otherwise in response to a criminal charge, of whether that person pleads guilty, not guilty, "nolo contendere" (a.k.a. no contest), no case to answer (in the United Kingdom), or an Alford plea (in the United States).
The concept of the plea is one of the major differences between criminal procedure under common law and procedure under the civil law system. Under common law, a defendant who pleads guilty is automatically convicted and the remainder of the trial is used to determine the sentence. This produces a system known as plea bargaining, in which defendants may plead guilty in exchange for a more lenient punishment. In civil law jurisdictions, a confession by the defendant is treated like any other piece of evidence, and a full confession does not prevent a full trial from occurring or relieve the prosecutor from having to present a case to the court.
The most common types of plea are "guilty" and "not guilty".
Pleading guilty typically results in a more lenient punishment for the defendant; it is thus a type of mitigating factor in sentencing. In a plea bargain a defendant makes a deal with the prosecution or court to plead guilty in exchange for a more lenient punishment, or for related charges against them to be dropped. A "blind plea" is a guilty plea entered with no plea agreement in place. Plea bargains are particularly common in the United States. Other countries use a more limited form of plea bargaining. In the United Kingdom and Germany, guidelines state that only the timing of the guilty plea can affect the reduction in the punishment, with an earlier plea resulting in a greater reduction.
In the United States, a "nolo contendere" (no contest) plea is one in which the defendant neither admits nor denies the offense. It has the same immediate effect as a guilty plea, in that the case proceeds without a trial to determine the defendant's guilt.
These are pleas which claim that a case cannot proceed for some reason. They are so called because, rather than being an answer to the question of guilt or innocence, they are a claim that the matter of guilt or innocence should not be considered.
They include "autrefois convict" and "autrefois acquit" – pleas that the defendant has already been convicted or acquitted of the same offence – and the plea of pardon.
A defendant who refuses to enter a plea is usually interpreted as giving a plea of not guilty; the Federal Rules of Criminal Procedure, for instance, state, "If a defendant refuses to enter a plea or if a defendant organization fails to appear, the court must enter a plea of not guilty." Similarly, if a defendant attempts to enter an unorthodox plea (a "creative plea"), this will usually be interpreted as a plea of not guilty. One example of this is a defendant accused of a crime committed while protesting nuclear power, who gave his plea as "I plead for the beauty that surrounds us".
Until 1772, English law provided that if a defendant refused to plead guilty or not guilty, the trial could not take place. Such defendants could be subjected to "peine forte et dure" (torture by pressing) until they entered a plea, and some died under it. The last recorded instance of this was in 1741.
A defendant who enters a plea of guilty must do so, in the phraseology of a 1938 Supreme Court case, "Johnson v. Zerbst", "knowingly, voluntarily and intelligently". The burden is on the prosecution to prove that all waivers of the defendant's rights complied with due process standards. Accordingly, in cases of all but the most minor offences, the court or the prosecution (depending upon local custom and the presiding judge's preference) will engage in a plea colloquy wherein they ask the defendant a series of rote questions about the defendant's knowledge of his rights and the voluntariness of the plea. Typically the hearing on the guilty plea is transcribed by a court reporter and the transcript is made a part of the permanent record of the case in order to preserve the conviction's validity from being challenged at some future time. "Voluntary" has been described as "an elusive term which has come to mean not induced by 'improper' inducements, such as bribing or physical violence, but not including the inducements normally associated with charge and sentence bargaining (except for inducements involving 'overcharging' by prosecutors)." "Intelligent" has been described as "also an elusive term, meaning that the defendant knows his rights, the nature of the charge to which he is pleading, and the consequences of his plea."
Virtually all jurisdictions hold that defense counsel need not discuss with defendants the collateral consequences of pleading guilty, such as consecutive sentencing or even treatment as an aggravating circumstance in an ongoing capital prosecution. However, the Supreme Court recognized an important exception in "Padilla v. Kentucky" (2010), in which the Court held that defense counsel is obligated to inform defendants of the potential immigration consequences of a guilty plea. Thus a defendant who is not advised of immigration consequences may have an ineffective assistance of counsel argument.
In the U.S. federal system, the court must also satisfy itself that there is a factual basis for the guilty plea. However, this safeguard may not be very effective, because the parties, having reached a plea agreement, may be reluctant to reveal any information that could disturb the agreement. When a plea agreement has been made, the judge's factual basis inquiry is usually perfunctory, and the standard for finding that the plea is factually based is very low.
Other special pleas used in criminal cases include the plea of mental incompetence; the plea challenging the jurisdiction of the court over the defendant's person; the plea in bar, attacking the jurisdiction of the court over the crime charged; and the plea in abatement, which is used to address procedural errors in bringing the charges against the defendant that are not apparent on the "face" of the indictment or other charging instrument. Special pleas in federal criminal cases have been abolished; defenses formerly raised by special plea are now raised by motion to dismiss.
A conditional plea is one where the defendant pleads guilty to the offense, but specifically reserves the right to appeal certain aspects of the charges (for example, that the evidence was illegally obtained).
In "United States v. Binion", malingering or feigning illness during a competency evaluation was held to be obstruction of justice and led to an enhanced sentence. Although the defendant had pleaded guilty, he was not awarded a reduction in sentence because the feigned illness was considered to mean that he was not accepting responsibility for his illegal behavior.
A plea in mitigation is a term used during criminal law proceedings in many Commonwealth countries. It typically involves a lawyer telling a judge of extenuating circumstances that could result in a lesser sentence for an offender. | https://en.wikipedia.org/wiki?curid=24649 |
Pope Innocent XI
Pope Innocent XI (16 May 1611 – 12 August 1689), born Benedetto Odescalchi, was Pope from 21 September 1676 to his death on 12 August 1689. He is known in Budapest as the "Saviour of Hungary".
Much of his reign was concerned with tension with Louis XIV of France. He lowered taxes in the Papal States during his pontificate and produced a surplus in the papal budget; with the finances in order, he also repudiated excessive nepotism within the Church. Innocent XI was frugal in governing the Papal States and in his personal life, from his dress to his adherence to Christian values. Once elected to the papacy, he applied himself to the moral and administrative reform of the Roman Curia. He abolished sinecures and pushed for greater simplicity in preaching as well as greater reverence in worship, requesting this of both the clergy and the faithful.
After a difficult cause for canonization, which began in 1691, caused considerable controversy over the years, and was halted on several occasions, he was beatified without opposition in 1956 by Pope Pius XII.
Benedetto Odescalchi was born in Como on 16 May 1611, the son of a Como nobleman, Livio Odescalchi, and Paola Castelli Giovanelli from Gandino. His siblings were Carlo, Lucrezia, Giulio Maria, Constantino, Nicola and Paolo. He also had several collateral descendants of note through his sister: her grandson Cardinal Baldassare Erba-Odescalchi, Cardinal Benedetto Erba Odescalchi, and Cardinal Carlo Odescalchi.
The Odescalchi, a family of minor nobility, were determined entrepreneurs. In 1619, Benedetto's brother founded a bank with his three uncles in Genoa which quickly grew into a successful money-lending business. After completing his studies in grammar and letters, the 15-year-old Benedetto moved to Genoa to take part in the family business as an apprentice. Lucrative economic transactions were established with clients in the major Italian and European cities, such as Nuremberg, Milan, Kraków, and Rome.
In 1626 Benedetto's father died, and he began schooling in the humane sciences taught by the Jesuits at his local college, before transferring to Genoa. In 1630 he narrowly survived an outbreak of plague, which killed his mother.
Some time between 1632 and 1636, Benedetto decided to move to Rome and then Naples in order to study civil law. This led to his securing the offices of protonotary apostolic, president of the apostolic chamber, commissary of the Marco di Roma, and governor of Macerata; on 6 March 1645, Pope Innocent X (1644–55) made him Cardinal-Deacon with the deaconry of "Santi Cosma e Damiano". He subsequently became legate to Ferrara. When he was sent to Ferrara in order to assist the people stricken with a severe famine, the Pope introduced him to the people of Ferrara as the "father of the poor".
In 1650, Odescalchi became bishop of Novara, in which capacity he spent all the revenues of his see to relieve the poor and sick in his diocese. He participated in the 1655 conclave. With the permission of the pope he resigned as bishop of Novara in favor of his brother Giulio in 1656 and went to Rome. While there he took a prominent part in the consultations of the various congregations of which he was a member. He participated in the 1669–70 conclave.
Odescalchi was a strong papal candidate after the death of Pope Clement IX (1667–69) in 1669, but the French government rejected him (using the now-abolished veto). After Pope Clement X (1670–76) died, Louis XIV of France (1643–1715) again intended to use his royal influence against Odescalchi's election. Instead, believing that the cardinals as well as the Roman people were of one mind in their desire to have Odescalchi as their Pope, Louis reluctantly instructed the French party cardinals to acquiesce in his candidacy.
On 21 September 1676, Odescalchi was chosen to be Clement X's successor and took the name of Innocent XI. He chose this name in honour of Pope Innocent X, who made him a cardinal in 1645. He was formally crowned as pontiff on 4 October 1676 by the protodeacon, Cardinal Francesco Maidalchini.
Immediately upon his accession, Innocent XI turned all his efforts towards reducing the expenses of the Curia. He passed strict ordinances against nepotism among the cardinals. He lived very parsimoniously and exhorted the cardinals to do the same. In this manner he not only squared the annual deficit which at his accession had reached the sum of 170,000 scudi, but within a few years the papal income was even in excess of the expenditures. He lost no time in declaring and practically manifesting his zeal as a reformer of manners and a corrector of administrative abuses. Beginning with the clergy, he sought to raise the laity also to a higher moral standard of living. He closed all of the theaters in Rome (considered to be centers of vice and immorality) and famously brought a temporary halt to the flourishing traditions of Roman opera. In 1679 he publicly condemned sixty-five propositions, taken chiefly from the writings of Escobar, Suarez and other casuists (mostly Jesuit casuists, who had been heavily attacked by Pascal in his "Provincial Letters") as "propositiones laxorum moralistarum" and forbade anyone to teach them under penalty of excommunication. He condemned in particular the most radical form of mental reservation ("stricte mentalis") which authorised deception without an outright lie.
Personally not unfriendly to Miguel de Molinos, Innocent XI nevertheless yielded to the enormous pressure brought to bear upon him to confirm in 1687 the judgement of the inquisitors by which sixty-eight quietist propositions of Molinos were condemned as blasphemous and heretical.
Innocent XI showed a degree of sensitivity in his dealings with the Jews within the Italian States. He compelled the city of Venice to release the Jewish prisoners taken by Francesco Morosini in 1685. He also discouraged compulsory baptisms which accordingly became less frequent under his pontificate, but he could not abolish the old practice altogether.
More controversially, on 30 October 1682 he issued an edict by which all the money-lending activities carried out by the Roman Jews were to cease. Such a move would incidentally have benefitted his own brothers financially, as they played a dominant role in European money-lending. However, ultimately convinced that such a measure would cause much misery by destroying livelihoods, Innocent twice delayed enforcement of the edict.
Innocent XI was an enthusiastic initiator of the Holy League which brought together the German Estates and King John III of Poland who in 1683 hastened to the relief of Vienna which was being besieged by the Turks. After the siege was raised, Innocent XI again spared no efforts to induce the Christian princes to lend a helping hand for the expulsion of the Turks from Hungary. He contributed millions of scudi to the Turkish war fund in Austria and Hungary and had the satisfaction of surviving the capture of Belgrade, 6 September 1688.
During England's Exclusion Crisis (1679–1681), when Parliament sought to exclude the Catholic Duke of York from gaining the throne, the radical Protestants of London's Green Ribbon Club regularly held mass processions culminating in the burning of "The Pope" in effigy. Evidently, the organizers of these events were unaware that the actual Pope in Rome was locked in a deep conflict with the King of France, and was therefore far from supporting the drive to place the Duke of York on the throne, a cause that served Louis XIV's political ambitions.
The pontificate of Innocent XI was marked by the struggle between the absolutism and hegemonic intentions of Louis XIV, and the primacy of the Catholic Church. As early as 1673, Louis had by his own power extended the right of the "régale" over the provinces of Languedoc, Guyenne, Provence, and Dauphiné, where it had previously not been exercised.
All the efforts of Innocent XI to induce Louis XIV to respect the rights and primacy of the Church proved useless. In 1682, the King convoked an assembly of the French clergy which adopted the four articles that became known as the Gallican Liberties. Innocent XI annulled the four articles on 11 April 1682, and refused his approbation to all future episcopal candidates who had taken part in the assembly.
To appease the Pope, Louis XIV began to act as a zealot of Catholicism. In 1685 he revoked the Edict of Nantes and inaugurated a persecution of French Huguenots. Innocent expressed displeasure at these drastic measures and continued to withhold his approbation from the episcopal candidates.
Innocent XI irritated the King still more that same year by abolishing the much abused right of asylum, by which foreign ambassadors in Rome had been able to harbor in embassies any criminal wanted by the papal court of justice. He notified the new French ambassador, Marquis de Lavardin, that he would not be recognised as ambassador in Rome unless he renounced this right, but Louis XIV would not give it up. At the head of an armed force of about 800 men Lavardin entered Rome in November 1687, and took forcible possession of his palace. Innocent XI treated him as excommunicated and placed under interdict the Church of St. Louis at Rome where he attended services on 24 December 1687.
In January 1688, Innocent XI also received the diplomatic mission which Narai, the King of Siam, had dispatched to France and the Vatican under Fr. Guy Tachard and Ok-khun Chamnan in order to establish relations.
The tension between the Pope and the King of France was increased by Innocent's procedure in filling the vacant archiepiscopal see of Cologne. The two candidates for the see were Cardinal William Egon of Fürstenberg, then Bishop of Strasbourg, and Joseph Clement, a brother of Max Emanuel, Elector of Bavaria. The former was a willing tool in the hands of Louis XIV and his appointment as Archbishop and Prince-elector of Cologne would have implied French preponderance in north-western Germany.
Joseph Clement was not only the candidate of Emperor Leopold I (1658–1705) but of all European rulers, with the exception of the King of France and his supporter, King James II of England (1685–88). At the election, which took place on 19 July 1688, neither of the candidates received the required number of votes. The decision, therefore, fell to Innocent XI, who designated Joseph Clement as Archbishop and Elector of Cologne.
Louis XIV retaliated by taking possession of the papal territory of Avignon, imprisoning the papal nuncio and appealing to a general council. Nor did he conceal his intention to separate the French Church entirely from Rome. The Pope remained firm. The subsequent fall of James II in England destroyed French preponderance in Europe and soon after Innocent XI's death the struggle between Louis XIV and the papacy was settled in favour of the Church.
Innocent XI dispatched Ferdinando d'Adda as nuncio to the Kingdom of England, the first representative of the Papacy to go to England for over a century. Even so, the Pope did not approve the imprudent manner in which James II attempted to restore Catholicism in England. He also repeatedly expressed his displeasure at the support which James II gave to the autocratic King Louis XIV in his measures against the Church. It is not surprising, therefore, that Innocent XI had less sympathy for James than for William of Orange and that he did not afford James help in his hour of trial. Innocent refused to nominate James II's choice as a Cardinal, Sir Edward Petre, 3rd Baronet.
In 2007, fiction writers Rita Monaldi and Francesco Sorti drew popular attention to the claim repeatedly made by historians over the intervening centuries that Innocent XI had secretly funded the resistance of the Protestant hero William of Orange to the French King, and even financed his overthrow of James II of England. This was done using the established Odescalchi family business in money-lending.
Innocent XI issued the papal bull "Sanctissimus Dominus" in 1679 to condemn 65 propositions that favored a liberal approach to doctrine which included two that related to abortion. He first condemned proposition 34 and countered that it was unlawful to procure abortion. He also condemned proposition 35, which stated: "It seems probable that the fetus (as long as it is in the uterus) lacks a rational soul and begins first to have one when it is born; and consequently it must be said that no abortion is a homicide."
Innocent XI was no less intent on preserving the purity of faith and morals among all people. He insisted on thorough education and an exemplary lifestyle for all people and he passed strict rules in relation to the modesty of dress among Roman women. Furthermore, he put an end to the ever-increasing passion for gambling by suppressing the gambling houses at Rome. By a decree of 12 February 1679 he encouraged frequent and even daily reception of Holy Communion. On 4 March 1679, he condemned the proposition that "the precept of keeping Holy Days is not obligatory under pain of mortal sin, aside from scandal, if contempt is absent". The document stated that the Church taught it was a mortal sin to intentionally skip Mass attendance on Sunday or a Holy Day without a legitimate excuse. It further stated that the faithful had to attend the Mass on Sunday itself or on the Saturday evening. In 1688, he reiterated a decree of Pope Sixtus V that banned women from singing on stage in all public theatres or opera houses.
He created 43 new cardinals in two consistories. He also canonized two saints: Bernard of Menthon in 1681 and Pedro Armengol on 8 April 1687. He beatified six individuals.
Innocent XI was hostile towards the book "Varia Opuscula Theologica" (Various Theological Brochures) published by the Spanish Jesuit Francisco Suárez. He ordered all copies to be burnt in 1679, but his orders were ignored; a surviving copy was discovered in 2015.
Innocent XI's health declined in 1689 and he was confined to his bed from June onwards. Due to ill health he cancelled a consistory of cardinals scheduled for 19 June for the examination of bishops, and he held no meetings on 21 June. The pope suddenly took ill with a fever on 25 June, and on 29 June he was unable to celebrate Mass for the Feast of Saints Peter and Paul, so he had Cardinal Chigi celebrate it in his place. His condition worsened on 2 July, leading his doctors to lance his left leg to release fluid; he then underwent an operation on his right leg on 31 July, and two more in the following two days.
The Pontiff received the Viaticum on 9 August, since his doctors believed he had little time left to live. On 11 August Cardinal Leandro Colloredo met with him to remind him that he had been set to raise ten men to the cardinalate, but the pope refused to do so despite the cardinal's insistence. On the morning of 12 August he lost the ability to speak and suffered from breathing difficulties.
Innocent XI died on 12 August 1689 at 22:00 (Rome time) after a long period of ill health due to kidney stones, from which he had suffered since 1682. Following his death, he was buried in St Peter's Basilica beneath his funeral monument near the Clementine Chapel, which his nephew, Livio Odescalchi, commissioned. The monument, designed and sculpted by Pierre-Étienne Monnot, features the pope seated upon the throne above a sarcophagus with a bas-relief showing the liberation of Vienna from the Turks by John III Sobieski, flanked by two allegorical figures representing Faith and Fortitude.
In April 2011, the remains of Innocent XI were moved to make way for those of the beatified John Paul II.
The process of Innocent XI's beatification was introduced in 1691 by Pope Innocent XII who proclaimed him a Servant of God and was continued by Clement XI and Clement XII, but French influence and the accusation of Jansenism caused it to be suspended in 1744 by Pope Benedict XIV. In the 20th century it was reintroduced and Pope Pius XII proclaimed him Venerable on 15 November 1955. Pius XII announced his beatification on 7 October 1956.
Following his beatification, his sarcophagus was placed under the Altar of St. Sebastian in the basilica's Chapel of St. Sebastian, where it remained until 8 April 2011 when it was moved to make way for the remains of Pope John Paul II to be relocated to the basilica from the grotto beneath St. Peter's in honor of his beatification and in order to make his resting place more accessible to the public. Innocent's body was transferred to the basilica's Altar of Transfiguration, which is located near the Clementine Chapel and the entombed remains of Pope St. Gregory the Great (590–604). The altar is also across from Innocent XI's monument, which was his original site of burial before his beatification.
The feast day assigned to Innocent XI is 12 August, the date of his death. In the Hungarian calendar, it is commemorated on August 13.
Reports suggest that, following the attacks on the United States on 11 September 2001, the Church decided to advance the long-suspended cause for Innocent XI's canonisation, casting him as the pope who had prevented the Turks from overrunning Christendom in 1683 and thus drawing parallels with aggressive Islamism. However, the popular claims made in the novel "Imprimatur" damaged Innocent XI's reputation, and the planned canonisation of Benedetto Odescalchi was suspended indefinitely.
The canonization was believed to have been planned for 2003, but the book's publication halted all plans to canonize Innocent XI.
Pantograph
A pantograph (Greek roots παντ- "all, every" and γραφ- "to write", from their original use for copying writing) is a mechanical linkage connected in a manner based on parallelograms so that the movement of one pen, in tracing an image, produces identical movements in a second pen. If a line drawing is traced by the first point, an identical, enlarged, or miniaturized copy will be drawn by a pen fixed to the other. Using the same principle, different kinds of pantographs are used for other forms of duplication in areas such as sculpture, minting, engraving, and milling.
Because of the shape of the original device, a pantograph also refers to a kind of structure that can compress or extend like an accordion, forming a characteristic rhomboidal pattern. This can be found in extension arms for wall-mounted mirrors, temporary fences, scissor lifts, and other scissor mechanisms such as the pantograph used on electric locomotives and trams.
The ancient Greek engineer Hero of Alexandria described pantographs in his work "Mechanics".
In 1603, Christoph Scheiner used a pantograph to copy and scale diagrams, and wrote about the invention over 27 years later, in "Pantographice" (Rome, 1631).
One arm of the pantograph contained a small pointer, while the other held a drawing implement, and by moving the pointer over a diagram, a copy of the diagram was drawn on another piece of paper. By changing the positions of the arms in the linkage between the pointer arm and drawing arm, the scale of the image produced can be changed.
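The geometry behind this is compact: with the fixed pivot, the tracing point and the pen held collinear by the parallelogram linkage, the pen's path is simply the traced path scaled about the pivot by a constant ratio set by the arm lengths. The following minimal sketch illustrates that relationship; it is not a model of any particular historical instrument, and all names and values are invented for the example:

```python
# A minimal sketch of the copying geometry described above, assuming an
# idealized pantograph: the fixed pivot O, tracing point T and pen D are
# held collinear by the parallelogram linkage, so the pen reproduces the
# traced path scaled about O by a constant ratio k set by the arm lengths.

def pen_position(pivot, tracer, ratio):
    """Return the pen (drawing) point for a given tracer position.

    ratio > 1 enlarges the copy; 0 < ratio < 1 reduces it.
    """
    ox, oy = pivot
    tx, ty = tracer
    return (ox + ratio * (tx - ox), oy + ratio * (ty - oy))

# Tracing a small square with a 2:1 enlargement about a pivot at the origin:
traced = [(1, 1), (2, 1), (2, 2), (1, 2)]
copy = [pen_position((0.0, 0.0), point, 2.0) for point in traced]
print(copy)  # [(2.0, 2.0), (4.0, 2.0), (4.0, 4.0), (2.0, 4.0)]
```

Changing the positions of the arms in the linkage changes the ratio, which is exactly how the scale of the copy is adjusted on a physical instrument.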
The original use of the pantograph was for copying and scaling line drawings. Modern versions are sold as toys.
Sculptors use a three-dimensional version of the pantograph, usually a large boom connected to a fixed point at one end, bearing two rotating pointing needles at arbitrary points along this boom. By adjusting the needles, different enlargement or reduction ratios can be achieved. This device, now largely overtaken by computer-guided router systems that scan a model and can produce it in a variety of materials and in any desired size, was invented by the engineer and steam pioneer James Watt (1736–1819) and perfected by Benjamin Cheverton (1796–1876) in 1836. Cheverton's machine was fitted with a rotating cutting bit to carve reduced versions of well-known sculptures. A three-dimensional pantograph can also be used to enlarge sculpture by interchanging the position of the model and the copy.
Another version is still very much in use to reduce the size of large relief designs for coins down to the required size of the coin.
One advantage of phonograph and gramophone discs over cylinders in the 1890s—before electronic amplification was available—was that large numbers of discs could be stamped quickly and cheaply. In 1890, the only ways of manufacturing copies of a master cylinder were to mold the cylinders (which was slow and, early on, produced very poor copies), to record cylinders by the "round", over and over again, or to copy the sound acoustically by placing the horns of two phonographs together or hooking the two together with a rubber tube (one phonograph recording while the other played the cylinder back). Edison, Bettini, Leon Douglass and others solved this problem (partly) by mechanically linking a cutting stylus and a playback stylus together and copying the "hill-and-dale" grooves of the cylinder mechanically. When molding improved somewhat, molded cylinders were used as pantograph masters. This was employed by Edison and Columbia in 1898, and was used until about January 1902 (Columbia brown waxes after this were molded). Some companies, like the United States Phonograph Co. of Newark, New Jersey, supplied cylinder masters for smaller companies so that they could duplicate them, sometimes pantographically. Pantographs could turn out about 30 records per day and produce up to about 150 records per master. In theory, a pantograph master could yield 200 or 300 duplicates if the master and the duplicate were run in reverse, duplicating the record backwards; this could extend the usability of a master by drawing on the unworn or less worn part of the recording. Pathé employed this system in mastering its vertically-cut records until 1923: a master cylinder, rotating at high speed, would be recorded on, as the resulting cylinder was considerably loud and of very high fidelity. The cylinder would then be placed on the mandrel of a duplicating pantograph and played with a stylus on the end of a lever, which transferred the sound to a wax disc master; this would be electroplated and used to stamp out copies. The system introduced some fidelity loss and rumble, but produced relatively high-quality sound. Edison Diamond Disc Records, by contrast, were made by recording "directly" onto the wax master disc.
Before the advent of control technologies such as numerical control (NC and CNC) and programmable logic control (PLC), duplicate parts being milled on a milling machine could not have their contours mapped out by moving the milling cutter in a "connect-the-dots" ("by-the-numbers") fashion. The only ways to control the movement of the cutting tool were to dial the positions by hand using dexterous skill (with natural limits on a human's accuracy and precision) or to trace a cam, template, or model in some way, and have the cutter mimic the movement of the tracing stylus. If the milling head was mounted on a pantograph, a duplicate part could be cut (and at various scales of magnification besides 1:1) simply by tracing a template. (The template itself was usually made by a tool and die maker using toolroom methods, including milling via dialing followed by hand sculpting with files and/or die grinder points.) This was essentially the same concept as reproducing documents with a pen-equipped pantograph, but applied to the machining of hard materials such as metal, wood, or plastic. Pantograph routing, which is conceptually identical to pantograph milling, also exists (as does CNC routing). The Blanchard lathe, a copying lathe developed by Thomas Blanchard, used the same essential concept.
The development and dissemination throughout industry of NC, CNC, PLC, and other control technologies provided a new way to control the movement of the milling cutter: via feeding information from a program to actuators (servos, selsyns, leadscrews, machine slides, spindles, and so on) that would move the cutter as the information directed. Today most commercial machining is done via such programmable, computerized methods. Home machinists are likely to work via manual control, but computerized control has reached the home-shop level as well (it's just not yet as pervasive as its commercial counterparts). Thus pantograph milling machines are largely a thing of the past. They are still in commercial use, but at a greatly reduced and ever-dwindling level. They are no longer built new by machine tool builders, but a small market for used machines still exists. As for the magnification-and-reduction feature of a pantograph (with the scale determined by the adjustable arm lengths), it is achieved in CNC via mathematic calculations that the computer applies to the program information practically instantaneously. Scaling functions (as well as mirroring functions) are built into languages such as G-code.
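As a rough illustration of the scaling and mirroring just described, the sketch below applies the kind of per-point arithmetic that such control software performs on programmed coordinates. It is a hedged, simplified stand-in, not the implementation of any particular controller or G-code dialect, and the function name and sample path are invented:

```python
# A simplified stand-in for the scaling/mirroring arithmetic a controller
# applies to a programmed toolpath: each coordinate is scaled about a
# chosen center, and a negative factor mirrors the path across that axis.

def transform_path(points, center=(0.0, 0.0), sx=1.0, sy=1.0):
    """Scale each (x, y) point about a center; a negative factor mirrors."""
    cx, cy = center
    return [(cx + sx * (x - cx), cy + sy * (y - cy)) for x, y in points]

program = [(0, 0), (10, 0), (10, 5), (0, 5)]          # a simple rectangular path
half_size = transform_path(program, sx=0.5, sy=0.5)   # 2:1 reduction
mirrored = transform_path(program, sx=-1.0)           # mirrored across the Y axis
print(half_size)
print(mirrored)
```

Where a pantograph fixes the ratio mechanically through its arm lengths, the programmed approach makes it a parameter that can be changed per part, per feature, or even per move.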
Perhaps the pantograph that is most familiar to the general public is the extension arm of an adjustable wall-mounted mirror.
In another application similar to drafting, the pantograph is incorporated into a pantograph engraving machine, with a revolving cutter instead of a pen and a tray at the pointer end to hold precut lettered plates (referred to as 'copy'). The pointer follows the 'copy', and the cutter, via the pantograph, reproduces it at the ratio to which the pantograph arms have been set; the typical range runs from 1:1 at maximum down to a 50:1 reduction at minimum. In this way machinists can neatly and accurately engrave numbers and letters onto a part. Pantographs are no longer commonly used in modern engraving, computerized laser and rotary engraving having taken their place.
The device which maintains electrical contact with the contact wire and transfers power from the wire to the traction unit, used in electric locomotives and trams, is also called a "pantograph".
Some types of trains on the New York City Subway use end pantograph gates (which, to avoid interference, compress under spring pressure around curves while the train is en route) to prevent passengers on station platforms from falling into or riding in the gaps between the cars.
Some commercial vehicles have windscreen wipers on pantographs to allow the blade to cover more of the windscreen on each wipe.
Old-style 'baby gates' used a 2-dimensional pantograph mechanism (in a similar style to pantograph gates on subway cars) as a means of keeping toddlers away from stairways. The openings in these gates are too large to meet modern baby gate safety standards.
Herman Hollerith's "Keyboard punch" used for the 1890 U.S. Census was a pantograph design and sometimes referred to as "The Pantograph Punch".
An early 19th-century device employing this mechanism is the polygraph, which produces a duplicate of a letter as the original is written.
In churches in many countries (generally before modern animal welfare), dog whippers used 'dog tongs' with a pantograph mechanism to control dogs at a distance.
Fools in German carnivals use "stretching shears" ("Streckschere"), also known as "Nürnberger Scissors", as "hat snatchers" to entertain the crowds.
The fencing and swordsmanship manual "Ms.Thott.290.2º" written in 1459 by Hans Talhoffer includes what appears to be an extending blade working on the same principle.
In 1886, Eduard Selling patented a prize-winning calculating machine based on the pantograph, although it was not commercially successful.
In many cartoons, the bird in a cuckoo clock is depicted as extending on a pantograph mechanism, although this is seldom the case in actual clocks.
Expanding fences or trellises use folding pantograph mechanisms, for ease of transport and storage.
Longarm quilting machine operators may trace a pantograph (a paper pattern) with a laser pointer to stitch a custom design onto the quilt. Digitized pantographs are followed by computerized machines.
Linn Boyd Benton invented a pantographic engraving machine for type design, which was capable not only of scaling a single font design pattern to a variety of sizes, but could also condense, extend, and slant the design (mathematically, these are cases of affine transformation, which is the fundamental geometric operation of most systems of digital typography today, including PostScript).
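To make that connection concrete, the following sketch applies the three affine operations mentioned above (scaling, condensing or extending, and slanting) to a set of hypothetical glyph outline points. It is an illustration only, not Benton's machine or any real font tool, and the matrix values and names are invented:

```python
# Illustrative affine operations on hypothetical glyph outline points:
# uniform scaling resizes the design, unequal x/y scaling condenses or
# extends it, and a shear slants it, much as Benton's machine did
# mechanically.

def affine(points, a, b, c, d):
    """Apply the 2x2 matrix [[a, b], [c, d]] to each (x, y) point."""
    return [(a * x + b * y, c * x + d * y) for x, y in points]

outline = [(0, 0), (100, 0), (100, 200), (0, 200)]   # stand-in glyph bounding box

scaled = affine(outline, 0.5, 0.0, 0.0, 0.5)         # half size
condensed = affine(outline, 0.8, 0.0, 0.0, 1.0)      # narrower, same height
slanted = affine(outline, 1.0, 0.2, 0.0, 1.0)        # oblique (sheared) form
print(scaled, condensed, slanted, sep="\n")
```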
Pantographs are also used as guide frames in heavy-duty applications including scissor lifts, material handling equipment, stage lifts and specialty hinges (such as for panel doors on boats and airplanes).
Richard Feynman used the analogy of a pantograph as a way of scaling down tools to the nanometer scale in his talk There's Plenty of Room at the Bottom.
Numerous trade-show displays use 3-dimensional pantograph mechanisms to support backdrops for exhibit booths. The framework expands in 2 directions (vertical and horizontal) from a bundle of connected rods into a self-supporting structure on which a fabric backdrop is hung.
Princess Mononoke
"Princess Mononoke" is set in the late Muromachi period (approximately 1336 to 1573) of Japan with fantasy elements. The story follows the young Emishi prince Ashitaka's involvement in a struggle between the gods of a forest and the humans who consume its resources. The term or is not a name, but a Japanese word for a spirit or monster: supernatural, shape-shifting beings.
The film was released in Japan on July 12, 1997, and in the United States on October 29, 1999. It was a critical and commercial success, becoming the highest-grossing film in Japan of 1997, and held Japan's box office record for domestic films until 2001's "Spirited Away", another Miyazaki film. It was dubbed into English and distributed in North America by Miramax, and despite a poor box office performance there, it sold well on DVD and video, greatly increasing Ghibli's popularity and influence outside Japan.
In Muromachi Japan, an Emishi village is attacked by a demon. The last Emishi prince, Ashitaka, kills it before it reaches the village, but its corruption curses his right arm. The curse gives him superhuman strength, but will eventually spread through his body and kill him. The villagers discover that the demon was a boar god, Nago, corrupted by an iron ball lodged in his body. The village's wise woman tells Ashitaka that he may find a cure in the western lands Nago came from, but he cannot return to his homeland. Before Ashitaka leaves, his fiancée Kaya gives him her crystal dagger so that he will not forget her.
Heading west, Ashitaka meets Jigo ("Jiko-bō" in the original Japanese version), an opportunist posing as a monk, who tells Ashitaka he may find help from the Great Forest Spirit, a deer-like animal god by day and a giant "nightwalker" by night. Nearby, men herd oxen to Irontown ("Tataraba" in Japanese), led by Lady Eboshi, and repel an attack by a wolf pack led by the wolf goddess Moro. Riding one of the wolves is San, a human girl. Ashitaka discovers two injured Irontown men and carries them through the forest, where he encounters many kodama and glimpses the Forest Spirit. In Irontown, Ashitaka learns that Eboshi built the town by clearcutting forests to claim ironsand and produce iron, leading to conflicts with the forest gods and Asano, a local daimyō. Irontown is a refuge for social outcasts, including lepers employed to manufacture firearms; it was one of these guns that had wounded Nago. Eboshi also explains that San was raised by the wolves as one of their own and resents humankind.
San infiltrates Irontown to kill Eboshi, but Ashitaka intervenes, revealing the curse to the townspeople and knocking Eboshi and San unconscious. As he leaves, Ashitaka is unintentionally shot by a villager, but the curse gives him the strength to carry San out of the village. San awakens and prepares to kill the weakened Ashitaka, but hesitates when he tells her that she is beautiful. She takes him to the forest, and decides to trust him after the Forest Spirit saves his life. A boar clan, led by the blind boar god Okkoto, plans to attack Irontown to save the forest. Eboshi prepares for battle and sets out to kill the Forest Spirit with Jigo, who is working for the government; she intends to give the god's head to the Emperor in return for protection from Lord Asano. According to legend, the Forest Spirit's head grants immortality.
Ashitaka recovers from his wound but remains cursed; he returns to Irontown to find it besieged by Asano's samurai, and heads out to warn Eboshi. The boar clan is annihilated in battle, and Okkoto is corrupted by his wounds. Jigo's men disguise themselves in boar skins and trick the rampaging Okkoto into leading them to the Forest Spirit. San tries to stop Okkoto, but is swept up in his demonic corruption. Moro intervenes and Ashitaka dives into the corruption, saving San. The Forest Spirit euthanizes Okkoto and Moro. As it transforms into the nightwalker, Eboshi decapitates it. It bleeds ooze which spreads over the land, killing anything it touches as the nightwalker searches for its head, which Jigo steals. The forest and kodama begin to die; Moro's head comes alive and bites off Eboshi's right arm, but she survives.
After the samurai flee and Irontown is evacuated, Ashitaka and San pursue Jigo and retrieve the head, returning it to the Forest Spirit. The Spirit dies as the sun rises, but its form washes over the land and heals it, and Ashitaka's curse is lifted. Ashitaka stays to help rebuild Irontown, but promises San he will visit her in the forest. Eboshi reunites with the townspeople and vows to build a better town. The forest begins to regrow, and a kodama emerges from the undergrowth.
The cast also includes Akira Nagoya, Kimihiro Reizei and Tetsu Watanabe in supporting roles; Makoto Sato as Nago, a wild boar turned into a demon who curses Ashitaka when he attacks the Emishi village; and Sumi Shimamoto as Toki, Kohroku's wife, a former sex worker, and the leader of Eboshi's women, voiced by Jada Pinkett Smith in the English version.
In the late 1970s, Miyazaki drew sketches of a film about a princess living in the woods with a beast. Miyazaki began writing the film's plotline and drew the initial storyboards for the film in August 1994. He had difficulties adapting his early ideas and visualisations, because elements had already been used in "My Neighbor Totoro" and because of societal changes since the creation of the original sketches and image boards. This writer's block prompted him to accept a request for the creation of the "On Your Mark" promotional music video for the Chage and Aska song of the same title. According to Toshio Suzuki, the diversion allowed Miyazaki to return for a fresh start on the creation of "Princess Mononoke". In April 1995, supervising animator Masashi Ando devised the character designs from Miyazaki's storyboard. In May 1995, Miyazaki drew the initial storyboards. That same month, Miyazaki and Ando went location scouting for three days, along with a group of art directors, background artists and digital animators, in the ancient forests of Yakushima, in Kyushu, an inspiration for the landscape of "Nausicaä of the Valley of the Wind", and the mountains of Shirakami-Sanchi in northern Honshu. Animation production commenced in July 1995. Miyazaki personally oversaw each of the 144,000 cels in the film, and is estimated to have redrawn parts of 80,000 of them. The final storyboards of the film's ending were finished only months before the Japanese premiere date.
Inspired by John Ford, an Irish-American director best known for his Westerns, Miyazaki created Irontown as a "tight-knit frontier town" and populated it with "characters from outcast groups and oppressed minorities who rarely, if ever, appear in Japanese films." He made the characters "yearning, ambitious and tough." Miyazaki did not want to create an accurate history of Medieval Japan, and wanted to "portray the very beginnings of the seemingly insoluble conflict between the natural world and modern industrial civilization." The landscapes appearing in the film were inspired by Yakushima. Despite being set during the Muromachi period, the actual time period of "Princess Mononoke" depicts a "symbolic neverwhen clash of three proto-Japanese races (the Jomon, Yamato and Emishi)."
"Princess Mononoke" was produced with an estimated budget of ¥2.35 billion (approximately US$23.5 million). It was mostly hand-drawn, but incorporates some use of computer animation in approximately ten percent of the film. The computer animated parts are designed to blend in and support the traditional cel animation, and are mainly used in images consisting of a mixture of computer generated graphics and traditional drawing. A further 10 minutes uses inked-and-painted, a technique used in all subsequent Studio Ghibli films. Most of the film is colored with traditional paint, based on the color schemes designed by Miyazaki and Michiyo Yasuda. However, producers agreed on the installation of computers to successfully complete the film prior to the Japanese premiere date. Telecom Animation Film Company helped animate the film. DR Movie helped with the painting process.
Two titles were originally considered for the film. One, ultimately chosen, has been translated into English as "Princess Mononoke". The other title can be translated into English as either "The Story of Ashitaka" or "The Legend of Ashitaka". In a Tokyo Broadcasting System program, televised on November 26, 2013, Toshio Suzuki mentioned that Hayao Miyazaki had preferred "The Legend of Ashitaka" as the title while Suzuki himself favoured "Princess Mononoke". Suzuki also mentioned that Miyazaki had created a new kanji to write his preferred title. The English dub contains minor additional voice overs to explain nuances of Japanese culture to western audiences.
A central theme of "Princess Mononoke" is the environment. The film centers on the adventure of Ashitaka as he journeys to the west to undo a fatal curse inflicted upon him by Nago, a boar turned into a demon by Eboshi. Michelle J. Smith and Elizabeth Parsons said that the film "makes heroes of outsiders in all identity politics categories and blurs the stereotypes that usually define such characters". In the case of the deer god's destruction of the forest and Tataraba, Smith and Parsons said that the "supernatural forces of destruction are unleashed by humans greedily consuming natural resources". They also characterized Eboshi as a business-woman who has a desire to make money at the expense of the forest, and also cite Eboshi's intention to destroy the forest to mine the mountain "embodies environmentalist evil". Deirdre M. Pike writes that Princess Mononoke is simultaneously part of nature and part of the problem. Mononoke represents the connection between the environment and humans, but also demonstrates that there is an unbalance in power between the two.
Two other themes found in the plot of "Princess Mononoke" are sexuality and disability. Speaking at the International Symposium on Leprosy / Hansen's Disease History in Tokyo, Miyazaki explained that he was inspired to portray people living with leprosy, "said to be an incurable disease caused by bad karma", after visiting the Tama Zenshoen Sanatorium near his home in Tokyo. Lady Eboshi is driven by her compassion for the disabled, and believes that blood from the Great Forest Spirit could allow her to "cure [her] poor lepers". Michelle Jarman, Assistant Professor of Disability Studies at the University of Wyoming, and Eunjung Kim, Assistant Professor of Gender and Women's Studies at the University of Wisconsin–Madison, said the disabled and gendered sexual bodies were partially used as a transition from the feudal era to a hegemony that "embraces modern social systems, such as industrialization, gendered division of labor, institutionalization of people with diseases, and militarization of men and women." They likened Lady Eboshi to a monarch. Kim and Jarman suggested that Eboshi's disregard of ancient laws and curses towards sex workers and lepers was enlightenment reasoning and her exploitation of disabled people furthered her modernist viewpoints. Kim and Jarman conclude that Lady Eboshi's supposed benevolence in incorporating lepers and sex workers into her society leverages the social stigma attached to marginalized groups, pointing out that the hierarchical structures within Irontown still support the stigmatization of lepers and sex workers.
An additional theme is the morally ambiguous conflict between humankind's growth and development and Nature's need for preservation. According to the "Chicago Sun-Times"'s Roger Ebert, "It is not a simplistic tale of good and evil, but the story of how humans, forest animals and nature gods all fight for their share of the new emerging order." Billy Crudup, who provided the English voice for Ashitaka, said "The movie was such an entirely different experience; it had a whole new sensibility I had never seen in animation. It also had something profound to say: that there has to be a give and take between man and nature. One of the things that really impressed me is that Miyazaki shows life in all its multi-faceted complexity, without the traditional perfect heroes and wicked villains. Even Lady Eboshi, who Ashitaka respects, is not so much evil as short-sighted." Minnie Driver, the English voice actress for Lady Eboshi, commented similarly: "It's one of the most remarkable things about the film: Miyazaki gives a complete argument for both sides of the battle between technological achievement and our spiritual roots in the forest. He shows that good and evil, violence and peace exist in us all. It's all about how you harmonize it all." Anime historian Susan Napier said there is no clear good-versus-evil conflict in "Princess Mononoke", unlike other films popular with children. Based on the multiple points of view the film adopts, San and Lady Eboshi can simultaneously be viewed as heroic or villainous. San defends the forest, and viewers empathize with her; but she also attacks innocent people, complicating how we evaluate her. Opposed to San, Eboshi tries to destroy the forest and could be considered a villain; but everything she does is out of a desire to protect her village and see it prosper. San and Lady Eboshi both survive until the film's end, defying the usual convention of good triumphing over evil with the antagonist defeated. Napier concluded that the resolution of the conflict is left ambiguous, implying that Lady Eboshi and San will be able to come to some sort of compromise. The ambiguity suggests that there are no true villains or heroes.
Dan Jolin of "Empire" said that a potential theme could be that of lost innocence. Miyazaki attributes this to his experience of making his previous film, "Porco Rosso", and the wars in the former Yugoslavia, which he cites as an example of mankind never learning, making it difficult for him to go back to making a film such as "Kiki's Delivery Service", where he has been quoted as saying "It felt like children were being born to this world without being blessed. How could we pretend to them that we're happy?"
Duality is central to Eboshi's characterization. Benjamin Thevenin, Assistant Professor of Theater and Media Arts at Brigham Young University, said Eboshi does not fully understand the harm she does to the spirits. Her focus is on creating a safe home for her people. She holds no malicious intent toward nature and its spirits until they begin attacking her people. Once nature attacks, she gathers her soldiers to protect the inhabitants of her town, a place where all are welcome. Irontown is a haven for sex workers and lepers. She brings them to Irontown and gives them jobs, hospitality, and a kindness that they have never experienced before. The same treatment goes for all Irontown's inhabitants, not just the sickly and the scorned. Lady Eboshi treats everyone equally, no matter the race, sex, or history of the individual, creating a caring community. While Eboshi hates San and the forest spirits, she keeps a garden in her town. Her care for the garden implies that her intention is not to ravage nature to no end, but rather to help her own people. Thevenin concluded that although Eboshi can be seen as the film's villain, she is also a hero to the citizens of Irontown and to humankind in general.
Another theme in this film is the tension between individualism and societal conformity. According to University of Bristol professors Christos Ellinas, Neil Allan and Anders Johansson, this struggle can be seen between San, a strong individualistic force, and Eboshi, the leader of a great society. San has fully committed to living with the wolves in the forest and to renouncing her association with the human race. Eboshi has vowed to sustain her society of Irontown by any means, including destroying the environment. The people of Irontown have a cohesive ideology and agree with Eboshi to protect Irontown at the cost of the environment's destruction. This conformity can be found within their society, because "even though there is an envisioned culture at which an organization abides to, achieving coherence at lower aggregation levels (e.g. individuals) is increasingly challenging due to its emergent nature". The different viewpoints of San and Lady Eboshi eventually lead to a physical altercation in which they both aim to kill one another. This dynamic between them represents the struggle to find a balance between the needs of individualism and those of conformity.
"Princess Mononoke" was released theatrically in Japan on July 12, 1997. The film was extremely successful in Japan and with both anime fans and arthouse moviegoers in English-speaking countries. Since the Walt Disney Company made a distribution deal with Tokuma Shoten for Studio Ghibli's films in 1996, it was the first film from Studio Ghibli along with "Kiki's Delivery Service" and "Castle in the Sky" to have been dubbed into English by Disney; in this case, subsidiary Miramax Films was assigned to release the movie in America. In response to demands from Miramax chairman Harvey Weinstein to edit the film, one of Miyazaki's producers sent Weinstein a katana with the message: "No cuts." Promotion manager, Steve Alpert, revealed that Weinstein had wanted to trim the film down from 135 minutes to 90 minutes "despite having promised not to do so." When Alpert informed him that Miyazaki would not agree to these demands, Weinstein flew into one of his infamous rages and threatened Alpert that he would "never work in this...industry again". Possibly as a result of this, the movie only had a limited theatrical release in the United States, reportedly to the disdain of Walt Disney Pictures.
On April 29, 2000, the English-dub version of "Princess Mononoke" was released theatrically in Japan along with the documentary "Mononoke hime in U.S.A.". The documentary was directed by Toshikazu Sato and featured Miyazaki visiting the Walt Disney Studios and various film festivals. The film had a limited theatrical re-release in the United States during July 2018.
"Princess Mononoke" was the highest-grossing Japanese film of 1997, earning ¥11.3 billion in distribution receipts. It became the highest-grossing film in Japan, beating the record set by "E.T. the Extra-Terrestrial" in 1982, but was surpassed several months later by "Titanic". The film earned a domestic total of .
It was the top-grossing anime film in the United States in January 2001, but the film did not fare as well financially in the country when released in October 1999. It grossed $2,298,191 in its first eight weeks. It showed more strength internationally, where it earned a total of $11 million outside Japan, bringing its worldwide total to $159,375,308 at the time. The film's limited US re-release in 2018 grossed $1,423,877 over five days, bringing its US total to $3,799,185 and its worldwide total to $160,799,185.
In Japan, the film was released on VHS by Buena Vista Home Entertainment on June 26, 1998. A LaserDisc edition was also released by Tokuma Japan Communications on the same day. The film was released on DVD by Buena Vista Home Entertainment on November 21, 2001 with bonus extras added, including the international versions of the film as well as the storyboards. By 2007, "Princess Mononoke" had sold 4.4 million DVD units in Japan.
In July 2000, Buena Vista Home Entertainment via Miramax Home Entertainment announced plans to release the film on VHS and DVD in North America on August 29. Initially, the DVD version of "Princess Mononoke" was not going to include the Japanese-language track, at the request of Buena Vista's Japan division. Because the film had not yet been released on DVD in Japan, there were concerns that "a foreign-released DVD containing the Japanese language track will allow for the importation of such a DVD to Japan, which could seriously hurt the local sales of a future release of the [film]". The fansite Nausicaa.net organized an email campaign for fans to include the Japanese language track, while DVD Talk began an online petition to retain the Japanese language track. The DVD release of "Princess Mononoke" was delayed as a result. Miramax Home Entertainment released the DVD in December 2000 with the original Japanese audio, the English dubbed audio and extras including a trailer and a documentary with interviews from the English dub voice actors. The film was released on Blu-ray disc in Japan on December 4, 2013.
Walt Disney Studios Home Entertainment released "Princess Mononoke" on Blu-ray Disc on November 18, 2014. In its first week, it sold 21,860 units; by November 23, 2014, it had grossed $502,332. It was later included in the Blu-ray Miyazaki Collection, released on November 17, 2015. GKIDS re-issued the film on Blu-ray and DVD on October 17, 2017.
As of March 2018, on the review aggregator website Rotten Tomatoes, "Princess Mononoke" had a 93% approval rating based on 108 reviews, with an average rating of 7.97/10. The website's consensus reads: "With its epic story and breathtaking visuals, "Princess Mononoke" is a landmark in the world of animation." On Metacritic, the film has an average score of 76 out of 100 based on 29 reviews, indicating "generally favorable reviews".
"The Daily Yomiuri"s Aaron Gerow called the film a "powerful compilation of [Hayao] Miyazaki's world, a cumulative statement of his moral and filmic concerns." Leonard Klady of "Variety" said that "Princess Mononoke" "is not only more sharply drawn, it has an extremely complex and adult script" and the film "has the soul of a romantic epic, and its lush tones, elegant score by Joe Hisaishi and full-blooded characterizations give it the sweep of cinema's most grand canvases". Roger Ebert of the "Chicago Sun-Times" called "Princess Mononoke" "a great achievement and a wonderful experience, and one of the best films of the year. […] You won’t find many Hollywood love stories (animated or otherwise) so philosophical." Ty Burr of "Entertainment Weekly" called the film "a windswept pinnacle of its art" and that it "has the effect of making the average Disney film look like just another toy story". However, Stephen Hunter of "The Washington Post" stated that the film "is as spectacular as it is dense and as dense as it is colorful and as colorful as it is meaningless and as meaningless as it is long. And it's very long." Kenneth Turan of the "Los Angeles Times" said that the film "brings a very different sensibility to animation, a medium [Miyazaki] views as completely suitable for straight dramatic narrative and serious themes." In his review, Dave Smith from "Gamers' Republic" called it "one of the greatest animated films ever created, and easily one of the best films of 1999."
Roger Ebert placed "Princess Mononoke" sixth on his list of the top ten movies of 1999. It ranked 488th on "Empire"'s list of the 500 greatest films. "Time Out" ranked the film 26th on its list of the 50 greatest animated films, and it also placed 26th on "Total Film"'s list of the 50 greatest animated films.
James Cameron cited "Princess Mononoke" as an influence on his 2009 film "Avatar". He acknowledged that it shares themes with "Princess Mononoke", including its clash between cultures and civilizations, and cited "Princess Mononoke" as an influence on the ecosystem of Pandora.
"Princess Mononoke" is the first animated feature film to win the Japan Academy Prize for Best Picture. For the 70th Academy Awards ceremony, "Princess Mononoke" was the Japanese submission to be nominated for the Academy Award for Best Foreign Language Film, but was not successfully nominated. Hayao Miyazaki was also nominated for an Annie Award for his work on the film.
The film score of "Princess Mononoke" was composed and performed by Joe Hisaishi, the soundtrack composer for nearly all of Miyazaki's productions, and Miyazaki wrote the lyrics of the two vocal tracks, "The Tatara Women Work Song" and its title song. The music was performed by Tokyo City Philharmonic Orchestra and conducted by Hiroshi Kumagai. The soundtrack was released in Japan by Tokuma Japan Communications on July 2, 1997, and the North American version was released by Milan Records on October 12, 1999.
The titular theme song was performed by counter-tenor Yoshikazu Mera. For the English adaptation, Sasha Lazard sang the song.
In the score, Hisaishi quotes a few well-known classical pieces, such as Dmitri Shostakovich's Fifth Symphony.
As with other Studio Ghibli films, additional albums featuring soundtrack themes in alternative versions have been released. The image album features early versions of the themes, recorded at the beginning of the film production process, and used as source of inspiration for the various artists involved. The symphonic suite features longer compositions, each encompassing several of the movie themes, performed by the Czech Philharmonic Orchestra conducted by Mario Klemens.
In 2012, it was announced that Studio Ghibli and British theatre company Whole Hog Theatre would be bringing "Princess Mononoke" to the stage. It is the first stage adaptation of a Studio Ghibli work. The contact between Whole Hog Theatre and Studio Ghibli was facilitated by Nick Park of Aardman Animations after he sent footage of Whole Hog performances to Studio Ghibli's Toshio Suzuki. The play features large puppets made out of recycled and reclaimed materials.
The first performances were scheduled for London's New Diorama Theatre and sold out in 72 hours, a year in advance. In March 2013, it was announced that the show would transfer to Japan after its first run of shows in London. A second series of performances followed in London after the return from Tokyo. The second run of London performances sold out in four and a half hours. The play received positive reviews and was one of Lyn Gardner's theatre picks in "The Guardian". On April 27, 2013, the play was presented at Nico Nico Douga's Cho Party and was streamed online in Japan.
Standard Chinese
Standard Chinese, also known as Modern Standard Mandarin, Standard Mandarin, Modern Standard Mandarin Chinese (MSMC), or simply Mandarin, is a standard variety of Chinese that is one of the official languages of the People's Republic of China. Its pronunciation is based on the Beijing dialect, its vocabulary on the Mandarin dialects, and its grammar on written vernacular Chinese. The similar Taiwanese Mandarin is the de facto official language of Taiwan. Standard Singaporean Mandarin is one of the four official languages of Singapore.
Like other varieties of Chinese, Standard Chinese is a tonal language with topic-prominent organization and subject–verb–object word order. It has more initial consonants but fewer vowels, final consonants and tones than southern varieties. Standard Chinese is an analytic language, though with many compound words.
Standard Chinese is a standardised form of the language called Putonghua in Mainland China. Guoyu (Standard Taiwanese Mandarin) is a similar linguistic standard in Taiwan. Aside from a number of differences in pronunciation and vocabulary, Putonghua is written using simplified Chinese characters (plus Hanyu Pinyin romanization for teaching), and Guoyu is written using traditional Chinese characters (plus Zhuyin for teaching). Many characters are identical between the two systems.
In Chinese, the standard variety is known by several names, including "Pǔtōnghuà" ("common speech"), "Guóyǔ" ("national language") and "Huáyǔ" ("language of the Chinese nation"), each discussed below.
Standard Chinese is also commonly referred to by generic names for "Chinese". In total, more than 20 names for the language have been recorded.
The term "Guoyu" had previously been used by non-Han rulers of China to refer to their languages, but in 1909 the Qing education ministry officially applied it to Mandarin, a lingua franca based on northern Chinese varieties, proclaiming it as the new "national language".
The name "Putonghua" also has a long, albeit unofficial, history. It was used as early as 1906 in writings by Zhu Wenxiong to differentiate a modern, standard Chinese from classical Chinese and other varieties of Chinese.
For some linguists of the early 20th century, the "Putonghua", or "common tongue/speech", was conceptually different from the "Guoyu", or "national language". The former was a national prestige variety, while the latter was the "legal" standard.
Based on common understandings of the time, the two were, in fact, different. "Guoyu" was understood as formal vernacular Chinese, which is close to classical Chinese. By contrast, "Putonghua" was called "the common speech of the modern man", which is the spoken language adopted as a national lingua franca by conventional usage.
The use of the term "Putonghua" by left-leaning intellectuals such as Qu Qiubai and Lu Xun influenced the People's Republic of China government to adopt that term to describe Mandarin in 1956. Prior to this, the government used both terms interchangeably.
In Taiwan, "Guoyu" (national language) continues to be the official term for Standard Chinese. The term "Guoyu" however, is less used in the PRC, because declaring a Beijing dialect-based standard to be the national language would be deemed unfair to speakers of other varieties and to the ethnic minorities. The term "Putonghua" (common speech), on the contrary, implies nothing more than the notion of a lingua franca.
During the government of a pro-Taiwan independence coalition (2000–2008), Taiwan officials promoted a different reading of "Guoyu" as all of the "national languages", meaning Hokkien, Hakka and Formosan as well as Standard Chinese.
"Huayu", or "language of the Chinese nation", originally simply meant "Chinese language", and was used in overseas communities to contrast Chinese with foreign languages. Over time, the desire to standardise the variety of Chinese spoken in these communities led to the adoption of the name "Huayu" to refer to Mandarin.
This name also avoids choosing a side between the alternative names of "Putonghua" and "Guoyu", which came to have political significance after their usages diverged along political lines between the PRC and the ROC. It also incorporates the notion that Mandarin is usually not the national or common language of the areas in which overseas Chinese live.
"Hanyu", or "language of the Han people", is another umbrella term used for Chinese. However, it has confusingly two different meanings:
This term, as well as "Hànzú" (the Han ethnic group), is a relatively modern concept; it came into being with the rise of Chinese nationalism in the 19th and 20th centuries. A related concept is "Hànzì" (Chinese characters).
The term "Mandarin" is a translation of "Guānhuà" (, literally "official's speech"), which referred to the lingua franca of the late Chinese empire. The Chinese term is obsolete as a name for the standard language, but is used by linguists to refer to the major group of Mandarin dialects spoken natively across most of northern and southwestern China.
In English, "Mandarin" may refer to the standard language, the dialect group as a whole, or to historic forms such as the late Imperial lingua franca. The name "Modern Standard Mandarin" is sometimes used by linguists who wish to distinguish the current state of the shared language from other northern and historic dialects.
Chinese has long had considerable dialectal variation, hence prestige dialects have always existed, and linguae francae have always been needed. Confucius, for example, used "yǎyán" ("elegant speech") rather than colloquial regional dialects; texts from the Han dynasty also referred to "tōngyǔ" ("common language"). Rime books, written from the Northern and Southern dynasties onwards, may also have reflected one or more systems of standard pronunciation during those times. However, all of these standard dialects were probably unknown outside the educated elite; even among the elite, pronunciations may have been very different, as the unifying factor of all Chinese dialects, Classical Chinese, was a written standard, not a spoken one.
The Ming dynasty (1368–1644) and the Qing dynasty (1644–1912) began to use the term "guānhuà" (官话/官話), or "official speech", to refer to the speech used at the courts. The term "Mandarin" is borrowed directly from Portuguese. The Portuguese word "mandarim", derived from the Sanskrit word "mantrin" "counselor or minister", was first used to refer to the Chinese bureaucratic officials.
The Portuguese then translated "guānhuà" as "the language of the mandarins" or "the mandarin language".
In the 17th century, the Empire had set up Orthoepy Academies in an attempt to make pronunciation conform to the standard. But these attempts had little success, since as late as the 19th century the emperor had difficulty understanding some of his own ministers in court, who did not always try to follow any standard pronunciation.
Before the 19th century, the standard was based on the Nanjing dialect, but later the Beijing dialect became increasingly influential, despite the mix of officials and commoners speaking various dialects in the capital, Beijing. By some accounts, as late as the early 20th century the position of Nanjing Mandarin was considered higher than that of Beijing, and the postal romanization standards set in 1906 included spellings with elements of Nanjing pronunciation. Nevertheless, by 1909 the dying Qing dynasty had established the Beijing dialect as "Guóyǔ", or the "national language".
As the island of Taiwan had fallen under Japanese rule per the 1895 Treaty of Shimonoseki, the term referred to the Japanese language until the handover to the ROC in 1945.
After the Republic of China was established in 1912, there was more success in promoting a common national language. A Commission on the Unification of Pronunciation was convened with delegates from the entire country. A "Dictionary of National Pronunciation" (国音字典/國音字典) was published in 1919, defining a hybrid pronunciation that did not match any existing speech. Meanwhile, despite the lack of a workable standardized pronunciation, colloquial literature in written vernacular Chinese continued to develop apace.
Gradually, the members of the National Language Commission came to settle upon the Beijing dialect, which became the major source of standard national pronunciation due to its prestigious status. In 1932, the commission published the "Vocabulary of National Pronunciation for Everyday Use" (国音常用字汇/國音常用字彙), with little fanfare or official announcement. This dictionary was similar to the previous published one except that it normalized the pronunciations for all characters into the pronunciation of the Beijing dialect. Elements from other dialects continue to exist in the standard language, but as exceptions rather than the rule.
After the Chinese Civil War, the People's Republic of China continued the effort, and in 1955 officially renamed "guóyǔ" as "pǔtōnghuà" (普通话/普通話), or "common speech". By contrast, the name "guóyǔ" continued to be used by the Republic of China, which, after its 1949 loss in the Chinese Civil War and retreat to Taiwan, was left with a territory consisting only of Taiwan and some smaller islands. Since then, the standards used in the PRC and Taiwan have diverged somewhat, especially in newer vocabulary terms, and a little in pronunciation.
In 1956, the standard language of the People's Republic of China was officially defined as: ""Pǔtōnghuà" is the standard form of Modern Chinese with the Beijing phonological system as its norm of pronunciation, and Northern dialects as its base dialect, and looking to exemplary modern works in "báihuà" 'vernacular literary language' for its grammatical norms." By this official definition, Standard Chinese takes its phonology from the Beijing dialect, its vocabulary base from the Northern (Mandarin) dialects, and its grammar from exemplary modern vernacular literature.
In the early 1950s, this standard language was understood by 41% of the population of the country, including 54% of speakers of Mandarin dialects, but only 11% of people in the rest of the country. By 1984, the proportion understanding the standard language nationally had risen to 90%, and the proportion among speakers of Mandarin dialects had risen to 91%. A survey conducted by China's Education Ministry in 2007 indicated that 53.06% of the population were able to communicate effectively in spoken Standard Chinese.
From an official point of view, Standard Chinese serves the purpose of a lingua franca—a way for speakers of the several mutually unintelligible varieties of Chinese, as well as the ethnic minorities in China, to communicate with each other. The very name "Pǔtōnghuà," or "common speech," reinforces this idea. In practice, however, due to Standard Chinese being a "public" lingua franca, other Chinese varieties and even non-Sinitic languages have shown signs of losing ground to the standard.
While the Chinese government has been actively promoting "Pǔtōnghuà" on TV, radio and public services like buses to ease communication barriers in the country, developing "Pǔtōnghuà" as the official common language of the country has been challenging due to the presence of various ethnic groups which fear the loss of their cultural identity and native dialect. In the summer of 2010, reports of increased use of "Pǔtōnghuà" in local TV broadcasting in Guangdong led thousands of Cantonese-speaking citizens to demonstrate in the streets.
In both mainland China and Taiwan, the use of Mandarin as the medium of instruction in the educational system and in the media has contributed to the spread of Mandarin. As a result, Mandarin is now spoken by most people in mainland China and Taiwan, though often with some regional or personal variation from the standard in terms of pronunciation or lexicon. However, the Ministry of Education in 2014 estimated that only about 70% of the population of China spoke Standard Mandarin to some degree, and only one tenth of those could speak it "fluently and articulately". There is also a 20% difference in penetration between eastern and western parts of China and a 50% difference between urban and rural areas. In addition, there are still 400 million Chinese who are only able to listen and understand Mandarin and not able to speak it. Therefore, in China's 13th Five Year Plan, the general goal is to raise the penetration rate to over 80% by 2020.
Mainland China and Taiwan use Standard Mandarin in most official contexts. The PRC in particular is keen to promote its use as a national lingua franca and has enacted a law (the "National Common Language and Writing Law") which states that the government must "promote" Standard Mandarin. There is no explicit official intent to have Standard Chinese replace the regional varieties, but local governments have enacted regulations (such as the "Guangdong National Language Regulations") which "implement" the national law by way of coercive measures to control the public use of regional spoken varieties and traditional characters in writing. In practice, some elderly or rural Chinese-language speakers do not speak Standard Chinese fluently, if at all, though most are able to understand it. But urban residents and the younger generations, who received their education with Standard Mandarin as the primary medium of education, are almost all fluent in a version of Standard Chinese, some to the extent of being unable to speak their local dialect.
In the predominantly Han areas in mainland China, while the use of Standard Chinese is encouraged as the common working language, the PRC has been somewhat sensitive to the status of minority languages and, outside the education context, has generally not discouraged their social use. Standard Chinese is commonly used for practical reasons, as, in many parts of southern China, the linguistic diversity is so large that neighboring city dwellers may have difficulties communicating with each other without a "lingua franca".
In Taiwan, the relationship between Standard Mandarin and other varieties, particularly Taiwanese Hokkien, has been more politically heated. During the martial law period under the Kuomintang (KMT) between 1949 and 1987, the KMT government revived the Mandarin Promotion Council and discouraged or, in some cases, forbade the use of Hokkien and other non-standard varieties. This produced a political backlash in the 1990s. Under the administration of Chen Shui-bian, other Taiwanese varieties were taught in schools. Chen often spoke Hokkien during speeches, and after the late 1990s former president Lee Teng-hui also spoke Hokkien openly. In an amendment to Article 14 of the Enforcement Rules of the Passport Act passed on August 9, 2019, the Ministry of Foreign Affairs (Taiwan) announced that Taiwanese can use the romanized spellings of their names in Hoklo, Hakka and Aboriginal languages for their passports. Previously, only Mandarin Chinese names could be romanized.
In Hong Kong and Macau, which are now special administrative regions of the People's Republic of China, Cantonese is the primary language spoken by the majority of the population and used by government and in their respective legislatures. After Hong Kong's handover from the United Kingdom and Macau's handover from Portugal, their governments use Putonghua to communicate with the Central People's Government of the PRC. There have been widespread efforts to promote usage of Putonghua in Hong Kong since the handover, with specific efforts to train police and teachers.
In Singapore, the government has heavily promoted a "Speak Mandarin Campaign" since the late 1970s, with the use of other Chinese varieties in broadcast media being prohibited and their use in any context officially discouraged until recently. This has led to some resentment amongst the older generations, as Singapore's migrant Chinese community is made up almost entirely of people of south Chinese descent. Lee Kuan Yew, the initiator of the campaign, admitted that to most Chinese Singaporeans, Mandarin was a "stepmother tongue" rather than a true mother language. Nevertheless, he saw the need for a unified language among the Chinese community not biased in favor of any existing group.
Mandarin is now spreading overseas beyond East Asia and Southeast Asia as well. In New York City, the use of Cantonese that dominated the Manhattan Chinatown for decades is being rapidly swept aside by Mandarin, the lingua franca of most of the latest Chinese immigrants.
In both the PRC and Taiwan, Standard Chinese is taught by immersion starting in elementary school. After the second grade, the entire educational system is in Standard Chinese, except for local language classes that have been taught for a few hours each week in Taiwan starting in the mid-1990s.
In December 2004, the first survey of language use in the People's Republic of China revealed that only 53% of its population, about 700 million people, could communicate in Standard Chinese. Passing was defined as a grade of 3-B or higher (a score above 60%) on the evaluation exam.
With the fast development of the country and the massive internal migration in China, the standard Putonghua Proficiency Test has quickly become popular. Many university graduates in mainland China take the exam before looking for a job. Employers often require varying levels of proficiency in Standard Chinese from applicants depending on the nature of the position. Applicants for some positions, e.g. telephone operators, may be required to obtain a certificate. People raised in Beijing are sometimes considered inherently 1-A (a score of at least 97%) and exempted from this requirement; for everyone else, a 1-A score is rare. According to the official definition of proficiency levels, people who attain 1-B (a score of at least 92%) are considered qualified to work as television correspondents or in broadcasting stations. Those with 2-A (a score of at least 87%) can work as Chinese Literature Course teachers in public schools. Other levels include 2-B (a score of at least 80%), 3-A (a score of at least 70%) and 3-B (a score of at least 60%). In China, a proficiency of level 3-B usually cannot be achieved without special training. Even though many Chinese do not speak with standard pronunciation, spoken Standard Chinese is widely understood to some degree.
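As a rough illustration of these bands, the sketch below maps a percentage score to its PSC level. The function name and the exact boundary handling are hypothetical; only the cutoff percentages are taken from the description above.

```python
# Minimal sketch: classify a Putonghua Proficiency Test (PSC) score into
# the level bands described above. The function name and boundary handling
# are hypothetical, not an official API.
def psc_level(score: float) -> str:
    """Return the PSC level band for a percentage score (0-100)."""
    bands = [
        (97, "1-A"),  # often presumed for speakers raised in Beijing
        (92, "1-B"),  # qualifies for television/broadcasting work
        (87, "2-A"),  # qualifies to teach Chinese Literature courses
        (80, "2-B"),
        (70, "3-A"),
        (60, "3-B"),  # the minimal passing grade cited above
    ]
    for cutoff, level in bands:
        if score >= cutoff:
            return level
    return "below 3-B"

print(psc_level(93.5))  # -> 1-B
```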
The China National Language And Character Working Committee was founded in 1985. One of its important responsibilities is to promote Standard Chinese proficiency for Chinese native speakers.
The usual unit of analysis is the syllable, consisting of an optional initial consonant, an optional medial glide, a main vowel and an optional coda, and further distinguished by a tone.
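To make this analysis concrete, the sketch below hand-segments a few pinyin syllables into the four positional slots plus tone. The segmentation table is illustrative only; a real parser would need the full inventory of initials and finals.

```python
# Hand-segmented pinyin syllables (tones written as the digits 1-4),
# illustrating the slots: initial + medial glide + main vowel + coda + tone.
SYLLABLES = {
    # syllable: (initial, medial, main vowel, coda, tone)
    "ma1":    ("m", "",  "a", "",   1),  # 妈 "mother"
    "xiang3": ("x", "i", "a", "ng", 3),  # 想 "to think"
    "guo2":   ("g", "u", "o", "",   2),  # 国 "country"
    "an4":    ("",  "",  "a", "n",  4),  # 暗 "dark"; no initial consonant
}

for syllable, (initial, medial, vowel, coda, tone) in SYLLABLES.items():
    print(f"{syllable}: initial={initial or '-'} medial={medial or '-'} "
          f"vowel={vowel} coda={coda or '-'} tone={tone}")
```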
The palatal initials [tɕ], [tɕʰ] and [ɕ] pose a classic problem of phonemic analysis. Since they occur only before high front vowels, they are in complementary distribution with three other series, the dental sibilants, retroflexes and velars, which never occur in this position.
The final spelled "-i" in pinyin, which occurs only after dental sibilant and retroflex initials, is a syllabic approximant, prolonging the initial.
The rhotacized vowel [ɚ] (spelled "er") forms a complete syllable.
A reduced form of this syllable occurs as a sub-syllabic suffix, spelled "-r" in pinyin and often with a diminutive connotation. The suffix modifies the coda of the base syllable in a rhotacizing process called "erhua".
Each full syllable is pronounced with a phonemically distinctive pitch contour. There are four tonal categories, marked in pinyin with iconic diacritic symbols, as in the words "mā" (妈/媽 "mother"), "má" (麻 "hemp"), "mǎ" (马/馬 "horse") and "mà" (骂/罵 "curse"). The tonal categories also have secondary characteristics. For example, the third tone is long and murmured, whereas the fourth tone is relatively short. Statistically, vowels and tones are of similar importance in the language.
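The four diacritics can be attached mechanically with Unicode combining marks, as in this minimal sketch reproducing the "ma" examples above. It handles only a single-vowel syllable; the general pinyin placement rules are more involved.

```python
import unicodedata

# Combining marks for the four tones: macron, acute, caron, grave.
COMBINING = {1: "\u0304", 2: "\u0301", 3: "\u030C", 4: "\u0300"}

def mark_tone(syllable: str, tone: int, vowel: str = "a") -> str:
    """Attach a pinyin tone diacritic to the given vowel (single-vowel case)."""
    marked = syllable.replace(vowel, vowel + COMBINING[tone], 1)
    return unicodedata.normalize("NFC", marked)

glosses = {1: "mother", 2: "hemp", 3: "horse", 4: "curse"}
for tone, gloss in glosses.items():
    print(mark_tone("ma", tone), gloss)  # mā má mǎ mà
```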
There are also weak syllables, including grammatical particles such as the interrogative "ma" (吗/嗎) and certain syllables in polysyllabic words. These syllables are short, with their pitch determined by the preceding syllable.
It is common for Standard Chinese to be spoken with the speaker's regional accent, depending on factors such as age, level of education, and how often the speaker needs to use it in official or formal situations. This appears to be changing in large urban areas, as social changes, migrations, and urbanization take place.
Due to evolution and standardization, Mandarin, although based on the Beijing dialect, is no longer synonymous with it. This is partly because standardization drew on a broader vocabulary base and adopted a more archaic and "proper-sounding" pronunciation and vocabulary.
Distinctive features of the Beijing dialect are more extensive use of "erhua" in vocabulary items that are left unadorned in descriptions of the standard such as the "Xiandai Hanyu Cidian", as well as more neutral tones. An example of standard versus Beijing dialect would be the standard "mén" (door) and Beijing "ménr".
Standard Chinese as spoken in Taiwan differs mostly in the tones of some words and in some vocabulary. Minimal use of the neutral tone and "erhua", together with technical vocabulary, constitute the greatest divergences between the two forms.
The stereotypical "southern Chinese" accent does not distinguish between retroflex and alveolar consonants, pronouncing pinyin "zh" [tʂ], "ch" [tʂʰ], and "sh" [ʂ] in the same way as "z" [ts], "c" [tsʰ], and "s" [s] respectively. Southern-accented Standard Chinese may also interchange "l" and "n", final "n" and "ng", and vowels "i" and "ü" [y]. Attitudes towards southern accents, particularly the Cantonese accent, range from disdain to admiration.
Chinese is a strongly analytic language, having almost no inflectional morphemes, and relying on word order and particles to express relationships between the parts of a sentence.
Nouns are not marked for case and rarely marked for number.
Verbs are not marked for agreement or grammatical tense, but aspect is marked using post-verbal particles.
The basic word order is subject–verb–object (SVO), as in English.
Nouns are generally preceded by any modifiers (adjectives, possessives and relative clauses), and verbs also generally follow any modifiers (adverbs, auxiliary verbs and prepositional phrases).
The predicate can be an intransitive verb, a transitive verb followed by a direct object, a copula (linking verb) "shì" (是) followed by a noun phrase, etc.
In predicative use, Chinese adjectives function as stative verbs, forming complete predicates in their own right without a copula. For example, "tā hěn gāo" (他很高), literally "he very tall", means "he is tall".
Another example is the common greeting "nǐ hăo" (你好), literally "you good".
Chinese additionally differs from English in that it forms another kind of sentence by stating a topic and following it by a comment. To do this in English, speakers generally flag the topic of a sentence by prefacing it with "as for". For example, "zhè běn shū wǒ kàn guo le" (这本书我看过了), literally "this book I have read", corresponds to "as for this book, I have read it".
The time when something happens can be given by an explicit term such as "yesterday," by relative terms such as "formerly," etc.
As in many East Asian languages, classifiers or measure words are required when using numerals, demonstratives and similar quantifiers.
There are many different classifiers in the language, and each noun generally has a particular classifier associated with it.
The general classifier "ge" (个/個) is gradually replacing specific classifiers.
Many formal, polite and humble words that were in use in imperial China have not been used in daily conversation in modern-day Mandarin, such as "jiàn" (贱/賤 "my humble") and "guì" (贵/貴 "your honorable").
Although Chinese speakers make a clear distinction between Standard Chinese and the Beijing dialect, there are aspects of Beijing dialect that have made it into the official standard. Standard Chinese has a T–V distinction between the polite "nín" (您) and informal "nǐ" (你) that comes from the Beijing dialect, although its use is quite diminished in daily speech. It also distinguishes between "zánmen" ("we" including the listener) and "wǒmen" ("we" not including the listener). In practice, neither distinction is commonly used by most Chinese, at least outside the Beijing area.
Some phrases from the Beijing dialect have become accepted as Standard Chinese, while others have not yet been accepted.
Standard Chinese is written with characters corresponding to syllables of the language, most of which represent a morpheme.
In most cases, these characters come from those used in Classical Chinese to write cognate morphemes of late Old Chinese, though their pronunciation, and often meaning, has shifted dramatically over two millennia.
However, there are several words, many of them heavily used, which have no classical counterpart or whose etymology is obscure.
Two strategies have been used to write such words: an unrelated character with the same or similar pronunciation may be borrowed, or a new character may be coined, typically a phono-semantic compound.
The government of the PRC (as well as some other governments and institutions) has promulgated a set of simplified forms. Under this system, the forms of the words "zhèlǐ" ("here") and "nàlǐ" ("there") changed from 這裏/這裡 and 那裏/那裡 to 这里 and 那里.
Chinese characters were traditionally read from top to bottom, right to left, but in modern usage it is more common to read from left to right.
Privatization
Privatization (or privatisation in British English) can mean different things including moving something from the public sector into the private sector. It is also sometimes used as a synonym for deregulation when a heavily regulated private company or industry becomes less regulated. Government functions and services may also be privatised (which may also be known as "franchising" or "out-sourcing"); in this case, private entities are tasked with the implementation of government programs or performance of government services that had previously been the purview of state-run agencies. Some examples include revenue collection, law enforcement, water supply, and prison management.
Another definition is the purchase of all outstanding shares of a publicly traded company by private investors, or the sale of a state-owned enterprise or municipally owned corporation to private investors. In the case of a for-profit company, the shares are then no longer traded at a stock exchange, as the company becomes private through private equity; in the case of the partial or full sale of a state-owned enterprise or municipally owned corporation to private owners, shares may be traded in the public market for the first time, or for the first time since the enterprise's previous nationalization. A second type of privatization is the demutualization of a mutual organization, cooperative, or public-private partnership in order to form a joint-stock company.
"The Economist" magazine introduced the term "privatisation" (alternatively "privatisation" or "reprivatisation" after the German ) during the 1930s when it covered Nazi Germany's economic policy. It is not clear if the magazine coincidentally invented the word in English or if the term is a loanword from the same expression in German, where it has been in use since the 19th century.
The word privatization may mean different things depending on the context in which it is used. It can mean moving something from the public sphere into the private sphere, but it may also be used to describe something that was always private, but heavily regulated, which becomes less regulated through a process of deregulation. The term may also be used descriptively for something that has always been private, but could be public in other jurisdictions.
There are also private entities that may perform public functions. These entities could also be described as privatized. Privatization may mean the government sells state-owned businesses to private interests, but it may also be discussed in the context of the privatization of services or government functions, where private entities are tasked with the implementation of government programs or performance of government services. Gillian E. Metzger has written that: "Private entities [in the US] provide a vast array of social services for the government; administer core aspects of government programs; and perform tasks that appear quintessentially governmental, such as promulgating standards or regulating third-party activities." Metzger mentions an expansion of privatization that includes health and welfare programs, public education, and prisons.
The history of privatization dates from Ancient Greece, when governments contracted out almost everything to the private sector. In the Roman Republic private individuals and companies performed the majority of services including tax collection (tax farming), army supplies (military contractors), religious sacrifices and construction. However, the Roman Empire also created state-owned enterprises—for example, much of the grain was eventually produced on estates owned by the Emperor. David Parker and David S. Saal suggest that the cost of bureaucracy was one of the reasons for the fall of the Roman Empire.
Perhaps one of the first ideological movements towards privatization came during China's golden age of the Han Dynasty. Taoism came into prominence for the first time at a state level, and it advocated the laissez-faire principle of Wu wei (無為), literally meaning "do nothing". The rulers were counseled by the Taoist clergy that a strong ruler was virtually invisible.
During the Renaissance, most of Europe was still by and large following the feudal economic model. By contrast, the Ming dynasty in China began once more to practice privatization, especially with regards to their manufacturing industries. This was a reversal of the earlier Song dynasty policies, which had themselves overturned earlier policies in favor of more rigorous state control.
In Britain, the privatization of common lands is referred to as enclosure (in Scotland as the Lowland Clearances and the Highland Clearances). Significant privatizations of this nature occurred from 1760 to 1820, preceding the industrial revolution in that country.
The first mass privatization of state property occurred in Nazi Germany between 1933–1937: "It is a fact that the government of the National Socialist Party sold off public ownership in several state-owned firms in the middle of the 1930s. The firms belonged to a wide range of sectors: steel, mining, banking, local public utilities, shipyard, ship-lines, railways, etc. In addition to this, delivery of some public services produced by public administrations prior to the 1930s, especially social services and services related to work, was transferred to the private sector, mainly to several organizations within the Nazi Party."
Great Britain privatized its steel industry in the 1950s, and the West German government embarked on large-scale privatization, including sale of the majority stake in Volkswagen to small investors in public share offerings in 1961. However, it was in the 1980s under Margaret Thatcher in the United Kingdom and Ronald Reagan in the United States that privatization gained worldwide momentum. Notable privatization attempts in the UK included privatization of Britoil (1982), Amersham International PLC (1982), British Telecom (1984), Sealink ferries (1984), British Petroleum (gradually privatized between 1979 and 1987), British Aerospace (1985 to 1987), British Gas (1986), Rolls-Royce (1987), Rover Group (formerly British Leyland, 1988), British Steel Corporation (1988), and the regional water authorities (mostly in 1989). After 1979, council house tenants in the UK were given the right to buy their homes (at a heavily discounted rate). One million purchased their residences by 1986.
Such efforts culminated in 1993 when British Rail was privatized under Thatcher's successor, John Major. British Rail had been formed by the prior nationalization of private rail companies. The privatization was controversial, and its impact is still debated today, as a doubling of passenger numbers and increased investment were balanced by an increase in rail subsidies.
Privatization in Latin America flourished in the 1980s and 1990s as a result of a Western liberal economic policy. Companies providing public services such as water management, transportation, and telecommunication were rapidly sold off to the private sector. In the 1990s, privatization revenue from 18 Latin American countries totaled 6% of gross domestic product. Private investment in infrastructure from 1990 and 2001 reached $360.5 billion, $150 billion more than in the next emerging economy.
While economists generally give favorable evaluations of the impact of privatization in Latin America, opinion polls and public protests across the region suggest that a large segment of the public is dissatisfied with, or holds negative views of, privatization.
In the 1990s, governments in Eastern and Central Europe and Russia engaged in extensive privatization of state-owned enterprises, with assistance from the World Bank, the U.S. Agency for International Development, the German Treuhand, and other governmental and nongovernmental organizations.
The ongoing privatization of Japan Post involves the national postal service and one of the largest banks in the world. After years of debate, the privatization, spearheaded by Junichiro Koizumi, finally started in 2007 and is expected to last until 2017. Japan Post was one of the nation's largest employers, as one-third of Japanese state employees worked for it. It was also said to be the largest holder of personal savings in the world. Critics charged that Japan Post served as a channel of corruption and was inefficient. In September 2003, Koizumi's cabinet proposed splitting Japan Post into four separate companies: a bank, an insurance company, a postal service company, and a fourth company to handle the post offices and retail storefronts of the other three. After the Upper House rejected privatization, Koizumi scheduled nationwide elections for September 11, 2005. He declared the election to be a referendum on postal privatization. Koizumi subsequently won the election, gaining the necessary supermajority and a mandate for reform, and in October 2005 the bill was passed to privatize Japan Post in 2007.
Nippon Telegraph and Telephone's privatization in 1987 involved the largest share offering in financial history at the time. 15 of the world's 20 largest public share offerings have been privatizations of telecoms.
In 1988, the perestroika policy of Mikhail Gorbachev started allowing privatization of the centrally planned economy. Large privatization of the Soviet economy occurred over the next few years as the country dissolved. Other Eastern Bloc countries followed suit after the Revolutions of 1989 introduced non-communist governments.
The United Kingdom's largest public share offerings were privatizations of British Telecom and British Gas during the 1980s under the Conservative government of Margaret Thatcher, when many state-run firms were sold off to the private sector. The privatization received very mixed views from the public and the parliament. Even former Conservative prime minister Harold Macmillan was critical of the policy, likening it to "selling the family silver". There were around 3 million shareholders in Britain when Thatcher took office in 1979, but the subsequent sale of state-run firms saw the number of shareholders double by 1985. By the time of her resignation in 1990, there were more than 10 million shareholders in Britain.
The largest public shares offering in France involved France Télécom.
Egypt undertook widespread privatization under Hosni Mubarak. Following his overthrow in the 2011 revolution, most of the public began to call for re-nationalization, citing allegations of the privatized firms practicing crony capitalism under the old regime.
There are five main methods of privatization: share issue privatization, in which shares are sold on the stock market; asset sale privatization, in which a whole firm or part of it is sold to a strategic investor, usually by auction; voucher privatization, in which shares of ownership are distributed to all citizens, usually for free or at a very low price; privatization from below, i.e. the start-up of new private businesses in formerly socialist countries; and management or employee buyout.
The choice of sale method is influenced by the capital market and the political and firm-specific factors. Privatization through the stock market is more likely to be the method used when there is an established capital market capable of absorbing the shares. A market with high liquidity can facilitate the privatization. If the capital markets are insufficiently developed, however, it would be difficult to find enough buyers. The shares may have to be underpriced, and the sales may not raise as much capital as would be justified by the fair value of the company being privatized. Many governments, therefore, elect for listings in more sophisticated markets, for example, Euronext, and the London, New York and Hong Kong stock exchanges.
Governments in developing and transition countries more often resort to direct asset sales to a few investors, partly because those countries do not yet have sufficiently capitalized stock markets.
Voucher privatization occurred mainly in the transition economies in Central and Eastern Europe, such as Russia, Poland, the Czech Republic, and Slovakia. Additionally, privatization from below had made important contribution to economic growth in transition economies.
In one study synthesizing some of the literature on privatization in the Russian and Czech transition economies, the authors identified three methods of privatization: "privatization by sale", "mass privatization", and "mixed privatization". Their calculations showed that "mass privatization" was the most effective method.
However, in economies "characterized by shortages" and maintained by the state bureaucracy, wealth was accumulated and concentrated by "gray/black market" operators. Privatizing industries by sale to these individuals did not mean a transition to "effective private sector owners [of former] state assets". Rather than mainly participating in a market economy, these individuals could prefer elevating their personal status or prefer accumulating political power. Instead, outside foreign investment led to the efficient conduct of former state assets in the private sector and market economy.
Through privatization by direct asset sale or the stock market, bidders compete to offer higher prices, generating more revenue for the state. Voucher privatization, on the other hand, could represent a genuine transfer of assets to the general population, creating a sense of participation and inclusion. A market could be created if the government permits transfer of vouchers among voucher holders.
Some privatization transactions can be interpreted as a form of a secured loan and are criticized as a "particularly noxious form of governmental debt". In this interpretation, the upfront payment from the privatization sale corresponds to the principal amount of the loan, while the proceeds from the underlying asset correspond to secured interest payments – the transaction can be considered substantively the same as a secured loan, though it is structured as a sale. This interpretation is particularly argued to apply to recent municipal transactions in the United States, particularly for fixed term, such as the 2008 sale of the proceeds from Chicago parking meters for 75 years. It is argued that this is motivated by "politicians' desires to borrow money surreptitiously", due to legal restrictions on and political resistance to alternative sources of revenue, viz, raising taxes or issuing debt.
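A back-of-the-envelope comparison shows the logic. All figures in the sketch below are hypothetical and chosen only to illustrate the interpretation; they are not drawn from any actual transaction.

```python
# Hypothetical numbers only: compare an upfront concession payment with
# the present value of the revenue stream the government forgoes, to show
# why a fixed-term sale of proceeds can resemble a secured loan.
def present_value(annual_revenue: float, rate: float, years: int) -> float:
    """Present value of a constant annual revenue stream."""
    return sum(annual_revenue / (1 + rate) ** t for t in range(1, years + 1))

upfront_payment = 1.0e9  # hypothetical: $1.0 billion received today
annual_revenue = 80e6    # hypothetical: $80 million per year in fees
discount_rate = 0.06
term_years = 75

pv = present_value(annual_revenue, discount_rate, term_years)
print(f"PV of foregone revenue: ${pv / 1e9:.2f}bn "
      f"vs upfront ${upfront_payment / 1e9:.2f}bn")
# If the PV exceeds the upfront payment, the transaction is economically
# similar to borrowing at an above-market rate secured by the asset's revenue.
```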
Literature reviews find that in competitive industries with well-informed consumers, privatization consistently improves efficiency. The more competitive the industry, the greater the improvement in output, profitability, and efficiency. Such efficiency gains mean a one-off increase in GDP, but through improved incentives to innovate and reduce costs they also tend to raise the rate of economic growth. Although there are typically many costs associated with these efficiency gains, many economists argue that these can be dealt with by appropriate government support through redistribution and perhaps retraining. Yet some empirical literature suggests that privatization can also have very modest effects on efficiency and a quite regressive distributive impact. In the first attempt at a social welfare analysis of the British privatization program under the Conservative governments of Margaret Thatcher and John Major during the 1980s and 1990s, Massimo Florio points to the absence of any productivity shock resulting strictly from ownership change. Instead, the impact on the previously nationalized companies of the UK productivity leap under the Conservatives varied across industries. In some cases, it occurred prior to privatization, and in other cases, it occurred upon privatization or several years afterward.
A study by the European Commission found that the UK rail network (privatized between 1994 and 1997) was the most improved of all 27 EU nations from 1997 to 2012. The report examined 14 different factors; the UK came top in four of them, second and third in another two, and fourth in three, coming top overall.
Privatizations in Russia and Latin America were accompanied by large-scale corruption during the sale of the state-owned companies. Those with political connections unfairly gained large wealth, which has discredited privatization in these regions. While media have widely reported the grand corruption that accompanied those sales, studies have argued that in addition to increased operating efficiency, daily petty corruption is, or would be, larger without privatization, and that corruption is more prevalent in non-privatized sectors. Furthermore, there is evidence to suggest that extralegal and unofficial activities are more prevalent in countries that privatized less.
A 2009 study published in "The Lancet" medical journal initially claimed to have found that as many as a million working men died as a result of economic shocks associated with mass privatization in the former Soviet Union and in Eastern Europe during the 1990s, although a further study revealed that there were errors in their method and "correlations reported in the original article are simply not robust." Historian Walter Scheidel, a specialist in ancient history, posits that economic inequality and wealth concentration in the top percentile "had been made possible by the transfer of state assets to private owners."
In Latin America, there is a discrepancy between the economic efficiency of privatization and its political and social ramifications. On the one hand, economic indicators, including firm profitability, productivity, and growth, project positive microeconomic results. On the other hand, these results have largely been met with negative criticism and citizen coalitions. This criticism of neoliberalism highlights the ongoing conflict between varying visions of economic development. Karl Polanyi emphasizes the societal concerns of self-regulating markets through a concept known as the "double movement". In essence, whenever societies move towards increasingly unrestrained, free-market rule, a natural and inevitable societal correction emerges to undermine the contradictions of capitalism. This was the case in the 2000 Cochabamba protests.
Privatization in Latin America has consistently met with increasing push-back from the public. Some suggest that a less efficient but more politically mindful approach could prove more sustainable.
In India, a survey by the National Commission for Protection of Child Rights (NCPCR) —Utilization of Free Medical Services by Children Belonging to the Economically Weaker Section (EWS) in Private Hospitals in New Delhi, 2011-12: A Rapid Appraisal—indicates under-utilization of the free beds available for EWS category in private hospitals in Delhi, though they were allotted land at subsidized rates.
In Australia, a "People's Inquiry into Privatisation" (2016/17) found that the impact of privatisation on communities was negative. The report from the inquiry, "Taking Back Control" (https://d3n8a8pro7vhmx.cloudfront.net/cpsu/pages/1573/attachments/original/1508714447/Taking_Back_Control_FINAL.pdf?1508714447), made a range of recommendations to provide accountability and transparency in the process. The report highlighted privatisation in healthcare, aged care, child care, social services, government departments, electricity, prisons and vocational education, featuring the voices of workers, community members and academics.
The main arguments for and against privatization are presented below.
Studies show that private market actors can deliver many goods or services more efficiently than governments due to free market competition. Over time, this tends to lead to lower prices, improved quality, more choice, less corruption, less red tape, and/or quicker delivery. Many proponents do not argue that everything should be privatized; according to them, market failures and natural monopolies could be problematic. However, anarcho-capitalists prefer that every function of the state be privatized, including defense and dispute resolution.
Proponents of privatization advance arguments along these lines, centering on efficiency, fiscal discipline, reduced political interference in management, and improved access to capital.
Opponents of certain privatizations believe that certain public goods and services should remain primarily in the hands of government in order to ensure that everyone in society has access to them (such as law enforcement, basic health care, and basic education). There is a positive externality when the government provides society at large with public goods and services such as defense and disease control. Some national constitutions in effect define their governments' "core businesses" as being the provision of such things as justice, tranquility, defense, and general welfare. These governments' direct provision of security, stability, and safety is intended to be done for the common good (in the public interest) with a long-term (for posterity) perspective. As for natural monopolies, opponents of privatization claim that they are not subject to fair competition and are better administered by the state.
Although private companies may provide a similar good or service alongside the government, opponents of privatization are wary of completely transferring the provision of public goods, services and assets into private hands, citing concerns such as accountability, equity of access, and the profit motive overriding service quality.
In economic theory, privatization has been studied in the field of contract theory. When contracts are complete, institutions such as (private or public) property are difficult to explain, since every desired incentive structure can be achieved with sufficiently complex contractual arrangements, regardless of the institutional structure (all that matters is who the decision makers are and what information is available to them). In contrast, when contracts are incomplete, institutions matter. A leading application of the incomplete contract paradigm in the context of privatization is the model by Hart, Shleifer, and Vishny (1997). In their model, a manager can make investments to increase quality (but these may also increase costs) and investments to decrease costs (but these may also reduce quality). It turns out that whether private or public ownership is desirable depends on the particular situation. The Hart-Shleifer-Vishny model has been further developed in various directions, e.g. to allow for mixed public-private ownership and endogenous assignments of the investment tasks.
Passage grave
A passage grave or passage tomb consists of one or more burial chambers covered in earth or stone, with a narrow access passage made of large stones. These structures usually date from the Neolithic Age and are found largely in Western Europe. When covered in earth, a passage grave is a type of burial mound, a form found all over the world. When covered in stone, it is a type of cairn.
The building of passage graves was normally carried out with megaliths along with smaller stones. The earliest passage tombs seem to take the form of small dolmens, although not all dolmens are passage graves. The passage itself, in a number of notable instances, is aligned in such a way that the sun shines through the passage, into the chamber, at a significant point in the year, often at sunrise on the winter solstice or at sunset on the equinox. Many later passage tombs were constructed at the tops of hills or mountains, indicating that their builders intended them to be seen from a great distance.
The interior of passage graves varies in the number of burials, shape, and other aspects. Those with more than one chamber may have multiple sub-chambers leading off the main burial chamber. One common interior layout, the cruciform passage grave, is cross-shaped, although it predates the Christian era and thus has no Christian associations. Some passage tombs are covered with a cairn, especially those dating from later times. Passage tombs of the cairn type often have elaborate corbelled roofs rather than simple slabs. Megalithic art has been identified carved into the stones at some sites. Not all passage "graves" have been found to contain evidence that they were used for burial; one such example is Maeshowe.
The passage tomb tradition is believed to have originated in the French region of Brittany. It was introduced to other regions such as Ireland by colonists from Brittany.
In a 1961 survey of megalithic tombs in Ireland, Irish scholars Seán Ó Nualláin and Rúaidhrí de Valera describe four categories of megalithic tombs: court cairns, portal dolmens, wedge-shaped gallery graves, and passage tombs. This appears to be one of the first uses of the term. It is likely that the writers borrowed from the Spanish term "tumbas de corredor", "corridor tombs", which is used for tombs in Cantabria, Galicia and the Basque Country. Of the megalithic tombs in Ireland, only passage tombs appear to have widespread distribution throughout Europe.
Passage graves are distributed extensively in lands along the Atlantic seaboard of Europe. They are found in Ireland, Britain, Scandinavia, northern Germany and the Drenthe area of the Netherlands. They are also found in Iberia, some parts of the Mediterranean, and along the northern coast of Africa. In Ireland and Britain, passage tombs are often found in large clusters, giving rise to the term passage tomb cemeteries.
P-group
In mathematics, specifically group theory, given a prime number "p", a "p"-group is a group in which the order of every element is a power of "p". That is, for each element "g" of a "p"-group "G", there exists a nonnegative integer "n" such that the product of "p"^"n" copies of "g", and not fewer, is equal to the identity element. The orders of different elements may be different powers of "p".
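In symbols, the definition can be restated as follows (a standard formulation, with "e" the identity element):

```latex
\[
  G \text{ is a } p\text{-group}
  \iff
  \forall g \in G \;\; \exists\, n \in \mathbb{Z}_{\ge 0} : \; g^{p^{n}} = e .
\]
```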
Abelian "p"-groups are also called "p"-primary or simply primary.
A finite group is a "p"-group if and only if its order (the number of its elements) is a power of "p". Given a finite group "G", the Sylow theorems guarantee the existence of a subgroup of "G" of order "p"^"n" for every prime power "p"^"n" that divides the order of "G".
The remainder of this article deals with finite "p"-groups. For an example of an infinite abelian "p"-group, see Prüfer group, and for an example of an infinite simple "p"-group, see Tarski monster group.
Every "p"-group is periodic since by definition every element has finite order.
If "p" is prime and "G" is a group of order "p""k", then "G" has a normal subgroup of order "p""m" for every 1 ≤ "m" ≤ "k". This follows by induction, using Cauchy's theorem and the Correspondence Theorem for groups. A proof sketch is as follows: because the center "Z" of "G" is non-trivial (see below), according to Cauchy's theorem "Z" has a subgroup "H" of order "p". Being central in "G", "H" is necessarily normal in "G". We may now apply the inductive hypothesis to "G/H", and the result follows from the Correspondence Theorem.
One of the first standard results using the class equation is that the center of a non-trivial finite "p"-group cannot be the trivial subgroup.
This forms the basis for many inductive methods in "p"-groups.
For instance, the normalizer "N" of a proper subgroup "H" of a finite "p"-group "G" properly contains "H", because for any counterexample with "H" = "N", the center "Z" is contained in "N", and so also in "H", but then there is a smaller example "H"/"Z" whose normalizer in "G"/"Z" is "N"/"Z" = "H"/"Z", creating an infinite descent. As a corollary, every finite "p"-group is nilpotent.
In another direction, every normal subgroup of a finite "p"-group intersects the center non-trivially as may be proved by considering the elements of "N" which are fixed when "G" acts on "N" by conjugation. Since every central subgroup is normal, it follows that every minimal normal subgroup of a finite "p"-group is central and has order "p". Indeed, the socle of a finite "p"-group is the subgroup of the center consisting of the central elements of order "p".
If "G" is a "p"-group, then so is "G"/"Z", and so it too has a non-trivial center. The preimage in "G" of the center of "G"/"Z" is called the second center and these groups begin the upper central series. Generalizing the earlier comments about the socle, a finite "p"-group with order "pn" contains normal subgroups of order "pi" with 0 ≤ "i" ≤ "n", and any normal subgroup of order "pi" is contained in the "i"th center "Z""i". If a normal subgroup is not contained in "Z""i", then its intersection with "Z""i"+1 has size at least "p""i"+1.
The automorphism groups of "p"-groups are well studied. Just as every finite "p"-group has a non-trivial center, so that the inner automorphism group is a proper quotient of the group, every finite "p"-group has a non-trivial outer automorphism group. Every automorphism of "G" induces an automorphism on "G"/Φ("G"), where Φ("G") is the Frattini subgroup of "G". The quotient "G"/Φ("G") is an elementary abelian group and its automorphism group is a general linear group, and so is very well understood. The map from the automorphism group of "G" into this general linear group has been studied by Burnside, who showed that the kernel of this map is a "p"-group.
"p"-groups of the same order are not necessarily isomorphic; for example, the cyclic group "C"4 and the Klein four-group "V"4 are both 2-groups of order 4, but they are not isomorphic.
Nor need a "p"-group be abelian; the dihedral group Dih4 of order 8 is a non-abelian 2-group. However, every group of order "p"2 is abelian.
The dihedral groups are both very similar to and very dissimilar from the quaternion groups and the semidihedral groups. Together the dihedral, semidihedral, and quaternion groups form the 2-groups of maximal class, that is, those groups of order 2^("n"+1) and nilpotency class "n".
The iterated wreath products of cyclic groups of order "p" are very important examples of "p"-groups. Denote the cyclic group of order "p" as "W"(1), and the wreath product of "W"("n") with "W"(1) as "W"("n" + 1). Then "W"("n") is the Sylow "p"-subgroup of the symmetric group Sym("p"^"n"). Maximal "p"-subgroups of the general linear group GL("n", Q) are direct products of various "W"("n"). The group "W"("n") has order "p"^"k" where "k" = ("p"^"n" − 1)/("p" − 1). It has nilpotency class "p"^("n"−1), and its lower central series, upper central series, lower exponent-"p" central series, and upper exponent-"p" central series are equal. It is generated by its elements of order "p", but its exponent is "p"^"n". The second such group, "W"(2), is also a "p"-group of maximal class, since it has order "p"^("p"+1) and nilpotency class "p", but is not a regular "p"-group. Since groups of order "p"^"p" are always regular groups, it is also a minimal such example.
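A worked instance of the order formula, for "p" = 3 and "n" = 2:

```latex
\[
  k = \frac{p^{n} - 1}{p - 1} = \frac{3^{2} - 1}{3 - 1} = 4,
  \qquad
  |W(2)| = p^{k} = 3^{4} = 81,
\]
```

consistent with the order "p"^("p"+1) = 3^4 stated above for "W"(2).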
When "p" = 2 and "n" = 2, "W"("n") is the dihedral group of order 8, so in some sense "W"("n") provides an analogue for the dihedral group for all primes "p" when "n" = 2. However, for higher "n" the analogy becomes strained. There is a different family of examples that more closely mimics the dihedral groups of order 2"n", but that requires a bit more setup. Let ζ denote a primitive "p"th root of unity in the complex numbers, let Z[ζ] be the ring of cyclotomic integers generated by it, and let "P" be the prime ideal generated by 1−ζ. Let "G" be a cyclic group of order "p" generated by an element "z". Form the semidirect product "E"("p") of Z[ζ] and "G" where "z" acts as multiplication by ζ. The powers "P""n" are normal subgroups of "E"("p"), and the example groups are "E"("p","n") = "E"("p")/"P""n". "E"("p","n") has order "p""n"+1 and nilpotency class "n", so is a "p"-group of maximal class. When "p" = 2, "E"(2,"n") is the dihedral group of order 2"n". When "p" is odd, both "W"(2) and "E"("p","p") are irregular groups of maximal class and order "p""p"+1, but are not isomorphic.
The Sylow subgroups of general linear groups are another fundamental family of examples. Let "V" be a vector space of dimension "n" with basis { "e"_1, "e"_2, …, "e"_"n" } and define "V"_"i" to be the vector space generated by { "e"_"i", "e"_("i"+1), …, "e"_"n" } for 1 ≤ "i" ≤ "n", and define "V"_"i" = 0 when "i" > "n". For each 1 ≤ "m" ≤ "n", the set of invertible linear transformations of "V" which take each "V"_"i" to "V"_("i"+"m") form a subgroup of Aut("V") denoted "U"_"m". If "V" is a vector space over Z/"p"Z, then "U"_1 is a Sylow "p"-subgroup of Aut("V") = GL("n", "p"), and the terms of its lower central series are just the "U"_"m". In terms of matrices, "U"_"m" consists of the upper triangular matrices with 1s on the diagonal and 0s on the first "m" − 1 superdiagonals. The group "U"_1 has order "p"^("n"("n"−1)/2), nilpotency class "n", and exponent "p"^"k" where "k" is the least integer at least as large as the base "p" logarithm of "n".
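The order formula is easy to check computationally. The sketch below enumerates the unitriangular matrices over Z/"p"Z for small "n" and "p"; the function name and matrix representation are ad hoc choices, not from any particular library.

```python
from itertools import product

# Sketch: enumerate U_1, the n x n upper triangular matrices over Z/pZ
# with 1s on the diagonal, and check the order formula p^(n(n-1)/2).
def unitriangular(n: int, p: int):
    positions = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for entries in product(range(p), repeat=len(positions)):
        m = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
        for (i, j), value in zip(positions, entries):
            m[i][j] = value
        yield tuple(map(tuple, m))

n, p = 3, 3
group = list(unitriangular(n, p))
assert len(group) == p ** (n * (n - 1) // 2)  # 3^3 = 27: the Heisenberg group mod 3
print(len(group))
```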
The groups of order "p"^"n" for 0 ≤ "n" ≤ 4 were classified early in the history of group theory, and modern work has extended these classifications to groups whose order divides "p"^7, though the sheer number of families of such groups grows so quickly that further classifications along these lines are judged difficult for the human mind to comprehend. For example, Marshall Hall Jr. and James K. Senior classified groups of order 2^"n" for "n" ≤ 6 in 1964.
Rather than classify the groups by order, Philip Hall proposed using a notion of isoclinism of groups which gathered finite "p"-groups into families based on their large quotients and subgroups.
An entirely different method classifies finite "p"-groups by their coclass, that is, the difference between their composition length and their nilpotency class. The so-called coclass conjectures described the set of all finite "p"-groups of fixed coclass as perturbations of finitely many pro-p groups. The coclass conjectures were proven in the 1980s using techniques related to Lie algebras and powerful p-groups. The final proofs of the coclass theorems are due to A. Shalev and independently to C. R. Leedham-Green, both in 1994. They admit a classification of finite "p"-groups in directed coclass graphs consisting of only finitely many coclass trees whose (infinitely many) members are characterized by finitely many parametrized presentations.
Every group of order "p"^5 is metabelian.
The trivial group is the only group of order one, and the cyclic group "C"_"p" is the only group of order "p". There are exactly two groups of order "p"^2, both abelian, namely "C"_("p"^2) and "C"_"p" × "C"_"p". For example, the cyclic group "C"4 and the Klein four-group "V"4, which is "C"2 × "C"2, are both 2-groups of order 4.
There are three abelian groups of order "p"^3, namely "C"_("p"^3), "C"_("p"^2) × "C"_"p", and "C"_"p" × "C"_"p" × "C"_"p". There are also two non-abelian groups.
For "p" ≠ 2, one is a semi-direct product of "C""p"×"C""p" with "C""p", and the other is a semi-direct product of "C""p"2 with "C""p". The first one can be described in other terms as group UT(3,"p") of unitriangular matrices over finite field with "p" elements, also called the Heisenberg group mod "p".
For "p" = 2, both the semi-direct products mentioned above are isomorphic to the dihedral group Dih4 of order 8. The other non-abelian group of order 8 is the quaternion group "Q"8.
The number of isomorphism classes of groups of order "p"^"n" grows as "p"^((2/27)"n"^3 + "O"("n"^(8/3))), and these are dominated by the classes that are two-step nilpotent. Because of this rapid growth, there is a folklore conjecture asserting that almost all finite groups are 2-groups: the fraction of isomorphism classes of 2-groups among isomorphism classes of groups of order at most "n" is thought to tend to 1 as "n" tends to infinity. For instance, of the 49 910 529 484 different groups of order at most 2000, 49 487 365 422, or just over 99%, are 2-groups of order 1024.
Every finite group whose order is divisible by "p" contains a subgroup which is a non-trivial "p"-group, namely a cyclic group of order "p" generated by an element of order "p" obtained from Cauchy's theorem. In fact, it contains a "p"-group of maximal possible order: if |"G"| = "p"^"n""m" where "p" does not divide "m", then "G" has a subgroup "P" of order "p"^"n", called a Sylow "p"-subgroup. This subgroup need not be unique, but any subgroups of this order are conjugate, and any "p"-subgroup of "G" is contained in a Sylow "p"-subgroup. This and other properties are proved in the Sylow theorems.
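A small worked instance of this statement:

```latex
\[
  |G| = 48 = 2^{4} \cdot 3
  \quad (p = 2,\ m = 3)
  \;\Longrightarrow\;
  G \text{ has a Sylow } 2\text{-subgroup of order } 2^{4} = 16 .
\]
```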
"p"-groups are fundamental tools in understanding the structure of groups and in the classification of finite simple groups. "p"-groups arise both as subgroups and as quotient groups. As subgroups, for a given prime "p" one has the Sylow "p"-subgroups "P" (largest "p"-subgroup not unique but all conjugate) and the "p"-core formula_7 (the unique largest "normal" "p"-subgroup), and various others. As quotients, the largest "p"-group quotient is the quotient of "G" by the "p"-residual subgroup formula_8 These groups are related (for different primes), possess important properties such as the focal subgroup theorem, and allow one to determine many aspects of the structure of the group.
Much of the structure of a finite group is carried in the structure of its so-called local subgroups, the normalizers of non-identity "p"-subgroups.
The large elementary abelian subgroups of a finite group exert control over the group; this control was used in the proof of the Feit–Thompson theorem. Certain central extensions of elementary abelian groups, called extraspecial groups, help describe the structure of groups as acting on symplectic vector spaces.
Richard Brauer classified all groups whose Sylow 2-subgroups are the direct product of two cyclic groups of order 4, and John Walter, Daniel Gorenstein, Helmut Bender, Michio Suzuki, George Glauberman, and others classified those simple groups whose Sylow 2-subgroups were abelian, dihedral, semidihedral, or quaternion.
Pope Innocent XII
Pope Innocent XII (; 13 March 1615 – 27 September 1700), born Antonio Pignatelli, was Pope from 12 July 1691 to his death in 1700.
He took a hard stance against nepotism in the Church, continuing the policies of Pope Innocent XI, who had begun the battle against nepotism, though these efforts had stalled under Pope Alexander VIII. To that end, Innocent XII issued a papal bull strictly forbidding the practice and used it to ensure that no revenue or land could be bestowed on relatives.
Antonio Pignatelli was born on 13 March 1615 in Spinazzola (now in Apulia) to one of the most aristocratic families of the Kingdom of Naples, which had included several Viceroys and ministers of the crown. He was the fourth of five children of Francesco Pignatelli and Porzia Carafa. His siblings were Marzio, Ludovico, Fabrizio and Paola Maria.
He was educated at the Collegio Romano in Rome where he earned a doctorate in both canon and civil law.
At the age of 20 he became an official of the court of Pope Urban VIII. Pignatelli was the Referendary of the Apostolic Signatura and served as the Governor of Fano and Viterbo. Later he went to Malta where he served as an inquisitor from 1646 to 1649, and then governor of Perugia. Shortly after this, he received his priestly ordination.
Pignatelli was made Titular Archbishop of Larissa in 1652 and received episcopal consecration in Rome. He served as the Apostolic Nuncio to Poland from 1660 to 1668 and later in Austria from 1668 to 1671. He was transferred to Lecce in 1671. Pope Innocent XI appointed him as the Cardinal-Priest of San Pancrazio in 1681 and then moved him to the see of Faenza in 1682. He was moved to his final post before the papacy, as Archbishop of Naples in 1686.
Pope Alexander VIII died in 1691 and the College of Cardinals assembled to hold a conclave to select his successor. Factions loyal to the Kingdom of France, Kingdom of Spain and the broader Holy Roman Empire failed to agree on a consensus candidate.
After five months, Cardinal Pignatelli emerged as a compromise candidate between the cardinals of France and those of the Holy Roman Empire. Pignatelli took his new name in honour of Pope Innocent XI and was crowned on 15 July 1691 by the protodeacon, Cardinal Urbano Sacchetti. He took possession of the Basilica of Saint John Lateran on 13 April 1692.
Immediately after his election on 12 July 1691, Innocent XII declared his opposition to the nepotism which had afflicted the reigns of previous popes. The following year he issued the papal bull, "Romanum decet Pontificem", banning the curial office of the Cardinal-Nephew and prohibiting popes from bestowing estates, offices, or revenues on any relative. Further, only one relative (and only "if otherwise suitable") was to be raised to the cardinalate.
At the same time he sought to check the simony in the practices of the Apostolic Chamber and to that end introduced a simpler and more economical manner of life into his court. Innocent XII said that "the poor were his nephews" and compared his public beneficence to the nepotism of many predecessors.
Innocent XII also introduced various reforms into the States of the Church including the "Forum Innocentianum", designed to improve the administration of justice dispensed by the Church. In 1693 he compelled French bishops to retract the four propositions relating to the Gallican Liberties which had been formulated by the assembly of 1682.
In 1699, he decided in favour of Jacques-Bénigne Bossuet in that prelate's controversy with Fénelon about the latter's "Explication des Maximes des Saints sur la Vie Intérieure". Innocent XII's pontificate also differed greatly from those of his predecessors in its leanings towards France rather than the Habsburg monarchy, following France's failure to have its candidates elected in 1644 and 1655.
Innocent XII created 30 cardinals in four consistories, two of whom were reserved "in pectore".
He canonized Saint Zita of Lucca on 5 September 1696. Innocent XII beatified Augustin Kažotić on 17 July 1700 and approved the cultus of Angela of Foligno in 1693. He also beatified Osanna Andreasi on 24 November 1694, Mary de Cerevellon on 13 February 1692, Jane of Portugal on 31 December 1692, Umiliana de' Cerchi on 24 July 1694, Helen Enselmini on 29 October 1695 and Delphine in 1694.
Innocent died on 27 September 1700 and was succeeded by Pope Clement XI (1700–1721). His tomb at St. Peter's Basilica was sculpted by Filippo della Valle.
Innocent appears as one of the narrators in Robert Browning's long poem "The Ring and the Book" (1869), based on the true story of the pope's intervention in a historical murder trial in Rome during his papacy. Innocent is the most recent pope to have decorative facial hair.
Protein phosphatase
A protein phosphatase is a phosphatase enzyme that removes a phosphate group from the phosphorylated amino acid residue of its substrate protein. Protein phosphorylation is one of the most common forms of reversible protein posttranslational modification (PTM), with up to 30% of all proteins being phosphorylated at any given time. Protein kinases (PKs) are the effectors of phosphorylation and catalyse the transfer of a γ-phosphate from ATP to specific amino acids on proteins. Several hundred PKs exist in mammals and are classified into distinct super-families. Proteins are phosphorylated predominantly on Ser, Thr and Tyr residues, which account for 79.3%, 16.9% and 3.8% of the phosphoproteome, respectively, at least in mammals.

In contrast, protein phosphatases (PPs) are the primary effectors of dephosphorylation and can be grouped into three main classes based on sequence, structure and catalytic function. The largest class comprises the phosphoprotein phosphatase (PPP) family (PP1, PP2A, PP2B, PP4, PP5, PP6 and PP7) together with the protein phosphatase Mg2+- or Mn2+-dependent (PPM) family, composed primarily of PP2C. The protein Tyr phosphatase (PTP) super-family forms the second group, and the aspartate-based protein phosphatases the third.

The protein pseudophosphatases form part of the larger phosphatase family, and in most cases are thought to be catalytically inert, instead functioning as phosphate-binding proteins, integrators of signalling or subcellular traps. Examples are known of membrane-spanning protein phosphatases containing both active (phosphatase) and inactive (pseudophosphatase) domains linked in tandem, conceptually similar to the kinase and pseudokinase domain polypeptide structure of the JAK pseudokinases. A complete comparative analysis of human phosphatases and pseudophosphatases has been completed by Manning and colleagues, forming a companion piece to the ground-breaking analysis of the human kinome, the complete set of ~536 human protein kinases.
Phosphorylation involves the transfer of phosphate groups from ATP to the enzyme, the energy for which comes from hydrolysing ATP into ADP or AMP. However, dephosphorylation releases phosphates into solution as free ions, because attaching them back to ATP would require energy input.
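The two opposing reactions can be written schematically as follows (P_i denotes inorganic phosphate):

```latex
\begin{align*}
  \text{protein--OH} + \text{ATP}
    &\xrightarrow{\text{kinase}} \text{protein--OPO}_3^{2-} + \text{ADP} \\
  \text{protein--OPO}_3^{2-} + \text{H}_2\text{O}
    &\xrightarrow{\text{phosphatase}} \text{protein--OH} + \text{P}_i
\end{align*}
```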
Cysteine-dependent phosphatases (CDPs) catalyse the hydrolysis of a phosphoester bond via a phospho-cysteine intermediate.
The free cysteine nucleophile forms a bond with the phosphorus atom of the phosphate moiety, and the P-O bond linking the phosphate group to the tyrosine is protonated, either by a suitably positioned acidic amino acid residue (an aspartate in many of these enzymes) or a water molecule. The phospho-cysteine intermediate is then hydrolysed by another water molecule, thus regenerating the active site for another dephosphorylation reaction.
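Schematically, with Enz-S⁻ the active-site cysteine thiolate (the proton-transfer steps described above are omitted for brevity):

```latex
\begin{align*}
  \text{Enz--S}^{-} + \text{protein--OPO}_3^{2-}
    &\longrightarrow \text{Enz--S--PO}_3^{2-} + \text{protein--OH} \\
  \text{Enz--S--PO}_3^{2-} + \text{H}_2\text{O}
    &\longrightarrow \text{Enz--S}^{-} + \text{P}_i
\end{align*}
```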
Metallo-phosphatases (e.g. PP2C) co-ordinate two catalytically essential metal ions within their active site. There is currently some confusion about the identity of these metal ions, as successive attempts to identify them have yielded different answers. There is currently evidence that these metals could be magnesium, manganese, iron, zinc, or some combination thereof. It is thought that a hydroxyl ion bridging the two metal ions takes part in nucleophilic attack on the phosphorus atom.
Phosphatases can be subdivided based upon their substrate specificity.
Protein Ser/Thr phosphatases were originally classified using biochemical assays as either, type 1 (PP1) or type 2 (PP2), and were further subdivided based on metal-ion requirement (PP2A, no metal ion; PP2B, Ca2+ stimulated; PP2C, Mg2+ dependent) (Moorhead et al., 2007). The protein Ser/Thr phosphatases PP1, PP2A and PP2B of the PPP family, together with PP2C of the PPM family, account for the majority of Ser/Thr PP activity in vivo (Barford et al., 1998). In the brain, they are present in different subcellular compartments in neuronal and glial cells, and contribute to different neuronal functions.
The PPM family, which includes PP2C and pyruvate dehydrogenase phosphatase, comprises enzymes dependent on Mn2+/Mg2+ metal ions that are resistant to classic inhibitors and toxins of the PPP family. Unlike most PPPs, PP2C exists as a single subunit but, like PTPs, it displays a wide variety of structural domains that confer unique functions. In addition, PP2C does not seem to be evolutionarily related to the major family of Ser/Thr PPs and has no sequence homology to ancient PPP enzymes. The current assumption is that PPMs evolved separately from PPPs but converged during evolutionary development.
Class I PTPs constitute the largest family. They contain the well-known classical receptor and non-receptor PTPs, which are strictly tyrosine-specific, and the dual-specificity phosphatases (DSPs), which target Ser/Thr as well as Tyr and are the most diverse in terms of substrate specificity.
The third class of PTPs contains three cell cycle regulators, CDC25A, CDC25B and CDC25C, which dephosphorylate CDKs at their N-terminus, a reaction required to drive progression of the cell cycle. They are themselves regulated by phosphorylation and are degraded in response to DNA damage to prevent chromosomal abnormalities.
The haloacid dehalogenase (HAD) superfamily is a further PP group that uses Asp as a nucleophile and was recently shown to have dual specificity. These PPs can target both Ser and Tyr, but are thought to have greater specificity towards Tyr. A subfamily of HADs, the Eyes Absent family (Eya), are also transcription factors and can therefore regulate their own phosphorylation and that of transcriptional cofactors, and contribute to the control of gene transcription. The combination of these two functions in Eya reveals a greater complexity of transcriptional gene control than previously thought. A further member of this class is the RNA polymerase II C-terminal domain phosphatase. While this family remains poorly understood, it is known to play important roles in development and nuclear morphology.
Many phosphatases are promiscuous with respect to substrate type, or can evolve quickly to change substrate. An alternative structural classification notes that 20 distinct protein folds have phosphatase activity, and 10 of these contain protein phosphatases.
Phosphatases act in opposition to kinases/phosphorylases, which add phosphate groups to proteins. The addition of a phosphate group may activate or de-activate an enzyme (e.g., kinase signalling pathways) or enable a protein-protein interaction to occur (e.g., SH2 domains); therefore phosphatases are integral to many signal transduction pathways. Phosphate addition and removal do not necessarily correspond to enzyme activation or inhibition, and several enzymes have separate phosphorylation sites for activating or inhibiting functional regulation. CDK, for example, can be either activated or deactivated depending on the specific amino acid residue being phosphorylated. Phosphates are important in signal transduction because they regulate the proteins to which they are attached. To reverse the regulatory effect, the phosphate is removed. This occurs on its own by hydrolysis, or is mediated by protein phosphatases.
Protein phosphorylation plays a crucial role in biological functions and controls nearly every cellular process, including metabolism, gene transcription and translation, cell-cycle progression, cytoskeletal rearrangement, protein-protein interactions, protein stability, cell movement, and apoptosis. These processes depend on the highly regulated and opposing actions of PKs and PPs, through changes in the phosphorylation of key proteins. Histone phosphorylation, along with methylation, ubiquitination, sumoylation and acetylation, also regulates access to DNA through chromatin reorganisation.
One of the major switches for neuronal activity is the activation of PKs and PPs by elevated intracellular calcium. The degree of activation of the various isoforms of PKs and PPs is controlled by their individual sensitivities to calcium. Furthermore, a wide range of specific inhibitors and targeting partners such as scaffolding, anchoring, and adaptor proteins also contribute to the control of PKs and PPs and recruit them into signalling complexes in neuronal cells. Such signalling complexes typically act to bring PKs and PPs in close proximity with target substrates and signalling molecules as well as enhance their selectivity by restricting accessibility to these substrate proteins. Phosphorylation events, therefore, are controlled not only by the balanced activity of PKs and PPs but also by their restricted localisation. Regulatory subunits and domains serve to restrict specific proteins to particular subcellular compartments and to modulate protein specificity. These regulators are essential for maintaining the coordinated action of signalling cascades, which in neuronal cells include short-term (synaptic) and long-term (nuclear) signalling. These functions are, in part, controlled by allosteric modification by secondary messengers and reversible protein phosphorylation.
It is thought that around 30% of known PPs are present in all tissues, with the rest showing some level of tissue restriction. While protein phosphorylation is a cell-wide regulatory mechanism, recent quantitative proteomics studies have shown that phosphorylation preferentially targets nuclear proteins. Many PPs that regulate nuclear events are enriched or exclusively present in the nucleus. In neuronal cells, PPs are present in multiple cellular compartments and play a critical role at both pre- and post-synapses, in the cytoplasm and in the nucleus where they regulate gene expression.
Phosphoprotein phosphatase is activated by the hormone insulin, which indicates that there is a high concentration of glucose in the blood. The enzyme then acts to dephosphorylate other enzymes, such as phosphorylase kinase, glycogen phosphorylase, and glycogen synthase. This leads to phosphorylase kinase and glycogen phosphorylase becoming inactive, while glycogen synthase is activated. As a result, glycogen synthesis is increased and glycogenolysis is decreased, and the net effect is for energy to enter and be stored inside the cell.
In the adult brain, PPs are essential for synaptic functions and are involved in the negative regulation of higher-order brain functions such as learning and memory. Dysregulation of their activity has been linked to several disorders including cognitive ageing and neurodegeneration, as well as cancer, diabetes and obesity.
Human genes that encode proteins with phosphoprotein phosphatase activity include:
P5 (microarchitecture)
The first Pentium microprocessor was introduced by Intel on March 22, 1993. Its P5 microarchitecture, also sometimes referred to as i586, was the fifth generation for Intel, and the first superscalar IA-32 microarchitecture. As a direct extension of the 80486 architecture, it included dual integer pipelines, a faster floating-point unit, a wider data bus, separate code and data caches, and features to further reduce address calculation latency. In October 1996, the "Pentium with MMX Technology" (often simply referred to as "Pentium MMX") was introduced, complementing the same basic microarchitecture with the MMX instruction set, larger caches, and some other enhancements.
The P5 Pentium competitors included the Motorola 68060 and the PowerPC 601 as well as the SPARC, MIPS, and Alpha microprocessor families, most of which also used a superscalar in-order dual instruction pipeline configuration at some time.
Intel's Larrabee multicore architecture project uses a processor core derived from a P5 core (P54C), augmented by multithreading, 64-bit instructions, and a 16-wide vector processing unit. Intel's low-powered Bonnell microarchitecture employed in early Atom processor cores also uses an in-order dual pipeline similar to P5.
Intel discontinued the P5 Pentium processors (which had been downgraded to an entry-level product since the Pentium II debuted in 1997) in early 2000 in favor of the Celeron processor which also replaced the 80486 brand.
The P5 microarchitecture was designed by the same Santa Clara team which designed the 386 and 486. Design work started in 1989; the team decided to use a superscalar architecture, with on-chip cache, floating-point, and branch prediction. The preliminary design was first successfully simulated in 1990, followed by the laying-out of the design. By this time, the team had several dozen engineers. The design was taped out, or transferred to silicon, in April 1992, at which point beta-testing began. By mid-1992, the P5 team had 200 engineers. Intel at first planned to demonstrate the P5 in June 1992 at the trade show PC Expo, and to formally announce the processor in September 1992, but design problems forced the demo to be cancelled, and the official introduction of the chip was delayed until the spring of 1993.
John H. Crawford, chief architect of the original 386, co-managed the design of the P5, along with Donald Alpert, who managed the architectural team. Dror Avnon managed the design of the FPU. Vinod K. Dham was general manager of the P5 group.
The P5 microarchitecture brings several important advancements over the preceding i486 architecture.
The Pentium was designed to execute over 100 million instructions per second (MIPS), and the 75 MHz model was able to reach 126.5 MIPS in certain benchmarks. The Pentium architecture typically offered just under twice the performance of a 486 processor per clock cycle in common benchmarks. The fastest 80486 parts (with slightly improved microarchitecture and 100 MHz operation) were almost as powerful as the first-generation Pentiums, and the AMD Am5x86 was roughly equal to the Pentium 75 regarding pure ALU performance.
The early versions of 60–100 MHz P5 Pentiums had a problem in the floating-point unit that resulted in incorrect (but predictable) results from some division operations. This flaw, discovered in 1994 by professor Thomas Nicely at Lynchburg College, Virginia, became widely known as the Pentium FDIV bug and caused embarrassment for Intel, which created an exchange program to replace the faulty processors.
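The flaw could be demonstrated with a one-line division test; the operand pair below is the widely circulated worst-case example, and a nonzero residual flags an affected chip (a minimal sketch; the exact printed digits depend on compiler and FPU settings):

```c
#include <stdio.h>

/* Classic FDIV-bug demonstration: on a flawed P5, the double-precision
 * division below returns ~1.33373906... instead of the correct
 * ~1.33382044..., so the residual r comes out nonzero (reportedly 256
 * on affected chips) rather than 0. */
int main(void)
{
    double x = 4195835.0;
    double y = 3145727.0;
    double r = x - (x / y) * y;   /* 0 on a correct FPU for this operand pair */

    printf("x/y      = %.15f\n", x / y);
    printf("residual = %g (nonzero indicates the FDIV bug)\n", r);
    return 0;
}
```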
In 1997, another erratum was discovered that could allow a malicious program to crash a system without any special privileges, the "F00F bug". All P5 series processors were affected and no fixed steppings were ever released; however, contemporary operating systems were patched with workarounds to prevent crashes.
The Pentium was Intel's primary microprocessor for personal computers during the mid-1990s. The original design was reimplemented in newer processes and new features were added to maintain its competitiveness as well as to address specific markets such as portable computers. As a result, there were several variants of the P5 microarchitecture.
The first Pentium microprocessor core was code-named "P5". Its product code was 80501 (80500 for the earliest stepping, Q0399). There were two versions, specified to operate at 60 MHz and 66 MHz respectively, using Socket 4. This first implementation of the Pentium used a traditional 5-volt power supply (descended from the usual TTL logic compatibility requirements). It contained 3.1 million transistors and measured 16.7 mm by 17.6 mm for an area of 293.92 mm2. It was fabricated in a 0.8 μm BiCMOS process. The 5-volt design resulted in relatively high energy consumption for its operating frequency when compared to the directly following models.
The P5 was followed by the P54C (80502) in 1994, with versions specified to operate at 75, 90, or 100 MHz using a 3.3 volt power supply. Marking the switch to Socket 5, this was the first Pentium processor to operate at 3.3 volts, reducing energy consumption, but necessitating voltage regulation on mainboards. As with higher-clocked 486 processors, an internal clock multiplier was employed from here on to let the internal circuitry work at a higher frequency than the external address and data buses, as it is more complicated and cumbersome to increase the external frequency, due to physical constraints. It also allowed two-way multiprocessing and had an integrated local APIC as well as new power management features. It contained 3.3 million transistors and measured 163 mm2. It was fabricated in a BiCMOS process which has been described as both 0.5 μm and 0.6 μm due to differing definitions.
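As an illustration of the multiplier arithmetic, here is a minimal sketch; the bus/ratio pairs shown are the documented P54C combinations (e.g. the 90 MHz part ran its 60 MHz bus at a 1.5x ratio):

```c
#include <stdio.h>

/* Sketch of the P54C clock-multiplier scheme: the core runs at
 * bus_mhz * multiplier while the external address and data buses
 * stay at the slower, easier-to-route bus frequency. */
struct p54c_clock { double bus_mhz; double multiplier; };

int main(void)
{
    /* Documented P54C combinations: 50/60/66 MHz buses at a 1.5x ratio. */
    struct p54c_clock parts[] = { {50.0, 1.5}, {60.0, 1.5}, {66.6, 1.5} };

    for (int i = 0; i < 3; i++)
        printf("bus %.1f MHz x %.1f = core %.0f MHz\n",
               parts[i].bus_mhz, parts[i].multiplier,
               parts[i].bus_mhz * parts[i].multiplier);
    return 0;
}
```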
The P54C was followed by the P54CQS in early 1995, which operated at 120 MHz. It was fabricated in a 0.35 μm BiCMOS process and was the first commercial microprocessor to be fabricated in a 0.35 μm process. Its transistor count is identical to the P54C and, despite the newer process, it had an identical die area as well. The chip was connected to the package using wire bonding, which only allows connections along the edges of the chip. A smaller chip would have required a redesign of the package, as there is a limit on the length of the wires and the edges of the chip would be further away from the pads on the package. The solution was to keep the chip the same size, retain the existing pad-ring, and only reduce the size of the Pentium's logic circuitry to enable it to achieve higher clock frequencies.
The P54CQS was quickly followed by the P54CS, which operated at 133, 150, 166 and 200 MHz, and introduced Socket 7. It contained 3.3 million transistors, measured 90 mm2 and was fabricated in a 0.35 μm BiCMOS process with four levels of interconnect.
The P24T Pentium OverDrive for 486 systems was released in 1995; it was based on 3.3 V, 0.6 μm versions of the core using a 63 or 83 MHz clock. Since these used Socket 2/3, some modifications had to be made to compensate for the 32-bit data bus and slower on-board L2 cache of 486 motherboards. They were therefore equipped with a 32 KB L1 cache (double that of pre-P55C Pentium CPUs).
The P55C (or 80503) was developed by Intel's Research & Development Center in Haifa, Israel. It was sold as Pentium with MMX Technology (usually just called Pentium MMX); although it was based on the P5 core, it featured a new set of 57 "MMX" instructions intended to improve performance on multimedia tasks, such as encoding and decoding digital media data. The Pentium MMX line was introduced on October 22, 1996, and released in January 1997.
The new instructions worked on new data types: 64-bit packed vectors of either eight 8-bit integers, four 16-bit integers, two 32-bit integers, or one 64-bit integer. So, for example, the PADDUSB (Packed ADD Unsigned Saturated Byte) instruction adds two vectors, each containing eight 8-bit unsigned integers together, elementwise; each addition that would overflow "saturates", yielding 255, the maximal unsigned value that can be represented in a byte. These rather specialized instructions generally require special coding by the programmer for them to be used.
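A scalar sketch of the PADDUSB semantics follows, written in plain C for clarity; on MMX hardware the same operation is a single instruction, exposed to C programmers via the `_mm_adds_pu8` intrinsic:

```c
#include <stdint.h>
#include <stdio.h>

/* Scalar model of PADDUSB: add two vectors of eight unsigned bytes,
 * element-wise, saturating each result at 255 instead of wrapping. */
static void paddusb_model(const uint8_t a[8], const uint8_t b[8], uint8_t out[8])
{
    for (int i = 0; i < 8; i++) {
        unsigned sum = (unsigned)a[i] + b[i];
        out[i] = (sum > 255) ? 255 : (uint8_t)sum;  /* saturate, don't wrap */
    }
}

int main(void)
{
    uint8_t a[8] = { 200, 100, 255, 0, 50, 250, 128,   1 };
    uint8_t b[8] = { 100, 100,   1, 0, 50,  10, 128, 254 };
    uint8_t r[8];

    paddusb_model(a, b, r);
    for (int i = 0; i < 8; i++)
        printf("%3u + %3u -> %3u\n", a[i], b[i], r[i]);  /* 200+100 -> 255 */
    return 0;
}
```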
Other changes to the core include a 6-stage pipeline (vs. 5 on P5) with a return stack (first done on Cyrix 6x86) and better parallelism, an improved instruction decoder, 32 KB L1 cache with 4-way associativity (vs. 16 KB with 2-way on P5), 4 write buffers that could now be used by either pipeline (vs. one corresponding to each pipeline on P5) and an improved branch predictor taken from the Pentium Pro, with a 512-entry buffer (vs. 256 on P5).
It contained 4.5 million transistors and had an area of 140 mm2. It was fabricated in a 0.28 μm CMOS process with the same metal pitches as the previous 0.35 μm BiCMOS process, so Intel described it as "0.35 μm" because of its similar transistor density. The process has four levels of interconnect.
While the P55C remained compatible with Socket 7, the voltage requirements for powering the chip differ from the standard Socket 7 specifications. Most motherboards manufactured for Socket 7 prior to the establishment of the P55C standard are not compliant with the dual voltage rail required for proper operation of this CPU (2.9 volt core voltage, 3.3 volt I/O voltage). Intel addressed the issue with OverDrive upgrade kits that featured an interposer with its own voltage regulation.
Pentium MMX notebook CPUs used a "mobile module" that held the CPU. This module was a PCB with the CPU directly attached to it in a smaller form factor. The module snapped to the notebook motherboard, and typically a heat spreader was installed and made contact with the module. However, with the 0.25 μm "Tillamook" Mobile Pentium MMX (named after a city in Oregon), the module also held the 430TX chipset along with the system's 512 KB SRAM cache memory.
After the introduction of the Pentium, competitors such as Nexgen, AMD, Cyrix, and Texas Instruments announced Pentium-compatible processors in 1994. "CIO magazine" identified NexGen's Nx586 as the first Pentium-compatible CPU, while "PC Magazine" described the Cyrix 6x86 as the first. These were followed by the AMD K5, which was delayed due to design difficulties. AMD later bought NexGen in order to help design the AMD K6, and Cyrix was purchased by National Semiconductor. Later processors from AMD and Intel retain compatibility with the original Pentium.
These manuals provide an overview of the Pentium processor and its features:
Pauli exclusion principle
The Pauli exclusion principle is the quantum mechanical principle which states that two or more identical fermions (particles with half-integer spin) cannot occupy the same quantum state within a quantum system simultaneously. This principle was formulated by Austrian physicist Wolfgang Pauli in 1925 for electrons, and later extended to all fermions with his spin–statistics theorem of 1940.
In the case of electrons in atoms, it can be stated as follows: it is impossible for two electrons of a poly-electron atom to have the same values of the four quantum numbers: "n", the principal quantum number; "ℓ", the azimuthal quantum number; "mℓ", the magnetic quantum number; and "ms", the spin quantum number. For example, if two electrons reside in the same orbital, then their "n", "ℓ", and "mℓ" values are the same, therefore their "ms" must be different, and thus the electrons must have opposite half-integer spin projections of 1/2 and −1/2.
Particles with an integer spin, or bosons, are not subject to the Pauli exclusion principle: any number of identical bosons can occupy the same quantum state, as with, for instance, photons produced by a laser or atoms in a Bose–Einstein condensate.
A more rigorous statement is that concerning the exchange of two identical particles: the total (many-particle) wave function is antisymmetric for fermions, and symmetric for bosons. This means that if the space "and" spin coordinates of two identical particles are interchanged, then the total wave function changes its sign for fermions and does not change for bosons.
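In symbols, writing $x_1$ and $x_2$ for the combined space-and-spin coordinates of the two particles:

```latex
\psi(x_1, x_2) \;=\; -\,\psi(x_2, x_1) \quad \text{(fermions)}
\qquad
\psi(x_1, x_2) \;=\; +\,\psi(x_2, x_1) \quad \text{(bosons)}
```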
If two fermions were in the same state (for example the same orbital with the same spin in the same atom), interchanging them would change nothing and the total wave function would be unchanged. The only way the total wave function can both change sign as required for fermions and also remain unchanged is that this function must be zero everywhere, which means that the state cannot exist. This reasoning does not apply to bosons because the sign does not change.
The Pauli exclusion principle describes the behavior of all fermions (particles with "half-integer spin"), while bosons (particles with "integer spin") are subject to other principles. Fermions include elementary particles such as quarks, electrons and neutrinos. Additionally, baryons such as protons and neutrons (subatomic particles composed from three quarks) and some atoms (such as helium-3) are fermions, and are therefore described by the Pauli exclusion principle as well. Atoms can have different overall "spin", which determines whether they are fermions or bosons — for example helium-3 has spin 1/2 and is therefore a fermion, in contrast to helium-4 which has spin 0 and is a boson. As such, the Pauli exclusion principle underpins many properties of everyday matter, from its large-scale stability, to the chemical behavior of atoms.
"Half-integer spin" means that the intrinsic angular momentum value of fermions is formula_1 (reduced Planck's constant) times a half-integer (1/2, 3/2, 5/2, etc.). In the theory of quantum mechanics fermions are described by antisymmetric states. In contrast, particles with integer spin (called bosons) have symmetric wave functions; unlike fermions they may share the same quantum states. Bosons include the photon, the Cooper pairs which are responsible for superconductivity, and the W and Z bosons. (Fermions take their name from the Fermi–Dirac statistical distribution that they obey, and bosons from their Bose–Einstein distribution.)
In the early 20th century it became evident that atoms and molecules with even numbers of electrons are more chemically stable than those with odd numbers of electrons. In the 1916 article "The Atom and the Molecule" by Gilbert N. Lewis, for example, the third of his six postulates of chemical behavior states that the atom tends to hold an even number of electrons in any given shell, and especially to hold eight electrons which are normally arranged symmetrically at the eight corners of a cube (see: cubical atom). In 1919 chemist Irving Langmuir suggested that the periodic table could be explained if the electrons in an atom were connected or clustered in some manner. Groups of electrons were thought to occupy a set of electron shells around the nucleus. In 1922, Niels Bohr updated his model of the atom by assuming that certain numbers of electrons (for example 2, 8 and 18) corresponded to stable "closed shells".
Pauli looked for an explanation for these numbers, which were at first only empirical. At the same time he was trying to explain experimental results of the Zeeman effect in atomic spectroscopy and in ferromagnetism. He found an essential clue in a 1924 paper by Edmund C. Stoner, which pointed out that, for a given value of the principal quantum number ("n"), the number of energy levels of a single electron in the alkali metal spectra in an external magnetic field, where all degenerate energy levels are separated, is equal to the number of electrons in the closed shell of the noble gases for the same value of "n". This led Pauli to realize that the complicated numbers of electrons in closed shells can be reduced to the simple rule of "one" electron per state if the electron states are defined using four quantum numbers. For this purpose he introduced a new two-valued quantum number, identified by Samuel Goudsmit and George Uhlenbeck as electron spin.
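This counting rule can be made explicit. For a given $n$, the allowed values of $\ell$ and $m_\ell$ give $n^2$ orbital states, and Pauli's new two-valued quantum number doubles each of them, reproducing the empirical closed-shell numbers:

```latex
N(n) \;=\; 2\sum_{\ell=0}^{n-1} (2\ell+1) \;=\; 2n^2,
\qquad N(1) = 2,\quad N(2) = 8,\quad N(3) = 18
```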
The Pauli exclusion principle with a single-valued many-particle wavefunction is equivalent to requiring the wavefunction to be antisymmetric with respect to exchange. An antisymmetric two-particle state is represented as a sum of states in which one particle is in state $|x\rangle$ and the other in state $|y\rangle$, and is given by:

$$|\psi\rangle = \sum_{x,y} A(x,y)\,|x,y\rangle,$$

and antisymmetry under exchange means that $A(x,y) = -A(y,x)$. This implies $A(x,x) = 0$, which is Pauli exclusion. It is true in any basis since local changes of basis keep antisymmetric matrices antisymmetric.

Conversely, if the diagonal quantities $A(x,x)$ are zero "in every basis", then the wavefunction component

$$A(x,y) = \langle\psi|x,y\rangle = \langle\psi|\big(|x\rangle \otimes |y\rangle\big)$$

is necessarily antisymmetric. To prove it, consider the matrix element

$$\langle\psi|\big((|x\rangle + |y\rangle) \otimes (|x\rangle + |y\rangle)\big).$$

This is zero, because the two particles have zero probability to both be in the superposition state $|x\rangle + |y\rangle$. But this is equal to

$$\langle\psi|x,x\rangle + \langle\psi|x,y\rangle + \langle\psi|y,x\rangle + \langle\psi|y,y\rangle.$$

The first and last terms are diagonal elements and are zero, and the whole sum is equal to zero. So the wavefunction matrix elements obey:

$$\langle\psi|x,y\rangle + \langle\psi|y,x\rangle = 0,$$

or

$$\langle\psi|x,y\rangle = -\langle\psi|y,x\rangle.$$
According to the spin–statistics theorem, particles with integer spin occupy symmetric quantum states, and particles with half-integer spin occupy antisymmetric states; furthermore, only integer or half-integer values of spin are allowed by the principles of quantum mechanics.
In relativistic quantum field theory, the Pauli principle follows from applying a rotation operator in imaginary time to particles of half-integer spin.
In one dimension, bosons, as well as fermions, can obey the exclusion principle. A one-dimensional Bose gas with delta-function repulsive interactions of infinite strength is equivalent to a gas of free fermions. The reason for this is that, in one dimension, the exchange of particles requires that they pass through each other; for infinitely strong repulsion this cannot happen. This model is described by a quantum nonlinear Schrödinger equation. In momentum space, the exclusion principle is valid also for finite repulsion in a Bose gas with delta-function interactions, as well as for interacting spins and Hubbard model in one dimension, and for other models solvable by Bethe ansatz. The ground state in models solvable by Bethe ansatz is a Fermi sphere.
The Pauli exclusion principle helps explain a wide variety of physical phenomena. One particularly important consequence of the principle is the elaborate electron shell structure of atoms and the way atoms share electrons, explaining the variety of chemical elements and their chemical combinations. An electrically neutral atom contains bound electrons equal in number to the protons in the nucleus. Electrons, being fermions, cannot occupy the same quantum state as other electrons, so electrons have to "stack" within an atom, i.e. have different spins while at the same electron orbital as described below.
An example is the neutral helium atom, which has two bound electrons, both of which can occupy the lowest-energy ("1s") states by acquiring opposite spin; as spin is part of the quantum state of the electron, the two electrons are in different quantum states and do not violate the Pauli principle. However, the spin can take only two different values (eigenvalues). In a lithium atom, with three bound electrons, the third electron cannot reside in a "1s" state and must occupy one of the higher-energy "2s" states instead. Similarly, successively larger elements must have shells of successively higher energy. The chemical properties of an element largely depend on the number of electrons in the outermost shell; atoms with different numbers of occupied electron shells but the same number of electrons in the outermost shell have similar properties, which gives rise to the periodic table of the elements.
To test the Pauli exclusion principle for the He atom, Drake carried out very precise calculations for states of the He atom that violate it; they are called paronic states. Later, the paronic state $1s2s\;{}^1\!S_0$ calculated by Drake was looked for using an atomic beam spectrometer. The search was unsuccessful, with an upper limit of $5\times10^{-6}$.
In conductors and semiconductors, there are very large numbers of molecular orbitals which effectively form a continuous band structure of energy levels. In strong conductors (metals) electrons are so degenerate that they cannot even contribute much to the thermal capacity of a metal. Many mechanical, electrical, magnetic, optical and chemical properties of solids are the direct consequence of Pauli exclusion.
The stability of each electron state in an atom is described by the quantum theory of the atom, which shows that close approach of an electron to the nucleus necessarily increases the electron's kinetic energy, an application of the uncertainty principle of Heisenberg. However, stability of large systems with many electrons and many nucleons is a different question, and requires the Pauli exclusion principle.
It has been shown that the Pauli exclusion principle is responsible for the fact that ordinary bulk matter is stable and occupies volume. This suggestion was first made in 1931 by Paul Ehrenfest, who pointed out that the electrons of each atom cannot all fall into the lowest-energy orbital and must occupy successively larger shells. Atoms, therefore, occupy a volume and cannot be squeezed too closely together.
A more rigorous proof was provided in 1967 by Freeman Dyson and Andrew Lenard, who considered the balance of attractive (electron–nuclear) and repulsive (electron–electron and nuclear–nuclear) forces and showed that ordinary matter would collapse and occupy a much smaller volume without the Pauli principle.
The consequence of the Pauli principle here is that electrons of the same spin are kept apart by a repulsive exchange interaction, which is a short-range effect, acting simultaneously with the long-range electrostatic or Coulombic force. This effect is partly responsible for the everyday observation in the macroscopic world that two solid objects cannot be in the same place at the same time.
Freeman Dyson and Andrew Lenard did not consider the extreme magnetic or gravitational forces that occur in some astronomical objects. In 1995 Elliott Lieb and coworkers showed that the Pauli principle still leads to stability in intense magnetic fields such as in neutron stars, although at a much higher density than in ordinary matter. It is a consequence of general relativity that, in sufficiently intense gravitational fields, matter collapses to form a black hole.
Astronomy provides a spectacular demonstration of the effect of the Pauli principle, in the form of white dwarf and neutron stars. In both bodies, the atomic structure is disrupted by extreme pressure, but the stars are held in hydrostatic equilibrium by "degeneracy pressure", also known as Fermi pressure. This exotic form of matter is known as degenerate matter. The immense gravitational force of a star's mass is normally held in equilibrium by thermal pressure caused by heat produced in thermonuclear fusion in the star's core. In white dwarfs, which do not undergo nuclear fusion, an opposing force to gravity is provided by electron degeneracy pressure. In neutron stars, subject to even stronger gravitational forces, electrons have merged with protons to form neutrons. Neutrons are capable of producing an even higher degeneracy pressure, neutron degeneracy pressure, albeit over a shorter range. This can stabilize neutron stars from further collapse, but at a smaller size and higher density than a white dwarf. Neutron stars are the most "rigid" objects known; their Young's modulus (or more accurately, bulk modulus) is 20 orders of magnitude larger than that of diamond. However, even this enormous rigidity can be overcome by the gravitational field of a massive star or by the pressure of a supernova, leading to the formation of a black hole.
Primate (bishop)
Primate is a title or rank bestowed on some important archbishops in certain Christian churches. Depending on the particular tradition, it can denote either jurisdictional authority (title of authority) or (usually) ceremonial precedence (title of honour).
In the Western Church, a Primate is an Archbishop—or, rarely, a suffragan or exempt bishop—of a specific (mostly Metropolitan) episcopal see (called a "primatial see") who has precedence over the bishoprics of one or more ecclesiastical provinces of a particular historical, political or cultural area. Historically, Primates of particular sees were granted privileges including the authority to call and preside at national synods, jurisdiction to hear appeals from metropolitan tribunals, the right to crown the sovereign of the nation, and presiding at the investiture (installation) of archbishops in their sees.
The office is generally found only in older Catholic countries, and is now purely honorific, enjoying no effective powers under canon law—except for the Archbishop of Esztergom (Gran) in Hungary. Thus, e.g., the Primate of Poland holds no jurisdictional authority over other Polish bishops or their dioceses, but is "durante munere" a member of the standing committee of the episcopal conference and has honorary precedence among Polish bishops (e.g., in liturgical ceremonies). The Holy See has also granted Polish primates the privilege of wearing cardinal's crimson attire, except for the skullcap and biretta, even if they have not been made cardinals.
Where the title of primate exists, it may be vested in one of the oldest archdioceses in a country, often based in a city other than the present capital, but which was the capital when the country was first Christianized. The city may no longer have the prominence it had when the title was granted. The political area over which primacy was originally granted may no longer exist: for example, the Archbishop of Toledo was designated "Primate of the Visigothic Kingdom", and the Archbishop of Lyon is the "Primate of the Gauls".
Some of the leadership functions once exercised by Primates, specifically presiding at meetings of the bishops of a nation or region, are now exercised by the president of the conference of bishops: "The president of the Conference or, when he is lawfully impeded, the vice-president, presides not only over the general meetings of the Conference but also over the permanent committee." The president is generally elected by the conference, but by exception the President of the Italian Episcopal Conference is appointed by the Pope, and the Irish Catholic Bishops' Conference has the Primate of All Ireland as President and the Primate of Ireland as Vice-President. Other former functions of primates, such as hearing appeals from metropolitan tribunals, were reserved to the Holy See by the early 20th century. Soon after, by the norm of the Code of Canon Law of 1917, confirmed in the 1983 Code, the tribunal of second instance for appeals from a metropolitan tribunal is "the tribunal which the metropolitan has designated in a stable manner with the approval of the Apostolic See".
As of 1911, the closest equivalent position in the Eastern Churches was that of an exarch.
The Holy See has continued in modern times to grant the title of Primate. With the papal decree "Sollicitae Romanis Pontificibus" of 24 January 1956 it granted the title of Primate of Canada to the Archbishop of Quebec. As stated above, this is merely an honorary title involving no additional power.
A right of precedence over other bishops and similar privileges can be granted even to a bishop who is not a Primate. Thus, in 1858, the Holy See granted the Archbishop of Baltimore precedence in meetings of the United States bishops. The Archbishop of Westminster has not been granted the title of Primate of England and Wales, which is sometimes applied to him, but his position has been described as that of "Chief Metropolitan" and as "similar to" that of the Archbishop of Canterbury.
The title of Primate is sometimes applied loosely to the Archbishop of a country's capital, as in the case of the Archbishops of Seoul in South Korea and of Edinburgh in Scotland. Functions can sometimes be exercised in practice ("de facto"), as by a "de facto" government, without having been granted by law; but since "Primate" is today a title, not a function, there is no such thing as a "de facto" primate.
The pre-reformation Metropolitan Archbishop of Nidaros was sometimes referred to as Primate of Norway, even though it is unlikely that this title ever was officially granted to him by the Holy See.
The heads of certain sees have at times been referred to, at least by themselves, as primates:
"Source"
In the modern confederation of the Benedictine Order, all the Black Monks of St. Benedict were united under the presidency of an Abbot Primate (Leo XIII, "Summum semper", 12 July 1893); but the unification, fraternal in its nature, brought no modification to the abbatial dignity, and the various congregations preserved their autonomy intact. The loose structure of the Benedictine Confederation is claimed to have made Pope Leo XIII exclaim that the Benedictines were "ordo sine ordine" ("an order without order"). The powers of the Abbot Primate are specified, and his position defined, in a decree of the Sacred Congregation of Bishops and Regulars dated 16 September 1893. The primacy is attached to the global Benedictine Confederation, whose Primate resides at Sant'Anselmo in Rome. He takes precedence over all other abbots, is empowered to pronounce on all doubtful matters of discipline, to settle difficulties arising between monasteries, to hold a canonical visitation, if necessary, in any congregation of the order, and to exercise a general supervision for the regular observance of monastic discipline. The Abbot Primate may exercise these primatial powers only insofar as the proper law of the autonomous Benedictine congregations provides for them, which at present is minimally or not at all. However, certain branches of the Benedictine Order seem to have lost their original autonomy to some extent.
In a similar way, the Confederation of Canons Regular of St. Augustine elects an Abbot Primate as figurehead of the Confederation and indeed the whole Canonical Order. The Abbots and Superiors General of the nine confederated congregations of Canons Regular elect a new Abbot Primate for a term of office lasting six years. The current Abbot Primate is Rt Rev. Fr Jean-Michel Girard, CRB, Abbot General of the Canons Regular of the Grand St Bernard.
Anglican usage styles the bishop who heads an independent church as its "primate", though commonly they hold some other title (e.g. archbishop, presiding bishop, or moderator). The primates' authority within their churches varies considerably: some churches give the primate some executive authority, while in others they may do no more than preside over church councils and represent the church ceremonially.
In the context of the Anglican Communion Primates' Meeting, the chief bishop of each of the thirty-nine churches (also known as provinces) that compose the Anglican Communion acts as its primate, though this title may not necessarily be used within their own provinces. Thus the United Churches of Bangladesh, of North India, of Pakistan and of South India, which are united with other originally non-Anglican churches, are represented at the meetings by their moderators.
In both the Church of England and the Church of Ireland, two bishops have the title of primate: the archbishops of Canterbury and York in England and of Armagh and Dublin in Ireland. Only the bishop of the senior primatial see of each of these two churches participates in the meetings.
The Archbishop of Canterbury, who is considered "primus inter pares" of all the participants, convokes the meetings and issues the invitations.
Primates and archbishops are styled "The Most Reverend". All other bishops are styled "The Right Reverend".
The head of the Traditional Anglican Communion's College of Bishops takes the title of Primate.
Penny Arcade
Penny Arcade is a webcomic focused on video games and video game culture, written by Jerry Holkins and illustrated by Mike Krahulik. The comic debuted in 1998 on the website "loonygames.com". Since then, Holkins and Krahulik have established their own site, which is typically updated with a new comic strip each Monday, Wednesday, and Friday. The comics are accompanied by regular updates on the site's blog.
"Penny Arcade" is among the most popular and longest running webcomics currently online, listed in 2010 as having 3.5 million readers. Holkins and Krahulik are among the first webcomic creators successful enough to make a living from their work. In addition to the comic, Holkins and Krahulik also created Child's Play, a children's charity; PAX, a gaming convention; Penny Arcade TV, a YouTube channel; Pinny Arcade, a pin exchange; and the episodic video game "" with Hothead Games and Zeboyd Games.
The strip features Krahulik and Holkins' cartoon alter egos, John "Gabe" Gabriel and Tycho Brahe, respectively. While often borrowing from the authors' experiences, Holkins and Krahulik do not treat them as literal avatars or caricatures of themselves. The two characters spend much of their time playing and commenting on computer and video games, which forms the basis of the humor in the strip. Most of the time, Gabe serves as the comic and Tycho as the comic foil. The strip can feature in-jokes that are explained in the news posts accompanying each comic, written by the authors.
Both Krahulik and Holkins make a living from "Penny Arcade", placing them in a small group of professional webcomic artists devoted to their creations full-time. Originally, like many webcomics, "Penny Arcade" was supported solely by donations. A graph on the main page indicated how much people had donated that month. After hiring Robert Khoo as their business manager, Holkins and Krahulik switched to a different income stream based on advertising and merchandise revenue alone. According to Holkins, the website in 2006 handled more than two million pageviews daily (excluding forum traffic). On November 13, 2005, the website was given a facelift in celebration of their seventh year running and to match the designs of the Child's Play Charity and Penny Arcade Expo websites. Afterwards, the site has been redesigned multiple times.
As a (primarily) topical video gaming news comic, there is little plot or general continuity in "Penny Arcade" strips. Any story sustained for longer than a single strip is referred to as "dreaded continuity", something of a running gag in the newsposts. A character who dies a violent death in one strip will come back in the next, perfectly whole, though occasionally these deaths have an effect on later comics. For example, often, when Gabe kills Tycho or vice versa, the killer takes a certain Pac-Man watch off the dead character, but only if he currently has the watch. Profanity and violence are common in "Penny Arcade" and the strip is known for its surrealism; zombies, a talking alcoholic DIVX player called Div, Santa Claus, a robotic juicer called the "Fruit Fucker 2000", and Jesus, among others, are known to drop in often and for petty reasons. Other such occurrences are implied, if not shown, such as mentioning Dante from "Devil May Cry" living in the building next door. However, the comic does occasionally expand into more serious issues; one even had Krahulik, in the guise of the character Gabe, proposing to his girlfriend of two years, while another had both Gabe and Tycho praising Casey Heynes for standing up to bullying.
Some of the strips are drawn from the perspective of fictional characters within a game or movie. Occasionally, Gabe and Tycho are featured as they would be as characters or players in the game themselves, often having some sarcastic remark to make about some feature or bug in the game. At times the comic also depicts meetings between game developers or business people, and features or mocks the reporters of a news article that is commented on in Holkins' newspost.
"Penny Arcade" has a theme song, "Penny Arcade Theme", written and performed by nerdcore artist MC Frontalot. It was written as a thank-you by Frontalot for the creators of the webcomic linking his website to their front page and declaring him their "rapper laureate" in 2002. The song appears in the dance game "In the Groove".
Mike Krahulik's comic alter ego is energetic and free-spirited, but has a propensity to become extremely angry. As a contrast to Tycho's expansive vocabulary, Gabe usually speaks using only simple, common words.
He almost always wears a yellow Pac-Man shirt, and has a Pac-Man tattoo on his right arm. His eyes are a shade of slate blue.
He has a fascination with unicorns, a secret love of Barbies, is a dedicated fan of Spider-Man and "Star Wars", and has proclaimed "Jessie's Girl" to be the greatest song of all time. He has a wife and son.
Gabe is a diabetic, though he continues to consume large quantities of sugar products.
Krahulik named his son "Gabriel" in honor of the character.
Jerry Holkins' comic alter ego (named after the astronomer Tycho Brahe) is bitter and sarcastic. His eyes are burnt sienna, and he's almost invariably clad in a blue-striped sweater. Tycho enjoys books, role-playing video games, using large and uncommon words in conversation, and deflating Gabe's ego. He is an enthusiastic fan of "Harry Potter" and "Doctor Who". He also plays "Dungeons & Dragons" often (the website's previous banner illustrated him holding a 20-sided die), and adopts a wildly theatrical style when acting as a dungeon master.
Tycho occasionally makes reference to his scarring childhood, during which his mother physically abused him. Tycho also has a drinking problem.
In "Poker Night at the Inventory", Tycho is voiced by Kid Beyond.
Krahulik and Holkins began to record and release audio content on March 20, 2006, titled "Downloadable Content." The podcasts specifically captured the creative process that goes into the creation of a "Penny Arcade" comic, usually starting with a perusal of recent gaming news, with conversational tangents and digressions to follow. As well as being a behind-the-scenes look at the creation of "Penny Arcade", Krahulik and Holkins discuss possible subjects for the comic.
The format of the show was mostly "fly-on-the-wall" style, in that the hosts rarely acknowledged the existence of the microphone. There was no theme music, intro, or outro. The podcasts were of varying lengths, beginning abruptly and ending with the idea for the current comic. New episodes were released irregularly, with six month gaps not uncommon.
Although the shows were initially published weekly, Holkins stated in a May 2006 blog post that they have found difficulties when trying to produce the podcasts on a regular basis. The duo planned to keep recording podcasts occasionally.
Since airing the first episode of the new PATV in February 2010, the podcast has not been updated. A new segment has since appeared on PATV called "The Fourth Panel," which presents a fly-on-the-wall look at comics creation much as the podcast did.
On May 8, 2013, Penny Arcade launched a Kickstarter campaign to fund the continuation of "Downloadable Content". The Kickstarter was successful, with new podcasts being added each Wednesday.
"" is an episodic video game based on the strip. The first two episodes were developed by Hothead Games, and were built on a version of the Torque Game Engine. The first episode was released worldwide on May 21, 2008, and the second on October 29, 2008. They were self-published via the PlayStation Network and Xbox Live as well as the PlayGreenhouse.com service created by "Penny Arcade" to distribute independent games. The game features many elements of the "Penny Arcade" universe in a 1920s steampunk setting. In 2010, Krahulik and Holkins announced that the remainder of the series had been cancelled, to allow Hothead to focus on other projects. At PAX Prime 2011, however, it was announced that the series would be revived and developed by Zeboyd Games, with a retro style similar to Zeboyd's past titles. The third episode was released on Steam and on Penny Arcade's web store June 25, 2012. The fourth and final episode was announced in January 2013, and released to Steam and Xbox Live in June 2013.
A teaser trailer released by Telltale Games on August 28, 2010, revealed that Tycho would appear in an upcoming game alongside "Team Fortress 2's" Heavy, Strong Bad and Max. The game, called "Poker Night at the Inventory", was officially revealed on September 2, 2010.
"The Last Christmas" and "The Hawk and the Hare", two stories that were published on the site, were released as motion comics for iOS developed by SRRN Games.
The North American release of "Tekken 6" has a skin for Yoshimitsu based on the Cardboard Tube Samurai. An official DLC skin pack was released for Dungeon Defenders featuring Tycho, Cardboard Tube Samurai Gabe, Annarchy and Jim Darkmagic skins.
Cryptozoic Entertainment released the licensed deck-building card game "Penny Arcade The Game: Gamers Vs. Evil" in 2011, and followed it with the expansion pack "Penny Arcade The Game: Rumble in R'lyeh" in 2012. Playdek released a digital conversion of "Penny Arcade The Game: Gamers Vs. Evil" for iOS in 2012.
"Penny Arcade: The Series" first aired online on February 20, 2010. It is a multi-season documentary series based on the exploits of the Penny Arcade company and its founders Krahulik and Holkins.
Under the banner of "Penny Arcade Presents", Krahulik and Holkins are sometimes commissioned to create promotional artwork/comic strips for new video games, with their signature artistic style and humor. They are usually credited simply as "Penny Arcade" rather than by their actual names. Some of these works have been included with the distribution of the game, and others have appeared on pre-launch official websites. An official list can be found on the Penny Arcade website.
On August 8, 2005, Krahulik announced that "Penny Arcade", in partnership with Sabertooth Games, would be producing a collectible card game based on the "Penny Arcade" franchise. The resulting "Penny Arcade" "battle box" was released in February 2006 as part of the Universal Fighting System.
There are also a few spinoffs from the main comic that have gained independent existences. An example is "Epic Legends of the Hierarchs: The Elemenstor Saga" (ELotH:TES), a parody of the written-by-committee fantasy fiction used as back-story for a wide variety of games: originally a one-off gag in the "Penny Arcade" comic, in late 2005 this was expanded into a complete fantasy universe, documented on a hoax "fan-wiki". ELotH:TES first appeared in the webcomic of February 7, 2005, and has subsequently been featured in the comics of November 7, 2005 and November 30, 2005. Several elements of the ELotH:TES universe are featured on the cover of their second comics collection, "Epic Legends of the Magic Sword Kings".
On May 31, 2006 Krahulik announced a new advertising campaign for the Entertainment Software Rating Board. According to Krahulik, the ESRB "wanted a campaign that would communicate to gamers why the ESRB is important even if they don't think it directly affects them." Among the reasons he listed for "Penny Arcade"'s accepting the job was that he and Holkins are both fathers and are concerned about the games their children might play. The ad campaign was rolled out in the summer and fall of 2006 and a second campaign was released in 2012 featuring a mother, a father and a gamer describing the tools employed by the ESRB.
Announced on June 2, 2011, Paramount Pictures had acquired the rights, via Paramount Animation, to produce an animated film of the one-off strip "The New Kid", which was published on October 29, 2010. The strip was one of three mini-strips which featured a cinematic opening to a larger story left unexplored. "The New Kid" is about a boy who is moving to a new planet with his family because of his father's career. The script was written by Gary Whitta and would have been produced by Mary Parent and Cale Boyter.
At PAX Australia in 2016, during a Q&A session, Holkins revealed that changes at Paramount resulted in the movie rights being returned to Penny Arcade and the project canceled. He did note, however, that Whitta's script was complete and the project could move forward with another production company in the future.
The Trenches was a comic series by Krahulik and Holkins in collaboration with webcomic "PvP"'s creator Scott Kurtz. The comic followed a man named Isaac and his life as a game tester. The series was launched on August 9, 2011 and featured new strips every Tuesday and Thursday, usually accompanied by a "Tale from the Trenches", a short piece submitted by a reader detailing their own experiences in the game industry.
In September 2012, Kurtz stopped illustrating the webcomic, due to lack of time, and was replaced by Mary Cagle, a former intern of his, and the creator of the webcomic Kiwi Blitz. Kurtz still continued to collaborate with Krahulik and Holkins in writing the comic. In late August 2013, illustration was taken over by Ty Halley ("Secret Life of a Journal Writer") and Monica Ray ("Phuzzy Comics"), former contestants of the Penny Arcade series "Strip Search".
"The Trenches" was ultimately abandoned. The last comic was posted January 5, 2016, while the last "Tales" is from September 10, 2015.
Krahulik and Holkins have also released an application for iOS devices called "The Decide-o-tron", presented by Eedar and developed by The Binary Mill. The app works as a recommendation engine for video games; users input games they've enjoyed and the app attempts to predict their ratings of titles they haven't yet played. Holkins described it as "Pandora for games".
Penny Arcade has created two Kickstarter projects. The first was the "Penny Arcade's Paint the Line" card game which was used as an alternative to pre-ordering it and came with an exclusive comic. The second was entitled "Penny Arcade Sells Out" and was intended to replace advertising revenue with crowd funding. The leaderboard ad on the home page of Penny Arcade would be removed if the minimum goal of $250,000 were reached, whereas the entire site would become completely ad-free for a year at $999,999. The reality web series described as "our version of America's Next Top Webcomic" titled "Strip Search" arose from the $450,000 stretch goal.
Krahulik and Holkins created a comic strip comparing the seventh-generation consoles, which appeared in the December 2006 issue of "Wired" magazine.
Every Christmas since 2003, "Penny Arcade" has hosted a charity called Child's Play to buy new toys for children's hospitals. They have also sponsored a three-day gaming festival called the Penny Arcade Expo every August since 2004.
Krahulik and Holkins received a cease-and-desist letter from American Greetings Corporation over the use of American Greetings' Strawberry Shortcake and Plum Puddin' characters in the April 14, 2003 "Penny Arcade" strip entitled "Tart as a Double Entendre". The strip was intended as a parody of the works of both American McGee (especially the computer game "Alice") and McFarlane Toys. At the time, McFarlane toys and American McGee made separate toy lines, each portraying a dark, frightening interpretation of the characters and situations from "The Wonderful Wizard of Oz". Krahulik and Holkins' portrayal of Strawberry Shortcake parodied McFarlane Toys' depiction of Dorothy as bound and blindfolded by a pair of munchkins.
Krahulik and Holkins chose not to enter into a legal battle over whether or not the strip was a protected form of parody, and they complied with the cease-and-desist by replacing it with an image directing their audience to send a letter to a lawyer for American Greetings. They later lampooned the incident by portraying an American Greetings employee as a Nazi.
On June 15, 2011, a strip entitled "Reprise" revisited the issue, due to the release of "Alice: Madness Returns", another American McGee game. In the strip, Gabe suggests that he and Tycho parody a brand not "under constant surveillance", resulting in a spoof of the "Rainbow Brite" franchise. Holkins stated in the accompanying news post that "it seemed like an incredible opportunity to relive the days of yore."
On October 17, 2005 Krahulik and Holkins donated US$10,000 to the Entertainment Software Association foundation in the name of Jack Thompson, an activist against violence in video games. Earlier, Thompson himself had promised to donate $10,000 if a video game was created in which the player kills video game developers ("A Modest Video Game Proposal"), but after a mod to the game "Grand Theft Auto" was pointed out to already exist, Thompson called his challenge satire (referring to the title of the letter as a reference to "A Modest Proposal") and refused to donate the money. He claimed these games were not going to be manufactured, distributed, or sold like retail games, as his Modest Proposal stated, and therefore, the deal went unfulfilled. His refusal was met with disdain, given that multiple games were created or in the process of being created under Thompson's criteria. Krahulik and Holkins donated the money in his place, with a check containing the memo: "For Jack Thompson, Because Jack Thompson Won't".
Thompson proceeded to phone Krahulik, as related by Holkins in the corresponding news post.
On October 18, 2005 it was reported that Jack Thompson had faxed a letter to Seattle Police Chief Gil Kerlikowske claiming that "Penny Arcade" "employs certain personnel who have decided to commence and orchestrate criminal harassment of me by various means". Holkins defended the site by saying that the "harassment" Thompson referred to was simply "the natural result of a public figure making statements that people disagree with, and letting him know their thoughts on the matter via his publicly available contact information".
On October 21, 2005 Thompson claimed to have sent a letter to John McKay, U.S. Attorney for the Western District of Washington, in an attempt to get the FBI involved. Thompson re-iterated his claims of "extortion" and accused "Penny Arcade" of using "their Internet site and various other means to encourage and solicit criminal harassment". Penny Arcade denied the charge of "extortion", noting that they paid the $10,000 to charity, and asked nothing in return.
Thompson claimed the harassment of him was a direct result of Mike Krahulik's posts, which included links to the Florida Bar Association. Thompson accused "Penny Arcade" of soliciting complaints to the Bar against him, even though Krahulik had actually posted the opposite, asking fans to stop sending letters to the Bar, as the Bar had acknowledged that it was aware of Thompson's actions thanks to previous letters.
The Seattle PD eventually acknowledged receiving a complaint from Thompson, but commented that they believed the issue to be a civil, rather than criminal, matter. They noted that this was their initial impression of the letter, and that their criminal investigations bureau was reviewing it to make sure no criminal matters had been missed.
On the same day, Scott Kurtz, creator of the webcomic "PvP" and a longtime friend of Krahulik and Holkins, used the image of the letter Thompson sent to the Seattle PD to create a parody letter in which Jack attempts to enlist the aid of the Justice League of America by claiming Gabe and Tycho to be villains of some description.
The "Penny Arcade" shop had at the time sold an "I hate Jack Thompson" T-shirt, claiming that every living creature, including Thompson's own mother, hates Jack Thompson.
On March 21, 2007 Thompson filed a countersuit to the lawsuit brought against him by Take-Two Interactive, claiming that Take-Two was at the center of a RICO conspiracy. "Penny Arcade" was named as one of the co-conspirators. At Sakura-Con 2007, Krahulik announced that the suit had been dropped.
An August 11, 2010 comic entitled "The Sixth Slave", in which an NPC pleads with a player to save him from being raped nightly by monsters called "dickwolves", drew criticism from many commentators, including "The American Prospect" and "The Boston Phoenix". Krahulik and Holkins dismissed these criticisms, later selling "Team Dickwolves" T-shirts based on the strip. They subsequently removed the "Team Dickwolves" shirt from their store due to complaints that it made potential PAX attendees uncomfortable. After the removal, Krahulik posted online that removing the shirts was only partly a response to public pressure, and mainly a response to people who had emailed him personally and raised their concerns reasonably. Krahulik also stated that anyone still hesitant about going to PAX even after the removal of the shirts should not come to PAX. In September 2013, on the last day of PAX, Krahulik told a panel that he thought that "pulling the dickwolves merchandise was a mistake", to cheers from the crowd. However, Krahulik later apologized on the "Penny Arcade" website, stating that he regretted contributing to the furor that had followed the original comic.
Both critics of the strip and Krahulik and Holkins reported receiving verbal abuse and death threats through social media.
"John Gabriel's Greater Internet Fuckwad Theory" was posted in the "Penny Arcade" strip published March 19, 2004. It regards the online disinhibition effect, in which Internet users exhibit unsociable tendencies while interacting with other Internet users. Krahulik and Holkins suggest that, given both anonymity and an audience, an otherwise regular person becomes aggressively antisocial. In 2013, Holkins gave the corollary that "Normal Person - "Consequences" + Audience = Total Fuckwad".
Clay Shirky, an adjunct professor at New York University who studies the social and economic effects of Internet technologies, explains: "There's a large crowd and you can act out in front of it without paying any personal price to your reputation," which "creates conditions most likely to draw out the typical Internet user's worst impulses." In an "Advocate" article about online homophobia, this theory was used to account for behavior on online forums where one can remain anonymous in front of an audience: for instance, posting comments on popular YouTube videos.
On December 13, 2006, "Next Generation Magazine" rated Krahulik and Holkins among its "Top 25 People of the Year". Also appearing on the list were Nintendo of America President Reggie Fils-Aime and former Xbox corporate vice-president Peter Moore. Krahulik made a post about the honor, in which he explained that "Penny Arcade" was created only because Next Gen rejected the duo's entry to a comic contest many years before. "Entertainment Weekly" listed "Penny Arcade" on their "100 Sites to Bookmark Now," calling it "a hilarious and smart webcomic for gamers." MTV Online named Holkins and Krahulik two of the world's most influential gamers, saying "they have become the closest the medium has to leaders of a gamers' movement." Time.com named "Penny Arcade" as one of its "50 Best Websites" for 2008 "...for the way it pokes fun at the high-tech industry and the people who love it."
1UP.com described it as "the One True Gaming Webcomic." "Penny Arcade" was used along with "American Elf", "Fetus-X", and "Questionable Content" as an example of comics using the web to create "an explosion of diverse genres and styles" in Scott McCloud's 2006 book "Making Comics".
On March 5, 2009, the Washington State Senate honored Holkins and Krahulik, both originally from Spokane, for the contribution that they had made to the state, the video game industry, and to children's charities from around the world courtesy of their Child's Play initiative. Later in March, "Penny Arcade" won the category "Best Webcomic" in the fan voted Project Fanboy Awards for 2008.
In 2010, Holkins, Krahulik, and Khoo were awarded the annual "Ambassador Award" at GDC's Game Developers Choice Awards for contributions they had made to the industry. The same year, "Time" included Holkins and Krahulik in the annual "Time 100", the magazine's listing of the world's 100 most influential people.
In July 2015, Holkins and Krahulik were recognized as "Multimedia Empire Builders" in Ad Week's 10 Visual Artists Changing the Way We See Advertising issue.
Permanent Way Institution
The Permanent Way Institution is a technical institution that aims to provide technical knowledge, advice and support to all those engaged in rail infrastructure systems worldwide.
"Permanent way" describes the course of a railway line, including the components that form the track, the aggregate that supports it, and the civil engineering assets covering bridges, tunnels, viaducts and earthworks.
The Permanent Way Institution is divided into a number of sections throughout the United Kingdom and also has sections located internationally.
Membership is open to anyone who is actively involved in the rail industry, retired from it, or simply has a general interest in rail infrastructure engineering.
Home Sections are:
Ashford,
Croydon & Brighton,
Glasgow,
London,
North Wales,
Wessex,
Birmingham,
Darlington & NE,
Manchester & Liverpool,
Nottingham & Derby,
South & West Wales,
West Yorkshire,
Bristol & West of England,
Edinburgh,
Lancaster, Barrow & Carlisle,
Milton Keynes,
Sheffield & Doncaster,
Thames Valley,
York
President of Ireland
The president of Ireland (Uachtarán na hÉireann) is the head of state of Ireland and the supreme commander of the Irish Defence Forces.
The president holds office for seven years, and can be elected for a maximum of two terms. The president is directly elected by the people, although there is no poll if only one candidate is nominated, which has occurred on six occasions to date. The presidency is largely a ceremonial office, but the president does exercise certain limited powers with absolute discretion. The president acts as a representative of the Irish state and guardian of the constitution. The president's official residence is in Phoenix Park, Dublin. The office was established by the Constitution of Ireland in 1937, and the first president took office in 1938; the office became internationally recognised as that of a head of state in 1949, following the coming into force of the Republic of Ireland Act.
The current president is Michael D. Higgins, who was first elected on 29 October 2011. His inauguration was held on 11 November 2011. He was re-elected for a second term on 26 October 2018.
The Constitution of Ireland provides for a parliamentary system of government, under which the role of the head of state is largely a ceremonial one. The president is formally one of three parts of the Oireachtas (national parliament), which also comprises Dáil Éireann (the house of representatives or lower house) and Seanad Éireann (the Senate or upper house).
Unlike most parliamentary republics, the president is not even the "nominal" chief executive. Rather, executive authority in Ireland is expressly vested in the government (cabinet). The government is obliged, however, to keep the president generally informed on matters of domestic and foreign policy. Most of the functions of the president may be carried out only in accordance with the strict instructions of the Constitution, or the binding "advice" of the government. The president does, however, possess certain personal powers that may be exercised at his or her discretion.
The main functions of the president are prescribed by the Constitution; other functions are specified by statute or otherwise.
The president possesses the following powers exercised "in his absolute discretion" according to the English version of the Constitution. The Irish version states that these powers are exercised "as a chomhairle féin" which is usually translated as "under his own counsel." Lawyers have suggested that a conflict may exist in this case between both versions of the constitution. In the event of a clash between the Irish and English versions of the constitution, the Irish one is given supremacy. While "absolute discretion" appears to leave some freedom for manoeuvre for a president in deciding whether to initiate contact with the opposition, "own counsel" has been interpreted by some lawyers as suggesting that "no" contact whatsoever can take place. As a result, it is considered controversial for the president to be contacted by the leaders of any political parties in an effort to influence a decision made using the discretionary powers. It is required that, before exercising certain reserve powers, the president consult the Council of State. However, the president is not compelled to act in accordance with the council's advice.
The Taoiseach is required to resign if he has "ceased to retain the support of a majority in Dáil Éireann," unless he asks the president to dissolve the Dáil. The president has the right to refuse such a request, in which case the Taoiseach must resign immediately. This power has never been invoked. However, the necessary circumstances existed in 1944, 1982 and 1994. The apparent discrepancy, referred to above, between the Irish and English versions of the Constitution has discouraged presidents from contemplating the use of the power. On the three occasions when the necessary circumstances existed, presidents adopted an ultra-strict policy of non-contact with the opposition. The most notable instance was in January 1982, when Patrick Hillery instructed an aide, Captain Anthony Barber, to ensure that no telephone calls from the opposition were passed on to him. Nevertheless, three opposition figures, including Fianna Fáil leader Charles Haughey, demanded to be connected to Hillery, with Haughey threatening to end Barber's career if the calls were not put through. Hillery, as supreme commander of the Defence Forces, recorded the threat in Barber's military personnel file, noting that Barber had been acting on his instructions in refusing the calls. Even without this consideration, refusing such a request would arguably create a constitutional crisis, as it is considered a fairly strong constitutional convention that the head of state always grants a parliamentary dissolution.
If requested to do so by a petition signed by a majority of the membership of the Seanad and one-third of the membership of the Dáil, the president may, after consultation with the Council of State, decline to sign into law a bill (other than a bill to amend the constitution) they consider to be of great "national importance" until it has been approved either by the people in a referendum or by the Dáil reassembling after a general election held within eighteen months. This power has never been used, and no such petition has ever been presented. Of the 60 senators, 11 are nominated by the Taoiseach, so there is rarely a majority opposed to a government bill.
The president may appoint up to seven members of the Council of State, and remove or replace such appointed members. (See list of presidential appointees to the Council of State.) A number of further powers may be exercised only after prior consultation with the Council of State, although the president is not bound to follow its advice.
The president is directly elected by secret ballot using instant-runoff voting, the single-winner analogue of the single transferable vote. Under the Presidential Elections Act, 1993, a candidate's election formally takes place in the form of a 'declaration' by the returning officer. Where more than one candidate is nominated, the election is 'adjourned' so that a ballot can take place, allowing the electors to choose between candidates. A presidential election is held in time for the winner to take office the day after the end of the incumbent's seven-year term. In the event of a premature vacancy, an election must be held within sixty days.
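To illustrate the counting method rather than the official rules, here is a minimal instant-runoff sketch in Python. The ballot structure, candidate names and tie-breaking are assumptions made for the example; the statutory count includes details (such as the formal declaration and recount provisions) omitted here.

from collections import Counter

def instant_runoff(ballots):
    # Each ballot is a list of candidate names, highest preference first.
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Credit each ballot to its highest-ranked remaining candidate.
        tally = Counter({c: 0 for c in candidates})
        for ballot in ballots:
            for choice in ballot:
                if choice in candidates:
                    tally[choice] += 1
                    break
        leader, votes = tally.most_common(1)[0]
        if 2 * votes > sum(tally.values()) or len(candidates) == 1:
            return leader
        # No majority yet: eliminate the weakest candidate and recount.
        # (Ties for last place are broken arbitrarily in this sketch.)
        candidates.remove(min(tally, key=tally.get))

For example, instant_runoff([["A", "B"], ["B", "A"], ["A"], ["C", "A"]]) returns "A" once the transfer from the eliminated candidate produces a majority.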
Only resident Irish citizens aged eighteen or more may vote; a 1983 bill to extend the right to resident British citizens was ruled unconstitutional.
Candidates must be Irish citizens and over 35 years old. However, there is a discrepancy between the English- and Irish-language texts of Article 12.4.1º. According to the English text, an eligible candidate "has reached his thirty-fifth year of age", whereas the Irish text has this as "ag a bhfuil cúig bliana tríochad slán" ("has completed his thirty-five years"). Because a person's thirty-fifth year of life begins on their thirty-fourth birthday, there is a year's difference between the minimum ages as stated in the two texts. Various proposals have been made to amend the Constitution so as to eliminate this discrepancy. At present, however, the Irish version of the subsection prevails, in accordance with the rule stated in Article 25.5.4º. The 29th government introduced the Thirty-fifth Amendment of the Constitution (Age of Eligibility for Election to the Office of President) Bill 2015 to reduce the age of candidacy from 35 to 21. It was put to a referendum in May 2015 and heavily defeated, with approximately 73% of voters voting against reducing the age of eligibility.
Presidents can serve a maximum of two terms, consecutive or otherwise. They must be nominated by one of the following: at least twenty members of the Oireachtas; at least four county or city councils; or, in the case of former or retiring presidents, themselves.
Where only one candidate is nominated, he or she is deemed elected without the need for a ballot. For this reason, where there is a consensus among political parties not to have a contest, the president may be 'elected' without the occurrence of an actual ballot. Since the establishment of the office this has occurred on six occasions.
The most recent presidential election was held on 26 October 2018.
There is no office of vice president of Ireland. In the event of a premature vacancy a successor must be elected within sixty days. In a vacancy or where the president is unavailable, the duties and functions of the office are carried out by a presidential commission, consisting of the chief justice, the ceann comhairle (speaker) of the Dáil, and the cathaoirleach (chairperson) of the Seanad. Routine functions, such as signing bills into law, have often been fulfilled by the presidential commission when the president is abroad on a state visit. Because the president may not leave the state without the government's consent, the diplomatic and legislative calendars can be aligned in this way.
Technically, each president's term of office expires at midnight on the day before the new president's inauguration; between midnight and the inauguration the following day, the presidential duties and functions are carried out by the presidential commission. The constitution also empowers the Council of State, acting by a majority of its members, to "make such provision as to them may seem meet" for the exercise of the duties of the president in any contingency the constitution does not foresee; to date, however, it has never been necessary for the council to take up this role. Though an outgoing president who has been re-elected is usually described in the media as "president" before taking the Declaration of Office, this is technically incorrect: during the interregnum the outgoing president is a former president and, if re-elected, president-elect, while the presidential commission acts as president. Given that the interregnum usually lasts less than eleven hours, no presidential commission has ever been called on to do anything in that period.
Vacancies in the presidency have occurred three times: on the death of Erskine Hamilton Childers in 1974, and on the resignations of Cearbhall Ó Dálaigh in 1976 and Mary Robinson in 1997.
The official residence of the president is Áras an Uachtaráin, located in the Phoenix Park in Dublin. The ninety-two-room building formerly served as the 'out-of-season' residence of the Irish Lord Lieutenant and the residence of two of the three Irish governors-general: Tim Healy and James McNeill. The president is normally referred to as 'President' or 'Uachtarán', rather than 'Mr/Madam President' or similar forms. The style used is normally 'His Excellency/Her Excellency'; sometimes people may orally address the president as 'Your Excellency', or simply as 'President' (in the vocative case). The Presidential Salute is taken from the national anthem, "Amhrán na bhFiann". It consists of the first four bars followed by the last five, without lyrics.
The inauguration ceremony takes place on the day following the expiry of the term of office of the preceding president. No location is specified in the constitution, but all inaugurations have taken place in Saint Patrick's Hall in the State Apartments in Dublin Castle. The ceremony is transmitted live by national broadcaster RTÉ on its principal television and radio channels, typically from around 11 am. To highlight the significance of the event, all key figures in the executive (the government of Ireland), the legislature (Oireachtas) and the judiciary attend, as do members of the diplomatic corps and other invited guests.
During the period of the Irish Free State (1922 to 1937), the governor-general had been installed into office as the representative of the Crown in a low-key ceremony, twice in Leinster House (the seat of the Oireachtas), but in the case of the last governor-general, Domhnall Ua Buachalla, in his brother's drawing room. By contrast, the Constitution of Ireland adopted in 1937 requires the president's oath of office be taken in public.
Under the Constitution, in assuming office the president must subscribe to a formal declaration, made publicly and in the presence of members of both Houses of the Oireachtas, judges of the Supreme Court and the High Court, and other "public personages". The inauguration of the president takes place in St Patrick's Hall in Dublin Castle. The text of the declaration is specified in Article 12.8.
To date every president has subscribed to the declaration in Irish. Erskine H. Childers, who never learnt Irish and spoke with a distinctive Oxbridge accent that made pronouncing Irish quite difficult, opted with some reluctance for the Irish version in 1973. Pictures of the event show Childers reading from an exceptionally large board where it had been written down phonetically for him. At his second inauguration in 2018, Michael D. Higgins first made the declaration in Irish, then repeated it in English.
In 1993 the United Nations Human Rights Committee expressed concern that, because of its religious language, the declaration amounts to a religious test for office. The Oireachtas Committee in 1998 recommended that the religious references be made optional.
Having taken the Declaration of Office, the new president traditionally delivers an address to the guests. Constitutionally, all addresses or messages to 'the Nation' or to 'the Oireachtas' are supposed to have prior government approval. Some lawyers have questioned whether the speech at the inauguration falls into the category requiring government approval. However, as the new president is only president for a matter of moments before delivering the speech, obtaining approval is impractical, and any constitutional questions as to the speech's status are ignored.
Inauguration Day involves a great deal of ritual and ceremony. Until 1983, the morning saw the president-elect, accompanied by his spouse, escorted by the Presidential Motorcycle Escort to one of Dublin's cathedrals. If they were Catholic, they were brought to St Mary's Pro-Cathedral for a Pontifical High Mass. If they were Church of Ireland, they were brought to St Patrick's Cathedral for a Divine Service. In the 1970s, instead of separate denominational ceremonies, a single ecumenical multi-faith service was held in the cathedral of the faith of the president-elect. Some additional religious ceremonies also featured: president-elect Cearbhall Ó Dálaigh attended a prayer ceremony in a synagogue in Dublin to reflect his longstanding relationship with the Jewish community in Ireland.
In 1983, to reduce the costs of the day in a period of economic retrenchment, the separate religious blessing ceremony was incorporated into the inauguration ceremony itself, with the president-elect blessed by representatives of the Roman Catholic Church, the Church of Ireland, the Presbyterian Church, Methodism, the Society of Friends, and the Jewish and Islamic faiths. This inter-faith service has featured in the inaugurations since 1983. Since 2011, a representative from the Humanist Association of Ireland, representing humanism and the non-religious population of Ireland, has appeared alongside ministers of a religion.
For the first inauguration in 1938 President-elect Douglas Hyde wore a morning suit, with black silk top hat. Morning suits continued to be a standard feature of Irish presidential inaugurations until 1997 when Mary McAleese, whose husband disliked wearing formal suits, abolished their use for inaugurations (and for all other presidential ceremonial). From then, guests were required to wear plain business suits, and judges were prohibited from wearing their distinctive wigs and gowns. Ambassadors were also discouraged from wearing national dress.
The president-elect (unless they are already a serving president, in which case they will already be living in the presidential residence) is usually driven to the inauguration from their private home. After the ceremony they are driven through the streets of Dublin to Áras an Uachtaráin, the official presidential residence, where they are welcomed by the secretary-general to the president, the head of the presidential secretariat.
That evening, the Irish government hosts a reception in their honour in the State Apartments (the former Royal Apartments) in Dublin Castle. Whereas the dress code was formerly white tie, it is now more usually black tie.
The president can be removed from office in two ways, neither of which has ever been invoked. The Supreme Court, in a sitting of at least five judges, may find the president "permanently incapacitated", while the Oireachtas may remove the president for "stated misbehaviour". Either house of the Oireachtas may instigate the latter process by passing an impeachment resolution, provided at least thirty members move it and at least two-thirds support it. The other house will then either investigate the stated charges or commission a body to do so; following which at least two-thirds of members must agree both that the president is guilty and that the charges warrant removal.
As head of state of Ireland, the president receives the highest level of protection in the state. Áras an Uachtaráin is protected by armed guards from the Garda Síochána and Defence Forces at all times, and is encircled by security fencing and intrusion detection systems. At all times the president travels with an armed security detail, in Ireland and overseas, provided by the Special Detective Unit (SDU), an elite wing of the Irish police force. Protection is increased if there is a known threat. The presidential limousine is a dark navy blue Mercedes-Benz S-Class LWB, which carries the presidential standard on the left front wing and the tricolour on the right front wing. When travelling, the presidential limousine is always accompanied by support cars (normally BMW 5 Series, Audi A6 and Volvo S60 models driven by trained drivers from the SDU) and several Garda motorcycle outriders from the Garda Traffic Corps, which form a protective convoy around the car.
The president-elect is usually escorted to and from the ceremony by the ceremonial outriders of the Presidential Motorcycle Escort. Until 1947 they were a mounted cavalry escort, wearing light blue hussar-style uniforms. However, to save money, the first Inter-Party Government replaced the Irish horses with Japanese motorbikes, which the then Minister for Defence believed would be "much more impressive."
At the presidential inauguration in 1945, alongside the mounted escort on horseback, president-elect Seán T. O'Kelly rode in the old state landau of Queen Alexandra, the queen mother. The use of the state carriage was highly popular with crowds. However, an accident with a later presidential carriage at the Royal Dublin Society Horse Show led to the abolition of the carriage and its replacement by a Rolls-Royce Silver Wraith in 1947. The distinctive 1947 Rolls-Royce is still used to bring the president to and from the inauguration today.
The Presidential State Car is a 1947 Rolls-Royce Silver Wraith landaulette, which is used only for ceremonial occasions.
The president also has the full use of all Irish Air Corps aircraft, including helicopters and private jets, if needed.
The office of president was established in 1937, in part as a replacement for the office of governor-general that existed during the 1922–37 Irish Free State. The seven-year term of office of the president was inspired by that of the presidents of Weimar Germany. At the time the office was established critics warned that the post might lead to the emergence of a dictatorship. However, these fears were not borne out as successive presidents played a limited, largely apolitical role in national affairs.
During the period of 1937 to 1949 it was unclear whether the Irish head of state was actually the president of Ireland or George VI, the king of Ireland. This period of confusion ended in 1949 when the state was declared to be a republic. The 1937 constitution did not mention the king, but neither did it state that the president was head of state, saying rather that the president "shall take precedence over all other persons in the State". The president exercised some powers that could be exercised by heads of state but which could also be exercised by governors or governors-general, such as appointing the government and promulgating the law.
However, in 1936, George VI had been declared "King of Ireland" and, under the External Relations Act of the same year, it was this king who represented the state in its foreign affairs. Treaties, therefore, were signed in the name of the King of Ireland, who also accredited ambassadors and received the letters of credence of foreign diplomats. This role meant, in any case, that George VI was the Irish head of state in the eyes of foreign nations. The Republic of Ireland Act 1948, which came into force in April 1949, proclaimed a republic and transferred the role of representing the state abroad from the monarch to the president. No change was made to the constitution.
After the inaugural presidency of Douglas Hyde, who was an interparty nominee for the office, the nominees of the Fianna Fáil political party won every presidential election until 1990. The party traditionally used the nomination as a reward for its most senior and prominent members, such as party founder and longtime Taoiseach Éamon de Valera and European Commissioner Patrick Hillery. Most of its occupants to that time followed Hyde's precedent-setting conception of the presidency as a conservative, low-key institution that used its ceremonial prestige and few discretionary powers sparingly. In fact, the presidency was such a quiet position that Irish politicians sought to avoid contested presidential elections as often as possible, feeling that the attention such elections would bring to the office was an unnecessary distraction, and in periods of economic austerity politicians would often suggest the elimination of the office as a money-saving measure.
Despite the historical meekness of the presidency, however, it has been at the centre of some high-profile controversies. In particular, the fifth president, Cearbhall Ó Dálaigh, faced a contentious dispute with the government in 1976 over the signing of a bill declaring a state of emergency, which ended in Ó Dálaigh's resignation. His successor, Patrick Hillery, was also involved in a controversy in 1982, when then-Taoiseach Garret FitzGerald requested a dissolution of Dáil Éireann. Hillery was bombarded with phone calls from opposition members urging him to refuse the request, an action that Hillery saw as highly inappropriate interference with the president's constitutional role, and he resisted the political pressure.
The presidency began to be transformed in the 1990s. Hillery's conduct regarding the dissolution affair in 1982 came to light in 1990, imbuing the office with a new sense of dignity and stability. However, it was Hillery's successor, seventh president Mary Robinson, who ultimately revolutionised the presidency. The winner of an upset victory in the highly controversial election of 1990, Robinson was the Labour nominee, the first president to defeat Fianna Fáil in an election and the first female president. Upon election, however, Robinson took steps to de-politicise the office. She also sought to widen the scope of the presidency, developing new economic, political and cultural links between the state and other countries and cultures, especially those of the Irish diaspora. Robinson used the prestige of the office to activist ends, placing emphasis during her presidency on the needs of developing countries, linking the history of the Great Irish Famine to today's nutrition, poverty and policy issues, and attempting to create a bridge of partnership between developed and developing countries.
Following the 2018 presidential election, the official salary or "personal remuneration" of the president is €249,014. The incumbent, Michael D. Higgins, has chosen to receive that salary, although he is entitled to a higher figure of €325,507. The president's total "emoluments and allowances" include an additional €317,434 for expenses. The Office of the President's total budget estimate for 2017 was €3.9 million, of which €2.6 million was for pay and running costs, with the balance for the "President's Bounty" paid to centenarians on their hundredth birthday.
The salary was fixed at IR£5,000 from 1938 to 1973, since when it has been calculated as 10% greater than that of the Chief Justice. After the post-2008 Irish economic downturn most public-sector workers took significant pay cuts, but the Constitution prohibited a reduction in the salary of the president and the judiciary during their terms of office, in order to prevent such a reduction being used by the government to apply political pressure on them. While a 2011 constitutional amendment allowed judges' pay to be cut, it did not extend to the president, although the incumbent, Mary McAleese, offered to take a voluntary cut in solidarity.
The text of the Constitution of Ireland, as originally enacted in 1937, made reference in its Articles 2 and 3 to two geopolitical entities: a thirty-two county 'national territory' (i.e., the island of Ireland), and a twenty-six county 'state' formerly known as the Irish Free State. The implication behind the title 'president of Ireland' was that the president would function as the head of all Ireland. However, this implication was challenged by the Ulster Unionists and by the United Kingdom of Great Britain and Northern Ireland, the state internationally acknowledged as having jurisdiction over Northern Ireland. Articles 2 and 3 were substantially amended in consequence of the 1998 Good Friday Agreement.
Ireland in turn challenged the proclamation in the United Kingdom of Queen Elizabeth II in 1952 as '[Queen] of the United Kingdom of Great Britain and Northern Ireland'. The Irish government refused to attend royal functions as a result; for example, Patrick Hillery declined on government advice to attend the wedding of the Prince of Wales to Lady Diana Spencer in 1981, to which he had been invited by Queen Elizabeth, just as Seán T. O'Kelly had declined on government advice to attend the 1953 Coronation Garden Party at the British Embassy in Dublin. Britain in turn insisted on referring to the president as 'president of the Republic of Ireland' or 'president of the Irish Republic'. Letters of Credence from Queen Elizabeth, on the British government's advice, appointing United Kingdom ambassadors to Ireland were not addressed to the 'president of Ireland' but to the president personally (for example: 'President Hillery').
The naming dispute and consequent avoidance of contact at head of state level has gradually thawed since 1990. President Robinson (1990–97) chose unilaterally to break the taboo by regularly visiting the United Kingdom for public functions, frequently in connection with Anglo-Irish Relations or to visit the Irish emigrant community in Great Britain. In another breaking of precedent, she accepted an invitation to Buckingham Palace by Queen Elizabeth II. Palace accreditation supplied to journalists referred to the "visit of the president of Ireland". Between 1990 and 2010, both Robinson and her successor President McAleese (1997–2011) visited the Palace on numerous occasions, while senior members of the British royal family – the Prince of Wales, the Duke of York, the Earl of Wessex and the Duke of Edinburgh – all visited both presidents of Ireland at Áras an Uachtaráin. The presidents also attended functions with the Princess Royal. President Robinson jointly hosted a reception with the queen at St. James's Palace, London, in 1995, to commemorate the one hundred and fiftieth anniversary of the foundation of the Queen's Colleges in 1845 (the Queen's Colleges are now known as the Queen's University of Belfast, University College, Cork, and National University of Ireland, Galway). These contacts eventually led to a state visit of Queen Elizabeth to Ireland in 2011.
Though the president's title implicitly asserted authority in Northern Ireland, in reality the Irish president needed government permission to visit there. (The Constitution of Ireland in Article 3 explicitly stated that "[p]ending the re-integration of the national territory" the authority of the Irish state did not extend to Northern Ireland. Presidents prior to the presidency of Mary Robinson were regularly refused permission by the Irish government to visit Northern Ireland.)
However, since the 1990s and in particular since the Good Friday Agreement of 1998, the president has regularly visited Northern Ireland. President McAleese, who was the first president to have been born in Northern Ireland, continued on from President Robinson in this regard. In a sign of the warmth of modern British-Irish relations, she was even warmly welcomed by most leading unionists. At the funeral of a child murdered by the Real IRA in Omagh, she symbolically walked up the main aisle of the church hand-in-hand with the Ulster Unionist Party leader and then First Minister of Northern Ireland, David Trimble. In other instances, however, McAleese was criticised for certain comments, such as a remark on 27 January 2005, following her attendance at the ceremony commemorating the sixtieth anniversary of the liberation of the Auschwitz concentration camp, that Protestant children in Northern Ireland had been brought up to hate Catholics just as German children had been encouraged to hate Jews under the Nazi regime. These remarks caused outrage among Northern Ireland's unionist politicians, and McAleese later apologised and conceded that her statement had been unbalanced.
There have been many suggestions for reforming the office of president over the years. In 1996, the Constitutional Review Group recommended that the office of President should remain largely unchanged. However, it suggested that the Constitution should be amended to explicitly declare the president to be head of state (at present that term does not appear in the text), and that consideration be given to the introduction of a constructive vote of no confidence system in the Dáil, along the lines of that in Germany. If this system were introduced then the power of the president to refuse a Dáil dissolution would be largely redundant and could be taken away. The All-party Oireachtas Committee on the Constitution's 1998 Report made similar recommendations.
In an October 2009 poll, concerning support for various potential candidates in the 2011 presidential election conducted by the "Sunday Independent", a "significant number" of people were said to feel that the presidency is a waste of money and should be abolished.
The functions of the president were exercised by the Presidential Commission from the coming into force of the Constitution on 29 December 1937 until the election of Douglas Hyde in 1938, and during the vacancies of 1974, 1976, and 1997.
Currently, there are two living former presidents: Mary Robinson and Mary McAleese. Former presidents who are able and willing to act are members of the Council of State.
Premier of Western Australia
The Premier of Western Australia is the head of the executive branch of government in the Australian state of Western Australia. The premier has functions in Western Australia similar to those performed by the Prime Minister of Australia at the national level, subject to the two jurisdictions' different constitutions.
The incumbent Premier of Western Australia is Mark McGowan, who won the 2017 state election and was sworn in on 17 March 2017 by Governor Kerry Sanderson as the 30th Premier of Western Australia.
The premier must be a member of one of the two Houses of the Parliament of Western Australia; and by convention the premier is a member of the lower house, the Legislative Assembly. He or she is appointed by the governor on the advice of the lower house, and must resign if he or she loses the support of the majority of that house. Consequently, the premier is almost always the leader of the political party or coalition of parties with the majority of seats in the lower house.
The office of premier of Western Australia was first formed in 1890, after Western Australia was officially granted responsible government by Britain in 1889. The Constitution of Western Australia does not explicitly provide for a premier, and the office was not formally listed as one of the executive offices until the appointment of Ross McLarty in 1947. Nonetheless, John Forrest immediately adopted the title on taking office as first premier of Western Australia in 1890, and it has been used ever since.
John Forrest was the only premier of Western Australia as a self-governing colony. Following the Federation of Australia in 1901, Western Australia became an Australian state and the responsibilities of the office of premier were diminished.
Party politics began in Western Australia with the rise of the Labor party in 1901. By 1904, the party system was entrenched in Western Australian politics. Since then the premiers have been associated with political parties.
Western Australia's constitution contains nothing to preclude the premier being a member of the upper house, the Western Australian Legislative Council. Historically and by convention, however, the premier is a member of the Assembly. The only exception has been Hal Colebatch, a member of the Legislative Council who accepted the premiership in April 1919 on the understanding that an Assembly seat would be found for him, only to resign a month later when no seat could be found.
During the economic boom of the 1980s, the Western Australian government became closely involved with a number of large businesses. A succession of deals were made between the government and businesses, and these ultimately caused great losses for the state. A subsequent royal commission found evidence of widespread corruption. Three former premiers were found to have acted improperly and two of them, Ray O'Connor and Brian Burke, were jailed. This scandal became popularly known as WA Inc.
Seven former premiers are alive, the oldest being Peter Dowding (born 1943), who served from 1988 to 1990. The most recent premier to die was Ray O'Connor, on 25 February 2013, aged 86.
The only premier to serve in the upper house while premier was Sir Hal Colebatch, who was elected by the Nationalist Party to fill the vacancy presented by the resignation of Henry Lefroy, on the condition that a seat in the lower house would be found for him. He served as premier for a month before resigning after no seat could be found.
Pigeonhole sort
Pigeonhole sorting is a sorting algorithm that is suitable for sorting lists of elements where the number of elements ("n") and the length of the range of possible key values ("N") are approximately the same. It requires O("n" + "N") time. It is similar to counting sort, but differs in that it "moves items twice: once to the bucket array and again to the final destination [whereas] counting sort builds an auxiliary array then uses the array to compute each item's final destination and move the item there."
The pigeonhole algorithm works as follows: first, set up an array of initially empty "pigeonholes", one for each key in the range of keys present in the input. Next, go over the input, putting each element into the pigeonhole matching its key. Finally, iterate over the pigeonhole array in order and move the elements back into the original list.
Suppose one were sorting a list of value pairs by the first element of each pair, with key values ranging from 3 to 8. For each value between 3 and 8 we set up a pigeonhole, then move each element to the pigeonhole for its key.
The pigeonhole array is then iterated over in order, and the elements are moved back to the original list.
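As an illustration, the following Python fragment runs the two moves on placeholder (key, value) pairs; the pairs are invented for the example, with keys in the 3 to 8 range described above.

# Placeholder data: (key, value) pairs with keys between 3 and 8.
pairs = [(5, "hello"), (3, "pig"), (8, "apple"), (5, "king")]

min_key, max_key = 3, 8
holes = [[] for _ in range(max_key - min_key + 1)]  # one hole per key

# First move: each pair goes into the pigeonhole for its key.
for pair in pairs:
    holes[pair[0] - min_key].append(pair)

# Second move: read the pigeonholes back in key order.
result = [pair for hole in holes for pair in hole]
print(result)  # [(3, 'pig'), (5, 'hello'), (5, 'king'), (8, 'apple')]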
The difference between pigeonhole sort and counting sort is that in counting sort, the auxiliary array does not contain lists of input elements, only counts: each entry records how many input elements carry the corresponding key.
Using this information, one could perform a series of exchanges on the input array that would put it in order, moving items only once.
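A sketch of that bookkeeping in Python, using integer keys (an auxiliary output array stands in for the in-place exchanges here, but the destination computation is the same):

a = [8, 3, 5, 5]                     # illustrative keys
min_key = min(a)
counts = [0] * (max(a) - min_key + 1)
for x in a:                          # count occurrences of each key
    counts[x - min_key] += 1

# Prefix sums turn counts into each key's first destination index.
starts, dest = [], 0
for c in counts:
    starts.append(dest)
    dest += c

out = [None] * len(a)
for x in a:                          # move each item once, directly
    out[starts[x - min_key]] = x
    starts[x - min_key] += 1
print(out)                           # [3, 5, 5, 8]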
For arrays where "N" is much larger than "n", bucket sort is a generalization that is more efficient in space and time.
def pigeonhole_sort(a) -> None:
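    # Body added as a sketch to complete the stub above; it assumes a is a
    # list of integers and sorts it in place, moving each item twice
    # (once into a pigeonhole, once back), as described in the text.
    if not a:
        return
    min_key = min(a)
    holes = [[] for _ in range(max(a) - min_key + 1)]
    for x in a:                       # first move: into the pigeonholes
        holes[x - min_key].append(x)
    i = 0
    for hole in holes:                # second move: back, in key order
        for x in hole:
            a[i] = x
            i += 1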
Pope Innocent XIII
Pope Innocent XIII (13 May 1655 – 7 March 1724), born Michelangelo dei Conti, was head of the Catholic Church and ruler of the Papal States from 8 May 1721 to his death in 1724. He is the last pope to date to take the pontifical name "Innocent" upon his election.
Pope Innocent XIII was reform-oriented, and he imposed new standards of frugality, abolishing excessive spending. He took steps to finally end the practice of nepotism by issuing a decree which forbade his successors from granting land, offices or income to any relatives, a measure opposed by many cardinals who hoped that they might become pope and benefit their families.
Michelangelo dei Conti was born on 13 May 1655 in Poli, near Rome as the son of Carlo II, Duke of Poli, and Isabella d'Monti. Like Pope Innocent III (1198–1216), Pope Gregory IX (1227–1241) and Pope Alexander IV (1254–1261), he was a member of the land-owning family of the Conti, who held the titles of counts and dukes of Segni. He included the family crest in his pontifical coats of arms.
Conti commenced his studies in Ancona, continuing with the Jesuits in Rome at the Collegio Romano and later at La Sapienza University. After he received his doctorate in canon law and civil law, he was ordained to the priesthood. Conti served as Referendary of the Apostolic Signatura in 1691 and was later appointed Governor of Ascoli, serving until 1692. He was also Governor of Campagna and Marittima from 1692 to 1693 and Governor of Viterbo from 1693 to 1695.
Pope Innocent XII selected Conti as the Titular Archbishop of Tarsus on 13 June 1695, and he received his episcopal consecration on 16 June 1695 in Rome. Conti was also the nuncio to both Switzerland and Portugal.
On 7 June 1706, Conti was elevated to the cardinalate and was made the Cardinal-Priest of Santi Quirico e Giulitta under Pope Clement XI (1700–21). His appointment came about as the replacement of Gabriele Filippucci who declined the cardinalate. He would receive his titular church on 23 February 1711. From 1697 to 1710 he acted as papal nuncio to the Kingdom of Portugal, where he is believed to have formed those unfavourable impressions of the Jesuits which afterwards influenced his conduct towards them. While in Portugal, he was witness to Father Bartolomeu de Gusmão's early aerostat experiments.
He was also transferred to Osimo as its archbishop in 1709 and was later translated one last time to Viterbo e Toscanella in 1712. He also served as Camerlengo of the Sacred College of Cardinals from 1716 to 1717 and resigned his position in his diocese due to illness in 1719.
After the death of Pope Clement XI in 1721, a conclave was called to choose a new pope. It took 75 ballots to reach a decision and choose Conti as the successor of Clement XI. After support for all the other candidates had slipped, attention turned to Conti, and the curial factions followed. On the morning of 8 May 1721, he was elected. He chose the name Innocent XIII in honour of Pope Innocent III. On the following 18 May, he was solemnly crowned by the protodeacon, Cardinal Benedetto Pamphili.
In 1721 his high reputation for ability, learning, purity, and a kindly disposition secured his election to succeed Clement XI as Pope Innocent XIII. His pontificate was prosperous, but comparatively uneventful. He held two consistories that saw three new cardinals elevated on 16 June 1721 and 16 July 1721.
The Chinese Rites controversy that started under his predecessor continued during his reign. Innocent XIII prohibited the Jesuits from prosecuting their mission in China, and ordered that no new members should be received into the order. This indication of his sympathies encouraged some French bishops to approach him with a petition for the recall of the bull "Unigenitus" by which Jansenism had been condemned; the request, however, was peremptorily denied.
The pope also assisted the Venetians in their struggles and also assisted Malta in its struggles against the Turks.
Innocent XIII, like his predecessor, showed much favour to James Francis Edward Stuart, the "Old Pretender" to the British throne and liberally supported him. The pope's cousin, Francesco Maria Conti, from Siena, became chamberlain of James' little court in the Roman Muti Palace.
Innocent XIII held two consistories in which he named three cardinals. One of those new cardinals was his own brother, Bernardo Maria.
Innocent XIII beatified three individuals during his pontificate: John of Nepomuk (31 May 1721), Dalmazio Moner (13 August 1721), and Andrea dei Conti (11 December 1723).
In 1722, he named Saint Isidore of Seville as a Doctor of the Church.
Innocent XIII fell ill in 1724. He was tormented by a hernia, of which he spoke to nobody but his valet. At one point it burst, causing inflammation and fever. Innocent XIII asked for the last rites, made his profession of faith, and died on 7 March 1724, at the age of 68. His pontificate was unremarkable, as he was hampered by physical suffering. He was interred in the grottoes of Saint Peter's Basilica.
In 2005, on the occasion of the 350th anniversary of the late pontiff's birth, the citizens of his village of birth asked the Holy See to introduce the cause of beatification for Innocent XIII.
Pope Julius I
Pope Julius I was the bishop of Rome from 6 February 337 to his death on 12 April 352. He is notable for asserting the authority of the pope over the Arian Eastern bishops, as well as setting 25 December as the official birthdate of Jesus.
Julius was a native of Rome and was chosen as successor of Pope Mark after the Roman seat had been vacant for four months.
Julius is chiefly known for the part he took in the Arian controversy. After the followers of Eusebius of Nicomedia, who had become the patriarch of Constantinople, renewed their deposition of Athanasius of Alexandria at a synod held in Antioch in 341, they resolved to send delegates to Constans, emperor of the West, and also to Julius, setting forth the grounds on which they had proceeded. Julius, after expressing an opinion favourable to Athanasius, adroitly invited both parties to lay the case before a synod to be presided over by himself. This proposal, however, the Arian Eastern bishops declined to accept.
On his second banishment from Alexandria, Athanasius came to Rome, and was recognised as a regular bishop by the synod presided over by Julius in 342. Julius sent a letter to the Eastern bishops that is an early instance of the claims of primacy for the bishop of Rome. Even if Athanasius and his companions were somewhat to blame, the letter runs, the Alexandrian Church should first have written to the pope. "Can you be ignorant," writes Julius, "that this is the custom, that we should be written to first, so that from here what is just may be defined" (Epistle of Julius to Antioch, c. xxii).
It was through the influence of Julius that, at a later date, the council of Sardica in Illyria was held. It was attended by only seventy-six Eastern bishops, who speedily withdrew to Philippopolis and there, at the council of Philippopolis, deposed Julius along with Athanasius and others. The three hundred Western bishops who remained confirmed the previous decisions of the Roman synod and issued a number of decrees regarding church discipline. The first canon forbade the transfer of bishops from one see to another, for if made frequently, it was seen to encourage covetousness and ambition.
By its 3rd, 4th, and 5th decrees relating to the rights of revision claimed by Julius, the council of Sardica perceptibly helped forward the claims of the bishop of Rome. Julius built several basilicas and churches.
Around 350, Julius I declared 25 December the official date of the birth of Jesus, around the time of the festival of Saturnalia; the actual date of Jesus's birth is unknown. Some have speculated that part of the reason he chose this date may have been that he was trying to create a Christian alternative to Saturnalia. Another reason for the decision may have been that, in 274 AD, the Roman emperor Aurelian had declared 25 December the birthdate of Sol Invictus, and Julius I may have thought that he could attract more converts to Christianity by allowing them to continue to celebrate on the same day. He may also have been influenced by the idea that Jesus had died on the anniversary of his conception; because Jesus died during Passover and, in the third century AD, Passover was celebrated on 25 March, he may have assumed that Jesus's birthday must have come nine months later, on 25 December.
Julius I died in Rome on 12 April 352. He was succeeded by Liberius. Julius is venerated as a saint by the Catholic Church. His feast day is on 12 April.
Pope Julius III
Pope Julius III (10 September 1487 – 23 March 1555), born Giovanni Maria Ciocchi del Monte, was head of the Catholic Church and ruler of the Papal States from 7 February 1550 to his death in 1555.
After a career as a distinguished and effective diplomat, he was elected to the papacy as a compromise candidate after the death of Paul III. As pope, he made only reluctant and short-lived attempts at reform, mostly devoting himself to a life of personal pleasure. His reputation, and that of the Catholic Church, were greatly harmed by his scandal-ridden relationship with his adopted nephew.
Giovanni Maria Ciocchi del Monte was born in Monte San Savino. He was educated by the humanist Raffaele Brandolini Lippo, and later studied law at Perugia and Siena. During his career, he distinguished himself as a brilliant canonist rather than as a theologian.
Del Monte was the nephew of Antonio Maria Ciocchi del Monte, Archbishop of Manfredonia (1506–1511). When his uncle exchanged this see for a position as a Cardinal in 1511, Giovanni Maria Ciocchi del Monte succeeded in Manfredonia in 1512. In 1520, del Monte also became Bishop of Pavia. Popular for his affable manner and respected for his administrative skills, he was twice Governor of Rome and was entrusted by the papal curia with several duties. At the Sack of Rome (1527) he was one of the hostages given by Pope Clement VII to the Emperor's forces, and barely escaped execution. Pope Paul III made him Cardinal-bishop of Palestrina in 1536 and employed him in several important legations, notably as papal legate and first president of the Council of Trent (1545/47) and then at Bologna (1547/48).
Paul III died on 10 November 1549, and in the ensuing conclave the forty-eight cardinals were divided into three factions. The Imperial faction wished to see the Council of Trent reconvened, while the French faction wished to see it dropped. The Farnese faction, loyal to the family of the previous pope, supported the election of Paul III's grandson, Cardinal Alessandro Farnese, and also the family's claim to the Duchy of Parma, which was contested by Emperor Charles V.
Neither the French nor the Germans favoured del Monte, and the Emperor had expressly excluded him from the list of acceptable candidates, but the French were able to block the other two factions, allowing del Monte to promote himself as a compromise candidate and be elected on 7 February 1550. Ottavio Farnese, whose support had been crucial to the election, was immediately confirmed as Duke of Parma. But when Farnese applied to France for aid against the Emperor, Julius allied himself with the Emperor, declared Farnese deprived of his fief, and sent troops under the command of his nephew Giambattista del Monte to co-operate with Ferrante Gonzaga, the governor of Milan, in the capture of Parma.
At the start of his reign Julius had seriously desired to bring about a reform of the Catholic Church and to reconvene the Council of Trent, but very little was actually achieved during his five years in office. In 1551, at the request of Emperor Charles V, he consented to the reopening of the Council of Trent and entered into a league against the Duke of Parma and Henry II of France (1547–59), causing the War of Parma. However, Julius soon came to terms with the duke and France, and in 1553 he suspended the meetings of the council.
King Henry II of France had threatened to withdraw recognition from the new pope if he proved pro-Habsburg in orientation, and when Julius III reconvened the Council of Trent, Henry blocked French bishops from attending and did not enforce the papal decrees in France. Even after Julius III suspended the Council again, Henry continued to pressure the pope into taking his side against the Habsburgs by threatening schism.
Julius increasingly contented himself with Italian politics and retired to his luxurious palace at the Villa Giulia, which he had built for himself close to the Porta del Popolo. From there he passed the time in comfort, emerging from time to time to make timid efforts to reform the Church through the re-establishment of the reform commissions. He was a friend of the Jesuits, to whom he granted a fresh confirmation in 1550, and through the papal bull "Dum sollicita" of August 1552 he founded the Collegium Germanicum and granted it an annual income.
During his pontificate, Catholicism was restored in England under Queen Mary in 1553. Julius sent Cardinal Reginald Pole as legate with powers that he could use at his discretion to help the restoration succeed. In February 1555, an envoy was dispatched from the English Parliament to Julius to inform him of the country's formal submission, but the pope died before the envoy reached Rome.
Shortly before his death, Julius dispatched Cardinal Giovanni Morone to represent the interests of the Holy See at the Peace of Augsburg. His inactivity during the last three years of his pontificate may have been caused by the frequent and severe attacks of the gout to which he was subject.
Julius' papacy was marked by scandals, the most notable of which centered on the pope's adoptive nephew, Innocenzo Ciocchi Del Monte. Innocenzo del Monte was a teenaged beggar found in the streets of Parma who was hired by the family as a lowly hall boy in their primary residence, the boy's age being variously given as 14, 15, or 17 years. After the elevation of Julius to the papacy, Innocenzo Del Monte was adopted into the family by the pope's brother and was then promptly created cardinal-nephew by Julius. Julius showered his favourite with benefices, including the "commendatario" of the abbeys of Mont Saint-Michel in Normandy and Saint Zeno in Verona, and, later, of the abbeys of Saint Saba, Miramondo, Grottaferrata and Frascati, among others. As rumours began to circulate about the particular relationship between the pope and his adoptive nephew, Julius refused to take advice. The cardinals Reginald Pole and Giovanni Carafa warned the pope of the "evil suppositions to which the elevation of a fatherless young man would give rise".
Poet Joachim du Bellay, who lived in Rome through this period in the retinue of his relative, Cardinal Jean du Bellay, expressed his scandalized opinion of Julius in two sonnets in his series "Les regrets" (1558), hating to see, he wrote, "a Ganymede with the red hat on his head". The courtier and poet Girolamo Muzio, in a letter of 1550 to Ferrante Gonzaga, governor of Milan, wrote: "They write many bad things about this new pope; that he is vicious, proud, and odd in the head", and the Pope's enemies made capital of the scandal, with Thomas Beard, in the "Theatre of God's Judgements" (1597), saying it was Julius' "custome ... to promote none to ecclesiastical livings, save only his buggerers". In Italy, it was said that Julius showed the impatience of a "lover awaiting a mistress" while awaiting Innocenzo's arrival in Rome and boasted of the boy's prowess in bed, while the Venetian ambassador reported that Innocenzo Del Monte shared the pope's bed "as if he [Innocenzo] were his [Julius'] own son or grandson". "The charitably-disposed told themselves that the boy might after all be simply his bastard son."
Despite the damage which the scandal was inflicting on the church, it was not until after Julius' death in 1555 that anything could be done to curb Innocenzo's visibility. He underwent temporary banishment following the murder of two men who had insulted him, and then again following the rape of two women. He tried to use his connections in the College of Cardinals to plead his cause, but his influence waned, and he died in obscurity. He was buried in Rome in the Del Monte family chapel. One outcome of the cardinal-nephew scandal, however, was the upgrading of the position of Papal Secretary of State, as the incumbent had to take over the duties Innocenzo Del Monte was unfit to perform: the Secretary of State eventually replaced the cardinal-nephew as the most important official of the Holy See.
The pope's lack of interest in political or ecclesiastical affairs caused dismay among his contemporaries. He spent the bulk of his time, and a great deal of papal money, on entertainments at the Villa Giulia, created for him by Vignola. More significant and lasting was his patronage of the great Renaissance composer Giovanni Pierluigi da Palestrina, whom he brought to Rome as his "maestro di cappella"; of Giorgio Vasari, who supervised the design of the Villa Giulia; and of Michelangelo, who worked there.
In the novel "Q" by Luther Blissett, Julius appears toward the end of the book as a moderate cardinal favouring religious tolerance, in the upheavals caused by the Reformation and the Roman Church's response during the 16th century. His election as pope and the subsequent unleashing of the Inquisition form the last chapters of the novel. | https://en.wikipedia.org/wiki?curid=24685 |
Pope Eugene I
Pope Eugene I (died 2 June 657) was the bishop of Rome from 10 August 654 to his death. He was chosen to become pope after the deposition and banishment of Martin I by Emperor Constans II over the dispute about Monothelitism.
Eugene was a Roman from the Aventine, son of Rufinianus. He was brought up in the Church's ministry, and was already an elderly priest when a dispute flared up between the papacy in Rome, which opposed the monothelite teachings, and the imperial government in Constantinople, which supported it. As a result, Pope Martin I was deposed by Emperor Constans II and carried off from Rome on 18 June 653, eventually ending up banished to Cherson. Little is known about what happened in Rome after Martin's departure, but it was typical in those days for the Holy See to be governed by the archpriest and archdeacon. Martin hoped that a successor would not be elected while he lived, but the imperial court exerted pressure on Rome through the exarch of Ravenna. On 10 August 654, Eugene was appointed the new pope. Martin, though disappointed, seems to have acceded. The imperial government believed that Eugene would be cooperative and ratified his election.
As pope, Eugene consecrated twenty-one bishops for different parts of the world and received the youthful Wilfrid on the occasion of his first visit to Rome (c. 654).
Eugene I showed greater deference than his predecessor to the emperor's wishes and made no public stand against the Monothelitism of the patriarchs of Constantinople. One of the first acts of the new pope was to send legates to Constantinople with letters to Emperor Constans II informing him of his election and professing his faith. The legates were deceived, or bribed, and brought back a synodical letter from Patriarch Peter of Constantinople (656–666), while the emperor's envoy, who accompanied them, brought offerings for Saint Peter and a request from the emperor that the pope would enter into communion with the patriarch of Constantinople. Peter's letter proved to be written in a difficult and obscure style and avoided making any specific declaration as to the number of "wills or operations" in Christ. When its contents were read to the clergy and people in the church of St. Mary Major in 656, they not only rejected the letter with indignation, but would not allow the pope to leave the basilica until he had promised that he would not on any account accept it.
The imperial officials were furious at this harsh rejection of the wishes of the emperor and patriarch. Constans threatened to dispose of Eugene just as he had disposed of Martin, but was preoccupied with defending the empire from the Muslim conquests.
Eugene I died on 2 June 657, before Constans II could act against him. He was buried in Old St. Peter's Basilica. He was acclaimed a saint, his day being 2 June. He is commemorated as the patron and namesake of the Cathedral of Saint Eugene in the Diocese of Santa Rosa in California. | https://en.wikipedia.org/wiki?curid=24686 |
Pope Eugene II
Pope Eugene II (died 27 August 827) was the bishop of Rome and ruler of the Papal States from 6 June 824 to his death. A native of Rome, he was chosen by nobles to succeed Paschal I as pope despite the clergy and the people favoring Zinzinnus. The influence of the Carolingian Franks on the selection of popes was then firmly established. Pope Eugene convened a council at Rome in 826 to condemn simony and suspend untrained clergy. It was decreed that schools were to be established at cathedral churches and other places to give instruction in sacred and secular literature. His involvement in the Byzantine Iconoclasm controversy was largely inconsequential.
In earlier editions of the "Liber Pontificalis" Eugene is said to have been the son of Boemund, but in the more recent and more accurate editions his father's name is not given. He was archpriest of St Sabina on the Aventine and was said to have fulfilled the duties of his position most conscientiously. Eugene is described by his biographer as simple and humble, learned and eloquent, handsome and generous, a lover of peace, and wholly occupied with the thought of doing what was pleasing to God.
Eugene was elected pope on 6 June 824, after the death of Paschal I. Paschal had attempted to curb the rapidly increasing power of the Roman nobility, who had turned for support to the Franks to strengthen their positions against him. When Paschal died, these nobles made strenuous efforts to replace him with a candidate of their own. The clergy put forward Zinzinnus, a candidate likely to continue the policy of Paschal. Even though the Roman Council of 769 under Stephen IV had decreed that the nobles had no right to a real share in a papal election, the nobles were successful in securing the consecration of Eugene. Eugene's candidacy was endorsed by Abbot Walla, who was then in Rome and served as a councilor to both the current emperor, Louis the Pious, and his predecessor, Charlemagne.
The election of Eugene II was a triumph for the Franks, and they subsequently resolved to improve their position. Emperor Louis the Pious accordingly sent his son Lothair I to Rome to strengthen the Frankish influence. The Roman nobles who had been banished during the preceding reign and fled to France were recalled, and their property was restored to them. A "Constitutio Romana" was then agreed upon between the pope and the emperor in 824 which advanced the imperial pretensions in the city of Rome, but also checked the power of the nobles. This constitution included the statute that no pope should be consecrated until his election had the approval of the Frankish emperor. It decreed that those who were under the special protection of the pope or emperor were to be inviolable, and that church property not be plundered after the death of a pope.
Seemingly before Lothair left Rome, there arrived ambassadors from Emperor Louis and from the Greeks concerning the controversy of Byzantine Iconoclasm. At first the iconoclast Eastern Roman Emperor Michael II showed himself tolerant towards the icon worshippers, and their great champion, Theodore the Studite, wrote to him to exhort him "to unite us [the Church of Constantinople] to the head of the Churches of God, Rome, and through it with the three patriarchs" and to refer any doubtful points to the decision of Old Rome in accordance with ancient custom. But Michael soon forgot his tolerance, bitterly persecuted the icon worshippers, and endeavoured to secure the co-operation of Louis the Pious. He also sent envoys to the pope to consult him on certain points connected with the worship of icons. Before taking any steps to meet the wishes of Michael, Louis asked the pope's permission for a number of his bishops to assemble and make a selection of passages from the Fathers to elucidate the question that the Greeks had put before them. Leave was granted, but the bishops who met at Paris in 825 were incompetent for the task. Their collection of extracts from the Fathers was a mass of confused and ill-digested lore, and both their conclusions and the letters they wished the pope to forward to the Greeks were based on a complete misunderstanding of the decrees of the Second Council of Nicaea. Their labours do not appear to have accomplished much; nothing is known of the result of their researches.
In 826 Eugene held an important council at Rome of 62 bishops, in which 38 disciplinary decrees were issued. The council passed several enactments for the restoration of church discipline, and took measures for the foundation of schools or chapters. The decrees are noteworthy as showing that Eugene had at heart the advancement of learning. Not only were ignorant bishops and priests to be suspended till they had acquired sufficient learning to perform their sacred duties, but it was decreed that, as in some localities there were neither masters nor zeal for learning, masters were to be attached to the episcopal palaces, cathedral churches and other places to give instruction in sacred and polite literature. It also ruled against priests wearing secular dress or engaging in secular occupations. Simony was forbidden. Eugene also adopted various provisions for the care of the poor, widows and orphans, and on that account received the name of "father of the people".
To help in the work of the conversion of the North, Eugene wrote commending St. Ansgar, the Apostle of the Scandinavians, and his companions "to all the sons of the Catholic Church".
Eugene II died on 27 August 827. It is supposed that he was buried in St. Peter's in accordance with the custom of the time, even though there is no documentary record to confirm it. Coins of this pope are extant bearing his name and that of Emperor Louis. As pope, Eugene beautified his ancient church of St. Sabina with mosaics and metalwork bearing his name that were still intact as late as the 16th century. | https://en.wikipedia.org/wiki?curid=24687 |
Pope Eugene III
Pope Eugene III (c. 1080 – 8 July 1153), born Bernardo Pignatelli, or possibly Paganelli, called Bernardo da Pisa, was head of the Catholic Church and ruler of the Papal States from 15 February 1145 to his death in 1153. He was the first Cistercian to become pope. In response to the fall of Edessa to the Muslims in 1144, Eugene proclaimed the Second Crusade. The crusade failed to recapture Edessa, which was the first of many failures by the Christians in the crusades to recapture lands won in the First Crusade. He was beatified in 1872 by Pope Pius IX.
Bernardo was born in the vicinity of Pisa. Little is known about his origins and family except that he was the son of a certain Godius. From the 16th century he has commonly been identified as a member of the family of Paganelli di Montemagno, which belonged to the Pisan aristocracy, but this has not been proven and contradicts earlier testimonies that suggest he was a man of rather humble origins. In 1106 he was a canon of the cathedral chapter in Pisa and from 1115 is attested as subdeacon. From 1133 to 1138 he acted as "vicedominus" of the archdiocese of Pisa.
Between May 1134 and February 1137 he was ordained to the priesthood by Pope Innocent II, who resided at that time in Pisa. Under the influence of Bernard of Clairvaux he entered the Cistercian Order in the monastery of Clairvaux in 1138. A year later he returned to Italy as leader of the Cistercian community in Scandriglia. In autumn 1140, Pope Innocent II named him abbot of the monastery of S. Anastasio alle Tre Fontane outside Rome. Some chronicles indicate that he was also elevated to the College of Cardinals, but these testimonies probably resulted from a confusion: Bernardo is not attested as cardinal in any document, and the letter of Bernard of Clairvaux addressed to the cardinals shortly after his election makes clear that he was not a cardinal.
Bernardo was elected pope on 15 February 1145, the same day as the death of his predecessor, Lucius II, who had unwisely decided to take the offensive against the Roman Senate and was killed by a "heavy stone" thrown at him during an attack on the Capitol. He took the pontifical name Eugene III. He was "a simple character, gentle and retiring - not at all, men thought, the material of which Popes are made". He owed his elevation partly to the fact that no one was eager to accept an office the duties of which were at the time so difficult and dangerous and because the election was "held on safe Frangipani territory".
Bernardo's election was assisted by his being a friend and pupil of Bernard of Clairvaux, the most influential ecclesiastic of the Western Church and a strong assertor of the pope's temporal authority. The choice did not have the approval of Bernard himself, however, who remonstrated against the election, writing to the entire Curia: "May God forgive you what you have done! ... What reason or counsel, when the Supreme Pontiff was dead, made you rush upon a mere rustic, lay hands on him in his refuge, wrest from his hands the axe, pick or hoe, and lift him to a throne?" Bernard was equally forthright in his views directly to Eugene, writing: "Thus does the finger of God raise up the poor out of the dust and lift up the beggar from the dunghill that he may sit with princes and inherit the throne of glory." Despite these criticisms, Eugene seems to have borne Bernard no resentment; after the choice was made, Bernard took advantage of the very qualities in Eugene III to which he had objected, so as virtually to rule in his name.
During nearly the whole of his pontificate, Eugene III was unable to reside in Rome. Hardly had he left the city to be consecrated in the Farfa Abbey (about 40 km north of Rome), when the citizens, under the influence of Arnold of Brescia, the great opponent of the Pope's temporal power, established the old Roman constitution, the Commune of Rome, and elected Giordano Pierleoni as patrician. Eugene III appealed for help to Tivoli, to other cities at feud with Rome, and to King Roger II of Sicily (who sent his general Robert of Selby), and with their aid was successful in making such conditions with the Roman citizens as enabled him for a time to hold the semblance of authority in his capital. But as he would not agree to a treacherous compact against Tivoli, he was compelled to leave the city in March 1146. He stayed for some time at Viterbo, and then at Siena, but went ultimately to France.
On hearing of the fall of Edessa (the centre of the first of the Crusader states established in the Levant, now the modern-day city of Urfa) to the Turks in 1144, he addressed, in December 1145, the bull "Quantum praedecessores" to Louis VII of France, calling on him to take part in another crusade. Bernard of Clairvaux preached the crusade to an enormous crowd at Vézelay, and at a great diet held at Speyer in 1146, King Conrad III of Germany and many of his nobles were also incited to dedicate themselves to the crusade by Bernard's eloquence. The Second Crusade turned out to be "an ignominious fiasco": after travelling for a year, the army abandoned its campaign after just five days of siege, "having regained not one inch of Muslim territory." The crusaders suffered immense losses in both men and materiel and suffered, in the view of one modern historian, "the ultimate humiliation which neither they, nor their enemies, would forget".
Eugene III held synods in northern Europe, at Paris and Trier in 1147 and at Rheims in March 1148, that were devoted to the reform of clerical life. He also considered and approved the works of Hildegard of Bingen.
In June 1148, Eugene III returned to Italy and took up his residence at Viterbo. He was unable to return to Rome due to the popularity in the city of Arnold of Brescia, who opposed papal temporal authority. He established himself at Ptolemy II's fortress in Tusculum, the closest town to Rome at which he could safely install himself, on 8 April 1149. There he met the returning Crusader couple Louis VII of France and Eleanor of Aquitaine, who were by then barely on speaking terms given the strains of the failed Crusade and the rumors of Eleanor's incestuous adultery during the Crusade. Eugene, "a gentle, kind-hearted man who hated to see people unhappy", attempted to assuage the pain of the failed Crusade and their failing marriage by insisting that they sleep in the same bed and "by daily converse to restore the love between them". His efforts were unsuccessful, and two years later Eugene agreed to annul the marriage on the grounds of consanguinity.
Eugene stayed at Tusculum until 7 November. At the end of November 1149, through the aid of the king of Sicily, he was again able to enter Rome, but the atmosphere of open hostility from the Commune soon compelled him to retire (June 1150). The German king Frederick I Barbarossa promised to aid Eugene against his subjects who had revolted, but the support never came. Eugene III died at Tivoli on 8 July 1153. Though the citizens of Rome resented Eugene III's effort to assert his temporal authority, they recognized him as their spiritual lord. Until the day of his death he continued to wear the coarse habit of a Cistercian monk under his robe. He was buried in the Vatican with every mark of respect.
The people of Rome were quick to recognize Eugene III as a pious figure who was meek and spiritual. His tomb acquired considerable fame owing to the miracles purported to have occurred there, and his cause for sainthood commenced. Pope Pius IX beatified him in 1872.
Plaintiff
A plaintiff (Π in legal shorthand) is the party who initiates a lawsuit (also known as an "action") before a court. By doing so, the plaintiff seeks a legal remedy; if this search is successful, the court will issue judgment in favor of the plaintiff and make the appropriate court order (e.g., an order for damages). "Plaintiff" is the term used in civil cases in most English-speaking jurisdictions, the notable exception being England and Wales, where a plaintiff has, since the introduction of the Civil Procedure Rules in 1999, been known as a "claimant", but that term also has other meanings. In criminal cases, the prosecutor brings the case against the defendant, but the key complaining party is often called the "complainant".
In some jurisdictions, a lawsuit is commenced by filing a summons, claim form or complaint. These documents, known as pleadings, set forth the alleged wrongs committed by the defendant or defendants together with a demand for relief. In other jurisdictions, the action is commenced by service of legal process, that is, by delivery of these documents to the defendant by a process server; they are filed with the court only subsequently, with an affidavit from the process server attesting that they were given to the defendant according to the rules of civil procedure.
In most English-speaking jurisdictions, including Hong Kong, Nigeria, Australia, Canada and the United States, as well as in both Northern Ireland and the Republic of Ireland, the legal term "plaintiff" is used as a general term for the party taking action in a civil case.
The word "plaintiff" can be traced to the year 1278, and stems from the Anglo-French word "pleintif" meaning "complaining". It was identical to "plaintive" at first and receded into legal usage with the -iff spelling in the 15th century.
A plaintiff identified by name in a class action is called a named plaintiff.
In most common-law jurisdictions, the term "claimant" used in England and Wales since 1999 (see below) is used only in specific, often non-judicial contexts. In particular, in American usage, terms such as "claimant" and "claim form" are limited to extrajudicial process in insurance and administrative law. After exhausting remedies available through an insurer or government agency, an American claimant in need of further relief would turn to the courts, file a complaint (thus establishing a real court case under judicial supervision) and become a plaintiff.
In England and Wales, the term "claimant" replaced "plaintiff" after the Civil Procedure Rules came into force on 26 April 1999. The move, which brings England and Wales out of line with general usage in English-speaking jurisdictions, was reportedly based on an assessment that the word "claimant" is more acceptable as "plain English" than the word "plaintiff". In Scottish law a plaintiff is referred to as a "pursuer" and a defendant as a "defender".
The party against whom the complaint is made is the defendant; or, in the case of a petition, a respondent. Case names are usually given with the plaintiff first, as in "Plaintiff v. Defendant".
The similar term "complainant" denotes the complaining witness in a criminal proceeding. | https://en.wikipedia.org/wiki?curid=24690 |
Philosophy of law
Philosophy of law is a branch of philosophy that examines the nature of law and law's relationship to other systems of norms, especially ethics and political philosophy. It asks questions like "What is law?", "What are the criteria for legal validity?", and "What is the relationship between law and morality?" Philosophy of law and jurisprudence are often used interchangeably, though jurisprudence sometimes encompasses forms of reasoning that fit into economics or sociology.
Philosophy of law can be sub-divided into analytical jurisprudence and normative jurisprudence. Analytical jurisprudence aims to define what law is and what it is not by identifying law's essential features. Normative jurisprudence investigates both the non-legal norms that shape law and the legal norms that are generated by law and guide human action.
Analytical jurisprudence seeks to provide a general account of the nature of law through the tools of conceptual analysis. The account is general in the sense of targeting universal features of law that hold at all times and places. Whereas lawyers are interested in what the law is on a specific issue in a specific jurisdiction, philosophers of law are interested in identifying the features of law shared across cultures, times, and places. Taken together, these foundational features of law offer the kind of universal definition philosophers are after. The general approach allows philosophers to ask questions about, for example, what separates law from morality, politics, or practical reason. Often, scholars in the field presume that law has a unique set of features that separate it from other phenomena, though not all share the presumption.
While the field has traditionally focused on giving an account of law's nature, some scholars have begun to examine the nature of domains within law, e.g. tort law, contract law, or criminal law. These scholars focus on what makes certain domains of law distinctive and how one domain differs from another. A particularly fecund area of research has been the distinction between tort law and criminal law, which more generally bears on the difference between civil and criminal law.
Several schools of thought have developed around the nature of law, the most influential of which are natural law theory, legal positivism, legal realism, and legal interpretivism.
In recent years, debates about the nature of law have become increasingly fine-grained. One important debate exists within legal positivism about the separability of law and morality. Exclusive legal positivists claim that the legal validity of a norm never depends on its moral correctness. Inclusive legal positivists claim that moral considerations "may" determine the legal validity of a norm, but that it is not necessary that this is the case. Positivism began as an inclusivist theory, but influential exclusive legal positivists, including Joseph Raz, John Gardner, and Leslie Green, later rejected the idea.
A second important debate, often called the "Hart-Dworkin Debate," concerns the battle between the two most dominant schools in the late 20th and early 21st century, legal interpretivism and legal positivism.
In addition to analytic jurisprudence, legal philosophy is also concerned with normative theories of law. "Normative jurisprudence involves normative, evaluative, and otherwise prescriptive questions about the law." For example: What is the goal or purpose of law? What moral or political theories provide a foundation for the law? Three approaches have been influential in contemporary moral and political philosophy, and these approaches are reflected in normative theories of law: utilitarian (consequentialist) theories, which hold that laws should be crafted so as to produce the best consequences; deontological theories, which hold that laws should respect individual rights and autonomy; and aretaic (virtue-based) theories, which hold that laws should promote the development of virtuous character in citizens.
There are many other normative approaches to the philosophy of law, including critical legal studies and libertarian theories of law.
Philosophers of law are also concerned with a variety of philosophical problems that arise in particular legal subjects, such as constitutional law, contract law, criminal law, and tort law. Thus, philosophy of law addresses such diverse topics as theories of contract law, theories of criminal punishment, theories of tort liability, and the question of whether judicial review is justified.
Personal property
Personal property is property that is movable. In common law systems, personal property may also be called chattels or personalty. In civil law systems, personal property is often called movable property or movables – any property that can be moved from one location to another.
Personal property can be understood in comparison to real estate, immovable property or real property (such as land and buildings).
Movable property on land (larger livestock, for example) was not automatically sold with the land; it was "personal" to the owner and moved with the owner.
The word "cattle" is the Old Norman variant of Old French "chatel", chattel (derived from Latin "capitalis", “of the head”), which was once synonymous with general movable personal property.
Personal property may be classified in a variety of ways.
Intangible personal property or "intangibles" refers to personal property that cannot actually be moved, touched or felt, but instead represents something of value such as negotiable instruments, securities, services, and intangible assets including choses in action.
Tangible personal property refers to any type of property that can generally be moved (i.e., it is not attached to real property or land), touched or felt. These generally include items such as furniture, clothing, jewelry, art, writings, or household goods. In some cases, there can be formal title documents that show the ownership and transfer rights of that property after a person's death (for example, motor vehicles, boats, etc.). In many cases, however, tangible personal property will not be "titled" in an owner's name and is presumed to be whatever property he or she was in possession of at the time of his or her death.
Accountants also distinguish personal property from real property because personal property can be depreciated faster than improvements (while land is not depreciable at all). It is an owner's right to get tax benefits for chattel, and there are businesses that specialize in appraising personal property, or chattel.
The distinction between these types of property is significant for a variety of reasons. Usually one's rights on movables are more attenuated than one's rights on immovables (or real property). The statutes of limitations or prescriptive periods are usually shorter when dealing with personal or movable property. Real property rights are usually enforceable for a much longer period of time and in most jurisdictions real estate and immovables are registered in government-sanctioned land registers. In some jurisdictions, rights (such as a lien or other security interest) can be registered against personal or movable property.
In the common law it is possible to place a mortgage upon real property. Such a mortgage requires payment, or the owner of the mortgage can seek foreclosure. Personal property can often be secured with a similar kind of device, variously called a chattel mortgage, trust receipt, or security interest. In the United States, Article 9 of the Uniform Commercial Code governs the creation and enforcement of security interests in most (but not all) types of personal property.
There is no institution similar to the mortgage in the civil law; however, a hypothec is a device to secure real rights against property. These real rights follow the property along with the ownership. In the common law a lien also remains on the property, and it is not extinguished by alienation of the property; liens may be real or equitable.
Many jurisdictions levy a personal property tax, an annual tax on the privilege of owning or possessing personal property within the boundaries of the jurisdiction. Automobile and boat registration fees are a subset of this tax. Most household goods are exempt as long as they are kept or used within the household; the tax usually becomes a problem when the taxing authority discovers that expensive personal property like art is being regularly stored outside of the household.
The distinction between tangible and intangible personal property is also significant in some of the jurisdictions which impose sales taxes. In Canada, for example, provincial and federal sales taxes were imposed primarily on sales of tangible personal property whereas sales of intangibles tended to be exempt. The move to value added taxes, under which almost all transactions are taxable, has diminished the significance of the distinction.
In political/economic theory, notably socialist, Marxist, and most anarchist philosophies, the distinction between private and personal property is extremely important. Which items of property constitute which is open to debate. In some economic systems, such as capitalism, private and personal property are considered to be exactly equivalent. | https://en.wikipedia.org/wiki?curid=24695 |
Prima facie
Prima facie is a Latin expression meaning "on its first encounter" or "at first sight". The literal translation would be "at first face" or "at first appearance", from the feminine forms of "primus" ("first") and "facies" ("face"), both in the ablative case. In modern, colloquial and conversational English, a common translation would be "on the face of it". The term "prima facie" is used in modern legal English (including both civil law and criminal law) to signify that upon initial examination, sufficient corroborating evidence appears to exist to support a case. In common law jurisdictions, "prima facie" denotes evidence that, unless rebutted, would be sufficient to prove a particular proposition or fact. The term is used similarly in academic philosophy. Most legal proceedings, in most jurisdictions, require a "prima facie" case to exist, following which proceedings may then commence to test it, and create a ruling.
In most legal proceedings, one party has a burden of proof, which requires it to present "prima facie" evidence for all of the essential facts in its case. If it cannot, its claim may be dismissed without any need for a response by other parties. A "prima facie" case might not stand or fall on its own; if an opposing party introduces other evidence or asserts an affirmative defense, the matter can only be resolved by a full trial. Sometimes the introduction of "prima facie" evidence is informally called "making a case" or "building a case".
For example, in a trial under criminal law the prosecution has the burden of presenting "prima facie" evidence of each element of the crime charged against the defendant. In a murder case, this would include evidence that the victim was in fact dead, that the defendant's act caused the death, and that the defendant acted with malice aforethought. If no party introduces new evidence, the case stands or falls just by the "prima facie" evidence or lack thereof, respectively.
"Prima facie" evidence does not need to be conclusive or irrefutable: at this stage, evidence rebutting the case is not considered, only whether any party's case has enough merit to take it to a full trial.
In common law jurisdictions such as the United Kingdom and the United States, the prosecution in a criminal trial must disclose all evidence to the defense. This includes the "prima facie" evidence.
An aim of the doctrine of "prima facie" is to prevent litigants from bringing spurious charges which simply waste all other parties' time.
"Prima facie" is often confused with "res ipsa loquitur" ("the thing speaks for itself", or literally "the thing itself speaks"), the common law doctrine that when the facts make it self-evident that negligence or other responsibility lies with a party, it is not necessary to provide extraneous details, since any reasonable person would immediately find the facts of the case.
The difference between the two is that "prima facie" is a term meaning there is enough evidence for there to be a case to answer, while "res ipsa loquitur" means that the facts are so obvious a party does not need to explain any more. For example: "There is a "prima facie" case that the defendant is liable. They controlled the pump. The pump was left on and flooded the plaintiff's house. The plaintiff was away and had left the house in the control of the defendant. "Res ipsa loquitur"."
In Canadian tort law, this doctrine has been subsumed by general negligence law.
The phrase is also used in academic philosophy. Among its most notable uses is in the theory of ethics first proposed by W. D. Ross, often called the "Ethic of Prima Facie Duties", as well as in epistemology, as used, for example, by Robert Audi. It is generally used in reference to an obligation. "I have a "prima facie" obligation to keep my promise and meet my friend" means that I am under an obligation, but this may yield to a more pressing duty. A more modern usage prefers the title "pro tanto obligation": an obligation that may be later overruled by another more pressing one; it exists only "pro tempore".
The phrase "prima facie" is sometimes misspelled "prima facia" in the mistaken belief that "facia" is the actual Latin word; however, "faciē" is in fact the ablative case of "faciēs", a fifth-declension Latin noun.
In policy debate theory, "prima facie" is used to describe the mandates or planks of an affirmative case, or, in some rare cases, a negative counterplan. When the negative team appeals to "prima facie", it appeals to the fact that the affirmative team cannot add or amend anything in its plan after being stated in the first affirmative constructive.
A common usage of the phrase is the concept of a ""prima facie" speed limit", which has been used in Australia and the United States. A "prima facie" speed limit is a default speed limit that applies when no other specific speed limit is posted, and may be exceeded by a driver. However, if the driver is detected and cited by police for exceeding the limit, the onus of proof is on the driver to show that the speed at which the driver was travelling was safe under the circumstances. In most jurisdictions, this type of speed limit has been replaced by absolute speed limits.
Product liability
Product liability is the area of law in which manufacturers, distributors, suppliers, retailers, and others who make products available to the public are held responsible for the injuries those products cause. Although the word "product" has broad connotations, product liability as an area of law is traditionally limited to products in the form of tangible personal property.
The overwhelming majority of countries have strongly preferred to address product liability through legislative means. In most countries, this occurred either by enacting a separate product liability act, adding product liability rules to an existing civil code, or including strict liability within a comprehensive Consumer Protection Act. In the United States, product liability law was developed primarily through case law from state courts as well as the "Restatements of the Law" produced by the American Law Institute (ALI).
The United States and the European Union's product liability regimes are the two leading models for how to impose strict liability for defective products, meaning that "[v]irtually every product liability regime in the world follows one of these two models."
The United States was the birthplace of modern product liability law during the 20th century, due to the 1963 "Greenman" decision which led to the emergence of product liability as a distinct field of private law. In 1993, Geraint Howells explained: "No other country can match the United States for the number and diversity of its product liability cases, nor for the prominence of the subject in the eyes of the general public and legal practitioners." According to Mathias Reimann, this was still true as of 2015: "In the United States, product liability continues to play a big role: litigation is much more frequent there than anywhere else in the world, awards are higher, and publicity is significant."
In the United States, the majority of product liability laws are determined at the state level and vary widely from state to state. Each type of product liability claim requires proof of different elements in order to present a valid claim.
For a variety of complex historical reasons beyond the scope of this article, personal injury lawsuits in tort for monetary damages were virtually nonexistent before the Second Industrial Revolution of the 19th century. As a subset of personal injury cases, product liability cases were extraordinarily rare, but it appears that in the few that were brought, the general rule at early common law was probably what modern observers would call no-fault or strict liability. In other words, the plaintiff only needed to prove causation and damages.
Common law courts began to shift towards a no-liability regime for products (except for cases of fraud or breach of express warranty) by developing the doctrine of "caveat emptor" (buyer beware) in the early 1600s. As personal injury and product liability claims began to slowly increase during the early First Industrial Revolution (due to increased mobility of both people and products), common law courts in both England and the United States in the 1840s erected further barriers to plaintiffs by requiring them to prove negligence on the part of the defendant (i.e., that the defendant was at fault because its conduct had failed to meet the standard of care expected of a reasonable person), and to overcome the defense of lack of privity of contract in cases where the plaintiff had not dealt directly with the manufacturer (as exemplified by "Winterbottom v. Wright" (1842)). During the Second Industrial Revolution of the mid-to-late 19th century, consumers increasingly became several steps removed from the original manufacturers of products and the unjust effects of all these doctrines became widely evident.
State courts in the United States began to look for ways to ameliorate the harsh effects of such legal doctrines, as did the British Parliament. For example, one method was to find implied warranties implicit in the nature of certain contracts; by the end of the 19th century, enough U.S. states had adopted an implied warranty of merchantable quality that this warranty was restated in statutory form in the U.S. Uniform Sales Act of 1906, which drew inspiration from the British Sale of Goods Act 1893.
During the 1940s, 1950s, and 1960s, American law professors Fleming James Jr. and William Prosser published competing visions for the future of the nascent field of product liability. James acknowledged that traditional negligence and warranty law were inadequate solutions for the problems presented by defective products, but argued in 1955 those issues could be resolved by a modification of warranty law "tailored to meet modern needs," while Prosser argued in 1960 that strict liability in tort ought to be "declared outright" without "an illusory contract mask." Ultimately, it was Prosser's view which prevailed.
The first step towards modern product liability law occurred in the landmark New York case of "MacPherson v. Buick Motor Co." (1916), which demolished the privity bar to recovery in negligence actions. By 1955, James was citing "MacPherson" to argue that "[t]he citadel of privity has crumbled," although Maine, the last holdout, would not adopt "MacPherson" until 1982.
The second step was the landmark New Jersey case of "Henningsen v. Bloomfield Motors, Inc." (1960), which demolished the privity bar to recovery in actions for breach of implied warranty. Prosser cited "Henningsen" in 1960 as the "fall of the citadel of privity." The "Henningsen" court helped articulate the rationale for the imminent shift from breach of warranty (sounding in contract) to strict liability (sounding in tort) as the dominant theory in product liability cases, but did not actually impose strict liability for defective products.
The third step was the landmark California case of "Greenman v. Yuba Power Products, Inc." (1963), in which the Supreme Court of California openly articulated and adopted the doctrine of strict liability in tort for defective products. "Greenman" heralded a fundamental shift in how Americans thought about product liability towards a theory of enterprise liability—instead of basing liability on the defendant's "fault" or "warranty", the defendant's liability should be predicated, as a matter of public policy, on the simple question of whether it was part of a business enterprise responsible for inflicting injuries on human beings. The theoretical foundation for enterprise liability had been laid by James as well as another law professor, Leon Green. As noted above, it was "Greenman" which led to the actual emergence of product liability as a distinct field of private law in its own right. Before this point, products had appeared in case law and scholarly literature only in connection with the application of existing doctrines in contract and tort.
The "Greenman" majority opinion was authored by then-Associate Justice Roger J. Traynor, who cited to his own earlier concurring opinion in "Escola v. Coca-Cola Bottling Co." (1944). In "Escola", now also widely recognized as a landmark case, Justice Traynor laid the foundation for "Greenman" with these words:
Even if there is no negligence, however, public policy demands that responsibility be fixed wherever it will most effectively reduce the hazards to life and health inherent in defective products that reach the market. It is evident that the manufacturer can anticipate some hazards and guard against the recurrence of others, as the public cannot. Those who suffer injury from defective products are unprepared to meet its consequences. The cost of an injury and the loss of time or health may be an overwhelming misfortune to the person injured, and a needless one, for the risk of injury can be insured by the manufacturer and distributed among the public as a cost of doing business. It is to the public interest to discourage the marketing of products having defects that are a menace to the public. If such products nevertheless find their way into the market it is to the public interest to place the responsibility for whatever injury they may cause upon the manufacturer, who, even if he is not negligent in the manufacture of the product, is responsible for its reaching the market. However intermittently such injuries may occur and however haphazardly they may strike, the risk of their occurrence is a constant risk and a general one. Against such a risk there should be general and constant protection and the manufacturer is best situated to afford such protection.
The year after "Greenman", the Supreme Court of California proceeded to extend strict liability to "all" parties involved in the manufacturing, distribution, and sale of defective products (including retailers) and in 1969 made it clear that such defendants were liable not only to direct customers and users, but also to any innocent bystanders randomly injured by defective products.
Prosser was able to propagate the "Greenman" holding to a nationwide audience because the American Law Institute had appointed him as the official reporter of the Restatement of Torts, Second. The Institute approved the Restatement's final draft in 1964 and published it in 1965; the Restatement codified the "Greenman" doctrine in Section 402A. "Greenman" and Section 402A "spread like wildfire across America". The highest courts of nearly all U.S. states and territories (and a few state legislatures) embraced this "bold new doctrine" (in the words of David Owen) during the late 1960s and 1970s. As of 2018, the five exceptions that have rejected strict liability are Delaware, Massachusetts, Michigan, North Carolina, and Virginia. In four of those states, warranty law has been so broadly construed in favor of plaintiffs that only North Carolina truly lacks anything resembling strict liability in tort for defective products. (North Carolina's judiciary never attempted to adopt the doctrine, and the state legislature enacted a statute expressly banning strict liability for defective products in 1995.) In a landmark 1986 decision, the U.S. Supreme Court also embraced strict liability for defective products by adopting it as part of federal admiralty law.
In the conventional narrative, there are two main factors that explain the rapid embrace of "Greenman" and Section 402A. First, they came along just as Americans were coalescing around a consensus in favor of consumer protection, which would eventually cause Congress to enact several landmark federal product safety and vehicle safety statutes. Second, American academic experts in the field of law and economics developed new theories that helped to justify strict liability, such as those articulated by Guido Calabresi in "The Costs of Accidents" (1970).
To this, Kyle Graham adds three more factors: (3) the rise of attorneys specializing exclusively in plaintiffs' personal injury cases and their professional associations like the organization now known as the American Association for Justice; (4) the ubiquity of so-called "bottle cases" (personal injury cases arising from broken glass bottles) before aluminum cans and plastic bottles displaced glass bottles as the primary beverage container during the 1970s; and (5) the resistance of the Uniform Commercial Code's editorial board to extending warranties to bystander victims before 1966—in states whose legislatures had not already acted, state courts were more receptive to extending the common law to grant bystanders a strict liability tort claim.
Prosser inexplicably imposed in Section 402A a requirement that a product defect must be "unreasonably dangerous." Since the "unreasonably dangerous" qualifier implicitly connotes some sense of the idea of "fault" which Traynor was trying to exorcise from product liability, it was subsequently rejected as incompatible with strict liability for defective products by Alaska, California, Georgia, New Jersey, New York, Puerto Rico and West Virginia.
Early proponents of strict liability believed its economic impact would be minor because they were focused on manufacturing defects. They failed to foresee the logical implications of applying the rule to other types of product defects. Only in the late 1960s did Americans begin to draw a clear analytical distinction between manufacturing and design defects, and since the early 1980s, defective design claims "have formed the overwhelming bulk" of American product liability lawsuits. It was "the unintended application of [Section] 402A to the design context" which resulted in the explosion of mass tort product liability cases during the 1980s throughout the United States.
Among the factors which led to the large numbers of product liability cases seen today in the United States are relatively low fees for filing lawsuits, the availability of class actions, the strongest right to a jury trial in the world, the highest awards of monetary damages in the world (frequently in the millions of dollars for pain and suffering noneconomic damages and in rare cases soaring into the billions for punitive damages), and the most extensive right to discovery in the world. No other country has adopted the U.S. standard of disclosure of information that is "reasonably calculated to lead to the discovery of admissible evidence." American reported cases are replete with plaintiffs whose counsel artfully exploited this standard to obtain so-called "smoking gun" evidence of product defects and made defendants pay "a tremendous price" for their callous disregard for product safety.
In response to these developments, a tort reform movement appeared in the 1980s which persuaded many state legislatures to enact various limitations like damage caps and statutes of repose. However, the majority of states left untouched the basic rule of strict liability for defective products, and all efforts at the federal level to enact a uniform federal product liability regime were unsuccessful.
From the mid-1960s onward, state courts struggled for over four decades to develop a coherent test for design defects, either phrased in terms of consumer expectations or whether risks outweigh benefits or both (i.e., a hybrid test in which the first does not apply to defects that are too complex). Risk-benefit analysis, of course, can be seen as a way of measuring the reasonableness of the defendant's conduct—or in other words, negligence. A neo-conservative turn among many American courts and tort scholars during the 1980s led to a recognition that liability in design defect and failure-to-warn cases had never been entirely strict, or had been operating in some respects as a "de facto" fault-based regime all along, and the American Law Institute expressly backed a return to tests associated with negligence for design and warning defects with the 1998 publication of the "Restatement of Torts, Third: Products Liability". This attempt to resurrect negligence and to limit strict liability to its original home in manufacturing defects "has been highly controversial among courts and scholars." Although Professors Howells and Owen argued in 2018 that U.S. product liability law as restated in 1998 had come full circle back to where it started in 1964, they also conceded that "some courts" continue to "tenaciously cling[] to the rationale and doctrine of [Section] 402A."
Section 2 of the "Restatement (Third) of Torts: Products Liability" distinguishes between three major types of product liability claims: manufacturing defects, design defects, and warning defects (inadequate instructions or warnings).
However, in most states, these are not legal claims in and of themselves, but are pleaded in terms of the legal theories mentioned above. For example, a plaintiff might plead negligent failure to warn or strict liability for defective design.
The three types of product liability claims are defined as follows: a manufacturing defect exists when a product departs from its intended design, even though all possible care was exercised in its preparation; a design defect exists when the foreseeable risks of harm posed by the product could have been reduced or avoided by the adoption of a reasonable alternative design; and a warning defect exists when those foreseeable risks could have been reduced or avoided by reasonable instructions or warnings.
In the United States, the claims most commonly associated with product liability are negligence, strict liability, breach of warranty, and various consumer protection claims.
Warranties are statements by a manufacturer or seller concerning a product during a commercial transaction. Warranty claims historically required privity between the injured party and the manufacturer or seller; in plain English, they must have dealt directly with one another. As noted above, this requirement was demolished in the landmark "Henningsen" case.
Breach of warranty-based product liability claims usually focus on one of three types: breach of an express warranty, breach of the implied warranty of merchantability, and breach of the implied warranty of fitness for a particular purpose.
Express warranty claims focus on express statements by the manufacturer or the seller concerning the product (e.g., "This chainsaw is useful to cut turkeys").
The various implied warranties cover those expectations common to all products (e.g., that a tool is not unreasonably dangerous when used for its proper purpose), unless specifically disclaimed by the manufacturer or the seller. Claims involving real estate may also be brought under a theory of implied warranty of habitability.
A basic negligence claim consists of proof of a duty owed by the defendant to the plaintiff, a breach of that duty, an injury proximately caused by the breach, and actual damages suffered by the plaintiff.
As demonstrated in cases such as "Winterbottom v. Wright", the scope of the duty of care was limited to those with whom one was in privity. Later cases like "MacPherson v. Buick Motor Co." broadened the duty of care to all who could be foreseeably injured by one's conduct.
Over time, negligence concepts have arisen to deal with certain specific situations, including negligence "per se" (using a manufacturer's violation of a law or regulation, in place of proof of a duty and a breach) and res ipsa loquitur (an inference of negligence under certain conditions).
Rather than focus on the behavior of the manufacturer (as in negligence), strict liability claims focus on the product itself. Under strict liability, the manufacturer is liable if the product is defective, even if the manufacturer was not negligent in making that product defective.
Under a strict liability theory, the plaintiff merely needs to prove that the product was defective, that the defect existed when the product left the defendant's control, and that the defect caused the plaintiff's injury.
In addition to common law remedies, many states have enacted consumer protection statutes that provide specific remedies for certain specific types of product defects. One reason for the appearance of such statutes is that under the "economic loss rule", strict liability in tort is unavailable for products that cause damage only to themselves. In other words, strict liability is unavailable for defects that merely render the product unusable (or less useful), and hence cause only economic injury, but do not cause personal injury or damage to other property. Breach of warranty actions governed by Article 2 of the Uniform Commercial Code also often fail to provide adequate remedies in such situations.
The best-known examples of consumer protection statutes for product defects are lemon laws, which provide protection to purchasers of defective new vehicles and, in a small number of states, used vehicles. In the United States, "cars are typically the second most valuable asset most people own, outranked only by their home."
Although European observers followed "Greenman" and Section 402A "with great interest", European countries did not initially adopt such a doctrine. For example, after the landmark case of "Donoghue v Stevenson" [1932] (which followed "MacPherson"), UK law did not change, despite "trenchant academic criticism". Strict liability for defective products finally came to Europe as a result of the thalidomide disaster and the victims' ensuing struggle during the 1960s to obtain adequate compensation, especially in the UK and West Germany.
In Europe, a movement towards strict liability began with the Council of Europe Convention on Products Liability in regard to Personal Injury and Death (the Strasbourg Convention) in 1977, which never entered into force.
On July 25, 1985, the then-European Economic Community adopted the Product Liability Directive. In language resembling what Traynor wrote in "Escola" and "Greenman", the Directive's preface states that "liability without fault on the part of the producer is the sole means of adequately solving the problem, peculiar to our age of increasing technicality, of a fair apportionment of the risks inherent in modern technological production." The Directive gave each member state the option of imposing a total liability cap of no less than 70 million European Currency Units (ECU, the forerunner of the euro) per defect. Unlike the United States, the Directive only imposed strict liability upon "producers"—that is, manufacturers of raw materials, component parts, and finished products, as well as importers—and deviated significantly from the American model by deciding not to impose strict liability on purely domestic distributors or retailers. Because the drafters used the then 20-year-old Section 402A as their model, the Directive also omits later American developments, such as the differentiation between the three major types of product defects described above.
As Reimann reported in 2003, on the one hand, product liability had expanded around the world within the past two decades to become a "global phenomenon," and therefore, "the United States is no longer the only country with tough product liability rules." On the other hand, the picture looked very different when one "turn[ed] from the law on the books to the law in action." In the real world, the actual protection afforded to consumers by product liability law "depends heavily on whether claims are realistically enforceable," and that depends upon whether the procedural law of the forum state is actually able to facilitate access to justice.
Traditionally, European courts have provided little or no discovery by American standards. Where available, European discovery is rarely self-executing (that is, automatically effective by operation of law), meaning that the defendant and third parties have no obligation to disclose anything unless and until the plaintiff obtains a court order. Civil law countries strongly oppose the American principle of broad discovery in civil litigation. For example, since 1968 it has been a crime for a French company to produce commercial information in foreign legal proceedings without express authorization from a French court, and in turn, this has been raised as a defense to discovery by French defendants in American product liability cases. Since the defendant usually possesses most of the extant evidence of a product defect, in most European countries it is "very difficult, if not impossible, for a victim or her lawyer to investigate a product liability case."
Other obstacles—especially in civil law countries—include high filing fees, no right to a jury trial, low damages for pain and suffering, the unavailability of punitive damages, and the unavailability (before the 2010s) of class actions. As of 2003, there was "no" country outside of the United States where plaintiffs were able to recover noneconomic damages above US$300,000 for even the most catastrophic injuries. As of 2015, product liability in Europe "has remained a fairly minor field which generates fewer cases, more modest awards, and rarely makes it into the headlines" (in comparison to its American cousin).
The legislatures of many other countries outside the EU (then: EEC) subsequently enacted strict liability regimes based on the European model (that is, generally applying only to manufacturers and importers), including Israel (March 1980, based on an early proposed draft of the Directive), Brazil (September 1990), Peru (November 1991), Australia (July 1992), Russia (February 1992), Switzerland (December 1992), Argentina (October 1993), Japan (June 1994), Taiwan (June 1994), Malaysia (August 1999), South Korea (January 2000), Thailand (December 2007), and South Africa (April 2009).
As of 2015, in most countries outside of the United States and European Union, "product liability remains largely a regime of paper rules with little practical impact[.]"
The law to be applied in product liability cases is governed by the Convention on the Law Applicable to Products Liability of 1971 for the 11 countries that are party to it. The law of the country where the damage occurred applies if that country is also the residence of the person suffering damage, the principal place of business of the person held liable, or the place where the product was bought. If that is not the case, the law of the country of residence of the person suffering damage is used, provided the product was bought there or that country is the principal place of business of the person held liable.
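Read as a decision procedure, the Convention's cascade can be sketched in a few lines of code. The following is a minimal illustration of the two rules described above only, with invented field names; it simplifies away the Convention's further fallback rules and is not a statement of the actual law.

```python
from dataclasses import dataclass

@dataclass
class ProductLiabilityCase:
    place_of_injury: str       # country where the damage occurred
    victim_residence: str      # residence of the person suffering damage
    defendant_business: str    # principal place of business of the person held liable
    place_of_purchase: str     # country where the product was bought

def applicable_law(case: ProductLiabilityCase) -> str:
    # Rule 1: the law of the place of injury applies if that country is also
    # the victim's residence, the defendant's principal place of business,
    # or the place where the product was bought.
    if case.place_of_injury in (case.victim_residence,
                                case.defendant_business,
                                case.place_of_purchase):
        return case.place_of_injury
    # Rule 2: otherwise the law of the victim's residence applies, provided
    # the product was bought there or it is the defendant's principal place
    # of business.
    if case.victim_residence in (case.place_of_purchase,
                                 case.defendant_business):
        return case.victim_residence
    # The Convention's remaining fallback rules are not modeled in this sketch.
    raise NotImplementedError("fallback rules omitted")

# Injured and resident in France, German manufacturer, product bought in Belgium:
print(applicable_law(ProductLiabilityCase("FR", "FR", "DE", "BE")))  # FR
```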
Advocates of strict liability laws argue that strict products liability causes manufacturers to internalize costs they would normally externalize. Strict liability thus requires manufacturers to evaluate the full costs of their products. In this way, strict liability provides a mechanism for ensuring that a product's absolute good outweighs its absolute harm.
Between two parties who are not negligent (manufacturer and consumer), one will necessarily shoulder the costs of product defects. Proponents say it is preferable to place the economic costs on the manufacturer because it can better absorb them and pass them on to other consumers. The manufacturer thus becomes a de facto insurer against its defective products, with premiums built into the product's price.
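A back-of-the-envelope sketch of this "insurance premium" logic, using invented figures, illustrates how expected liability can be spread across units sold:

```python
# Hypothetical figures for illustration only.
units_sold = 100_000        # units shipped per year
defect_rate = 1 / 100_000   # probability that a unit causes a compensable injury
avg_damages = 500_000.0     # average damages per injury, in dollars

expected_liability = units_sold * defect_rate * avg_damages  # $500,000 per year
premium_per_unit = expected_liability / units_sold           # $5.00

# Each unit's price carries a $5 'premium' covering expected liability.
print(premium_per_unit)  # 5.0
```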
Strict liability also seeks to diminish the impact of information asymmetry between manufacturers and consumers. Manufacturers have better knowledge of their own products' dangers than do consumers. Therefore, manufacturers properly bear the burden of finding, correcting, and warning consumers of those dangers.
Strict liability reduces litigation costs, because a plaintiff need only prove causation, not imprudence. Where causation is easy to establish, parties to a strict liability suit will most likely settle, because only damages are in dispute.
Critics charge that strict liability creates risk of moral hazard. They claim that strict liability causes consumers to underinvest in care even when they are the least-cost avoiders. This, they say, results in a lower aggregate level of care than under a negligence standard. Proponents counter that people have enough natural incentive to avoid inflicting serious harm on themselves to mitigate this concern.
Critics charge that requiring manufacturers to internalize costs they would otherwise externalize increases the price of goods. They claim that in elastic, price-sensitive markets, price increases cause some consumers to seek substitutes for that product. As a result, they say, manufacturers may not produce the socially optimal level of goods. Proponents respond that these consumer opt-outs reflect a product whose absolute harm outweighs its absolute value; products that do more harm than good ought not be produced.
In the law and economics literature, there is a debate about whether liability and regulation are substitutes or complements. If they are substitutes, then either liability or regulation should be used. If they are complements, then the joint use of liability and regulation is optimal. | https://en.wikipedia.org/wiki?curid=24697 |
Proximate cause
In law, a proximate cause is an event sufficiently related to an injury that the courts deem the event to be the cause of that injury. There are two types of causation in the law: cause-in-fact, and proximate (or legal) cause. Cause-in-fact is determined by the "but for" test: But for the action, the result would not have happened. (For example, but for running the red light, the collision would not have occurred.) The action is a necessary condition, but may not be a sufficient condition, for the resulting injury. A few circumstances exist where the but-for test is ineffective (see But-for test). Since but-for causation is very easy to show (but for stopping to tie your shoe, you would not have missed the train and would not have been mugged), a second test is used to determine if an action is close enough to a harm in a "chain of events" to be legally valid. This test is called proximate cause. Proximate cause is a key principle of insurance and is concerned with how the loss or damage actually occurred. There are several competing theories of proximate cause (see Other factors). For an act to be deemed to cause a harm, both tests must be met; proximate cause is a legal limitation on cause-in-fact.
The formal Latin term for "but for" (cause-in-fact) causation is "sine qua non" causation.
A few circumstances exist where the "but for" test is complicated or ineffective. The primary examples are concurrent causes (where two separate acts of negligence combine to cause a single injury) and sufficient combined causes (where either of two merged forces would have been sufficient on its own to cause the harm, so that neither is strictly a "but for" cause).
Since but-for causation is very easy to show and does not assign culpability (but for the rain, you would not have crashed your car; the rain is not morally or legally culpable but still constitutes a cause), there is a second test used to determine if an action is close enough to a harm in a "chain of events" to be a legally culpable cause of the harm. This test is called proximate cause, from the Latin "proxima causa".
There are several competing theories of proximate cause.
The most common test of proximate cause under the American legal system is foreseeability. It determines whether the harm resulting from an action could reasonably have been predicted. The test is used in most cases only in respect to the type of harm. It is foreseeable, for example, that throwing a baseball at someone could cause them a blunt-force injury. But proximate cause is still met if a thrown baseball misses the target and knocks a heavy object off a shelf behind them, which causes a blunt-force injury. This principle is evident in the Irish case "Corrigan v HSE" (2011 IEHC 305).
This is also known as the "extraordinary in hindsight" rule.
Direct causation is a minority test, which addresses only the metaphysical concept of causation. It does not matter how foreseeable the result is, as long as the negligent party's physical activity can be tied to what actually happened. The main thrust of direct causation is that there are no intervening causes between an act and the resulting harm. An intervening cause has several requirements: it must 1) be independent of the original act, 2) be a voluntary human act or an abnormal natural event, and 3) occur in time between the original act and the harm.
Direct causation is the only theory that addresses only causation and does not take into account the culpability of the original actor.
The plaintiff must demonstrate that the defendant's action increased the risk that the particular harm suffered by the plaintiff would occur. If the action were repeated, the likelihood of the harm would correspondingly increase. This is also called foreseeable risk.
The harm within the risk (HWR) test determines whether the victim was among the class of persons who could foreseeably be harmed, and whether the harm was foreseeable within the class of risks. It is the strictest test of causation, made famous by Benjamin Cardozo in the "Palsgraf v. Long Island Railroad Co." case, decided under New York state law.
The first element of the test is met if the injured person was a member of a class of people who could be expected to be put at risk of injury by the action. For example, a pedestrian, as an expected user of sidewalks, is among the class of people put at risk by driving on a sidewalk, whereas a driver who is distracted by another driver driving on the sidewalk, and consequently crashes into a utility pole, is not.
The HWR test is no longer much used, outside of New York law. When it is used, it is used to consider the class of people injured, not the type of harm. The main criticism of this test is that it is preeminently concerned with culpability, rather than actual causation.
Referred to by the Reporters of the Second and Third Restatements of the Law of Torts as the "scope-of-the-risk" test, the term "Risk Rule" was coined by the University of Texas School of Law's Dean Robert Keeton. The rule is that "[a]n actor's liability is limited to those physical harms that result from the risks that made the actor's conduct tortious." Thus, the operative question is "what were the particular risks that made an actor's conduct negligent?" If the injury suffered is not the result of one of those risks, there can be no recovery. A classic illustration: it is negligent to hand a loaded gun to a small child because of the risk of accidental discharge; if the child instead drops the gun on a playmate's foot, the rule bars recovery for the crushed foot, because that injury did not result from the risk that made the conduct negligent.
The most obvious objection to this approach is that it requires courts to consider an arguably endless possibility of hypothetical situations. Not only can such an undertaking be an exercise in futility, but this approach lacks even a minimal amount of precision such that parties might be able to predict outcomes and results during litigation. Notwithstanding the already-complex nature of this and other questions relating to proximate or legal cause, this fluid standard could be misused by plaintiff-friendly or defense-favoring judges in attempts to vindicate their own personal philosophies regarding the appropriate reach of tort law.
The doctrine of proximate cause is notoriously confusing. The doctrine is phrased in the language of causation, but in most of the cases in which proximate cause is actively litigated, there is not much real dispute that the defendant but-for caused the plaintiff's injury. The doctrine is actually used by judges in a somewhat arbitrary fashion to limit the scope of the defendant's liability to a subset of the total class of potential plaintiffs who may have suffered some harm from the defendant's actions.
For example, in the two famous "Kinsman Transit" cases from the 2nd Circuit (exercising admiralty jurisdiction over a New York incident), it was clear that mooring a boat improperly could lead to the risk of that boat drifting away and crashing into another boat, and that both boats could crash into a bridge, which collapsed and blocked the river, and in turn, the wreckage could flood the land adjacent to the river, as well as prevent any traffic from traversing the river until it had been cleared. But under proximate cause, the property owners adjacent to the river could sue ("Kinsman I"), but not the owners of the boats or cargoes which could not move until the river was reopened ("Kinsman II").
Therefore, in the final version of the "Restatement (Third), Torts: Liability for Physical and Emotional Harm", published in 2010, the American Law Institute argued that proximate cause should be replaced with scope of liability. Chapter 6 of the Restatement is titled "Scope of Liability (Proximate Cause)." It begins with a special note explaining the Institute's decision to reframe the concept in terms of "scope of liability" because it does not involve true causation, and to also include "proximate cause" in the chapter title in parentheses to help judges and lawyers understand the connection between the old and new terminology. The Institute added that it "fervently hopes" the parenthetical will be unnecessary in a future fourth Restatement of Torts.
A related doctrine is the insurance law doctrine of efficient proximate cause. Under this rule, in order to determine whether a loss resulted from a cause covered under an insurance policy, a court looks for the predominant cause which sets into motion the chain of events producing the loss, which may not necessarily be the "last" event that immediately preceded the loss. Many insurers have attempted to contract around efficient proximate cause through the use of "anti-concurrent causation" (ACC) clauses, under which if a covered cause and a noncovered cause join together to cause a loss, the loss is not covered.
ACC clauses frequently come into play in jurisdictions where property insurance does not normally include flood insurance and expressly excludes coverage for floods. The classic example of how ACC clauses work is where a hurricane hits a building with wind and flood hazards "at the same time." If the evidence later shows that the wind blew off a building's roof and then water damage resulted only because there was no roof to prevent rain from entering, there would be coverage, but if the building was simultaneously flooded (i.e., because the rain caused a nearby body of water to rise or simply overwhelmed local sewers), an ACC clause would completely block coverage for the "entire" loss (even if the building owner could otherwise attribute damage to wind v. flood).
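The contrast between efficient proximate cause and an ACC clause can be expressed as a toy predicate over a chain of causes. This is a deliberately simplified sketch: it treats the first cause in the chain as the "predominant" one, a question real courts decide on the facts.

```python
from dataclasses import dataclass

@dataclass
class Cause:
    name: str
    covered: bool  # is this peril covered under the policy?

# Hurricane example from the text: wind is covered, flood is excluded.
chain = [Cause("wind", True), Cause("flood", False)]

def covered_efficient_proximate_cause(causes):
    """Coverage follows the predominant cause that set the loss in motion
    (simplified here as the first cause in the chain)."""
    return causes[0].covered

def covered_under_acc_clause(causes):
    """An anti-concurrent-causation clause defeats coverage for the
    entire loss if any excluded cause contributed."""
    return all(c.covered for c in causes)

print(covered_efficient_proximate_cause(chain))  # True  -> loss covered
print(covered_under_acc_clause(chain))           # False -> entire loss excluded
```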
A minority of jurisdictions have ruled ACC clauses to be unenforceable as against public policy, but they are generally enforceable in the majority of jurisdictions. | https://en.wikipedia.org/wiki?curid=24698 |
Peace
Peace is a concept of societal friendship and harmony in the absence of hostility and violence. In a social sense, peace is commonly used to mean a lack of conflict (such as war) and freedom from fear of violence between individuals or groups. Throughout history leaders have used peacemaking and diplomacy to establish a certain type of behavioral restraint that has resulted in the establishment of regional peace or economic growth through various forms of agreements or peace treaties. Such behavioral restraint has often resulted in the reduction of conflicts, greater economic interactivity, and consequently substantial prosperity.
"Psychological peace" (such as a peaceful thinking and emotions) is perhaps less well defined yet often a necessary precursor to establishing "behavioral peace." Peaceful behavior sometimes results from a "peaceful inner disposition." Some have expressed the belief that peace can be initiated with a certain quality of inner tranquility that does not depend upon the uncertainties of daily life for its existence. The acquisition of such a "peaceful internal disposition" for oneself and others can contribute to resolving of otherwise seemingly irreconcilable competing interests.
The term "peace" originates most recently from the Anglo-French "pes" and the Old French "pais", meaning "peace, reconciliation, silence, agreement" (11th century). The Anglo-French term "pes" itself comes from the Latin "pax", meaning "peace, compact, agreement, treaty of peace, tranquility, absence of hostility, harmony." The English word came into use in various personal greetings from c.1300 as a translation of the Hebrew word shalom, which, according to Jewish theology, comes from a Hebrew verb meaning 'to be complete, whole'. Although 'peace' is the usual translation, it is an incomplete one, because 'shalom', which is also cognate with the Arabic "salaam", has multiple other meanings in addition to peace, including justice, good health, safety, well-being, prosperity, equity, security, good fortune, and friendliness, as well as simply the greetings "hello" and "goodbye". At a personal level, peaceful behaviors are kind, considerate, respectful, just, and tolerant of others' beliefs and behaviors – tending to manifest goodwill.
This latter understanding of peace can also pertain to an individual's introspective sense or concept of her/himself, as in being "at peace" in one's own mind, as found in European references from c.1200. The early English term is also used in the sense of "quiet", reflecting calm, serene, and meditative approaches to family or group relationships that avoid quarreling and seek tranquility — an absence of disturbance or agitation.
In many languages, the word for peace is also used as a greeting or a farewell, for example the Hawaiian word aloha, as well as the Arabic word "salaam". In English the word peace is occasionally used as a farewell, especially for the dead, as in the phrase "rest in peace".
Wolfgang Dietrich, in the research project which led to the book "The Palgrave International Handbook of Peace Studies" (2011), maps the different meanings of peace in different languages and regions across the world. Later, in his "Interpretations of Peace in History and Culture" (2012), he groups the different meanings of peace into five peace families: Energetic/Harmony, Moral/Justice, Modern/Security, Postmodern/Truth, and Transrational, a synthesis of the positive aspects of the four previous families.
In ancient times and more recently, peaceful alliances between different nations were codified through royal marriages. Two examples, Hermodike I (c. 800 BC) and Hermodike II (c. 600 BC), were Greek princesses from the house of Agamemnon who married kings from what is now central Turkey. The unions of Phrygia and Lydia with the Aeolian Greeks resulted in regional peace, which facilitated the transfer of ground-breaking technological skills into Ancient Greece: respectively, the phonetic written script and the minting of coinage (the use of a token currency, where the value is guaranteed by the state). Both inventions were rapidly adopted by surrounding nations through further trade and cooperation and have been of fundamental benefit to the progress of civilization.
Since classical times, it has been noted that peace has sometimes been achieved by the victor over the vanquished by the imposition of ruthless measures. In his book "Agricola" the Roman historian Tacitus includes eloquent and vicious polemics against the rapacity and greed of Rome. One, that Tacitus says is by the Caledonian chieftain Calgacus, ends "Auferre trucidare rapere falsis nominibus imperium, atque ubi solitudinem faciunt, pacem appellant." (To ravage, to slaughter, to usurp under false titles, they call empire; and where they make a desert, they call it peace. — Oxford Revised Translation).
Discussion of peace is therefore at the same time a discussion on the form of such peace. Is it the simple absence of mass organized killing (war), or does peace require a particular morality and justice (a "just peace")?
A peace must be seen at least in two forms: a simple absence of war (sometimes called "negative peace"), and a peace that also requires the mutual settlement of relations and a measure of justice ("positive peace").
More recently, advocates for radical reform in justice systems have called for a public policy adoption of non-punitive, non-violent Restorative Justice methods, and many of those studying the success of these methods, including a United Nations working group on Restorative Justice, have attempted to re-define justice in terms related to peace. From the late 2000s on, a Theory of Active Peace has been proposed which conceptually integrates justice into a larger peace theory.
Another internationally important approach to peace is the international, national and local protection of cultural assets in the event of conflicts. The United Nations, UNESCO and Blue Shield International deal with the protection of cultural heritage. This also applies to the integration of United Nations peacekeeping. UNESCO Director-General Irina Bokova stated: "The protection of culture and heritage is a humanitarian and security policy imperative that also paves the way for resilience, reconciliation and peace." The protection of cultural heritage should preserve a society's particularly sensitive cultural memory, its growing cultural diversity, and the economic basis of a state, a municipality or a region. In many conflicts there is a deliberate attempt to destroy the opponent's cultural heritage, and there is also a connection between the destruction of cultural heritage and the causes of refugee flight. Protection can only be implemented in a sustainable manner through the fundamental cooperation and training of military units and civilian personnel, together with local people. The president of Blue Shield International, Karl von Habsburg, summed it up with the words: "Without the local community and without the local participants, that would be completely impossible".
The United Nations (UN) is an international organization whose stated aims are to facilitate cooperation in international law, international security, economic development, social progress, human rights, and achieving world peace. The UN was founded in 1945 after World War II to replace the League of Nations, to stop wars between countries, and to provide a platform for dialogue.
The UN, after approval by the Security Council, sends peacekeepers to regions where armed conflict has recently ceased or paused to enforce the terms of peace agreements and to discourage combatants from resuming hostilities. Since the UN does not maintain its own military, peacekeeping forces are voluntarily provided by member states of the UN. The forces, also called the "Blue Helmets", who enforce UN accords are awarded United Nations Medals, which are considered international decorations instead of military decorations. The peacekeeping force as a whole received the Nobel Peace Prize in 1988.
The obligation of the state to provide for domestic peace within its borders is usually charged to the police and other general domestic policing activities. The police are a constituted body of persons empowered by a state to enforce the law, to protect the lives, liberty and possessions of citizens, and to prevent crime and civil disorder. Their powers include the power of arrest and the legitimized use of force. The term is most commonly associated with the police forces of a sovereign state that are authorized to exercise the police power of that state within a defined legal or territorial area of responsibility. Police forces are often defined as being separate from the military and other organizations involved in the defense of the state against foreign aggressors; however, gendarmerie are military units charged with civil policing. Police forces are usually public sector services, funded through taxes.
It is the obligation of national security to provide for peace and security in a nation against foreign threats and foreign aggression. Potential causes of national insecurity include actions by other states (e.g. military or cyber attack), violent non-state actors (e.g. terrorist attack), organised criminal groups such as narcotic cartels, and also the effects of natural disasters (e.g. flooding, earthquakes). Systemic drivers of insecurity, which may be transnational, include climate change, economic inequality and marginalisation, political exclusion, and militarisation. In view of the wide range of risks, the preservation of peace and the security of a nation state have several dimensions, including economic security, energy security, physical security, environmental security, food security, border security, and cyber security. These dimensions correlate closely with elements of national power.
The principal forerunner of the United Nations was the League of Nations. It was created at the Paris Peace Conference of 1919, and emerged from the advocacy of Woodrow Wilson and other idealists during World War I. The Covenant of the League of Nations was included in the Treaty of Versailles in 1919, and the League was based in Geneva until its dissolution as a result of World War II and replacement by the United Nations. The high hopes widely held for the League in the 1920s, for example amongst members of the League of Nations Union, gave way to widespread disillusion in the 1930s as the League struggled to respond to challenges from Nazi Germany, Fascist Italy, and Japan.
One of the most important scholars of the League of Nations was Sir Alfred Zimmern. Like many of the other British enthusiasts for the League, such as Gilbert Murray and Florence Stawell – the so-called "Greece and peace" set – he came to this from the study of the classics.
The creation of the League of Nations, and the hope for informed public opinion on international issues (expressed for example by the Union for Democratic Control during World War I), also saw the creation after World War I of bodies dedicated to understanding international affairs, such as the Council on Foreign Relations in New York and the Royal Institute of International Affairs at Chatham House in London. At the same time, the academic study of international relations started to professionalize, with the creation of the first professorship of international politics, named for Woodrow Wilson, at Aberystwyth, Wales, in 1919.
The late 19th century idealist advocacy of peace which led to the creation of the Nobel Peace Prize, the Rhodes Scholarships, the Carnegie Endowment for International Peace, and ultimately the League of Nations, also saw the re-emergence of the ancient Olympic ideal. Led by Pierre de Coubertin, this culminated in the holding in 1896 of the first of the modern Olympic Games.
The highest honour awarded to a peacemaker is the Nobel Peace Prize, awarded since 1901 by the Norwegian Nobel Committee. Created in the will of Alfred Nobel, it is awarded annually to internationally notable persons. According to Nobel's will, the Peace Prize shall be awarded to the person who "...shall have done the most or the best work for fraternity between nations, for the abolition or reduction of standing armies and for the holding and promotion of peace congresses."
In creating the Rhodes Scholarships for outstanding students from the United States, Germany and much of the British Empire, Cecil Rhodes wrote in 1901 that 'the object is that an understanding between the three great powers will render war impossible and educational relations make the strongest tie'. This peace purpose of the Rhodes Scholarships was very prominent in the first half of the 20th century, and became prominent again in recent years under Warden of the Rhodes House Donald Markwell, a historian of thought about the causes of war and peace. This vision greatly influenced Senator J. William Fulbright in the goal of the Fulbright fellowships to promote international understanding and peace, and has guided many other international fellowship programs, including the Schwarzman Scholars to China created by Stephen A. Schwarzman in 2013.
The International Gandhi Peace Prize, named after Mahatma Gandhi, is awarded annually by the Government of India. It was launched in 1995, on the occasion of the 125th anniversary of Gandhi's birth, as a tribute to the ideals he espoused. This annual award is given to individuals and institutions for their contributions towards social, economic and political transformation through non-violence and other Gandhian methods. The award carries Rs. 10 million in cash, convertible into any currency in the world, a plaque and a citation. It is open to all persons regardless of nationality, race, creed or sex.
The Student Peace Prize is awarded biennially to a student or a student organization that has made a significant contribution to promoting peace and human rights.
The Culture of Peace News Network, otherwise known simply as CPNN, is a UN-authorized interactive online news network committed to supporting the global movement for a culture of peace.
Every year in the first week of November, the Sydney Peace Foundation presents the Sydney Peace Prize. The Sydney Peace Prize is awarded to an organization or an individual whose life and work has demonstrated significant contributions to:
The achievement of peace with justice locally, nationally or internationally
The promotion and attainment of human rights
The philosophy, language and practice of non-violence
A peace museum is a museum that documents historical peace initiatives. Many peace museums also provide advocacy programs for nonviolent conflict resolution. This may include conflicts at the personal, regional or international level.
Smaller institutions:
Religious beliefs often seek to identify and address the basic problems of human life, including the conflicts between, among, and within persons and societies. In ancient Greek-speaking areas the virtue of peace was personified as the goddess Eirene, and in Latin-speaking areas as the goddess Pax. Her image was typically represented by ancient sculptors as that of a full-grown woman, usually with a horn of plenty and scepter and sometimes with a torch or olive leaves.
Christians, who believe Jesus of Nazareth to be the Jewish Messiah called Christ (meaning Anointed One), interpret Isaiah 9:6 as a messianic prophecy of Jesus in which he is called the "Prince of Peace." In the Gospel of Luke, Zechariah celebrates his son John: And you, child, will be called prophet of the Most High, for you will go before the Lord to prepare his ways, to give his people knowledge of salvation through the forgiveness of their sins, because of the tender mercy of our God by which the daybreak from on high will visit us to shine on those who sit in darkness and death's shadow, to guide our feet into the path of peace.
Numerous pontifical documents on the Holy Rosary document a continuity of papal confidence in the Rosary as a means to foster peace. Following the encyclical "Mense maio" (1965), in which he urged the practice of the Rosary, "the prayer so dear to the Virgin and so much recommended by the Supreme Pontiffs," and the encyclical "Christi Matri" (1966), in which he invoked it to implore peace, Pope Paul VI stated in the apostolic exhortation "Recurrens mensis October" (1969) that the Rosary is a prayer that favors the great gift of peace.
The word "Islam" is derived from the root word "salam", which literally means peace; Muslims are called followers of Islam. The Quran states: "Those who have believed and whose hearts are assured by the remembrance of Allah. Unquestionably, by the remembrance of Allah, hearts are assured", and also: "O you who have believed, when you are told, "Space yourselves" in assemblies, then make space; Allah will make space for you. And when you are told, "Arise," then arise; Allah will raise those who have believed among you and those who were given knowledge, by degrees. And Allah is Acquainted with what you do."
Buddhists believe that peace can be attained once all suffering ends. They regard all suffering as stemming from cravings (in the extreme, greed), aversions (fears), or delusions. To eliminate such suffering and achieve personal peace, followers in the path of the Buddha adhere to a set of teachings called the Four Noble Truths — a central tenet in Buddhist philosophy.
Hindu texts contain the following passages:
Pacifism is the categorical opposition to the behaviors of war or violence as a means of settling disputes or of gaining advantage. Pacifism covers a spectrum of views, ranging from the belief that international disputes can and should all be resolved via peaceful behaviors; to calls for the abolition of various organizations which tend to institutionalize aggressive behaviors, such as the military or arms manufacturers; to opposition to any organization of society that might rely in any way upon governmental force. Groups which sometimes oppose the governmental use of force include anarchists and libertarians. Absolute pacifism opposes violent behavior under all circumstances, including defense of self and others.
Pacifism may be based on moral principles (a deontological view) or pragmatism (a consequentialist view). Principled pacifism holds that all forms of violent behavior are inappropriate responses to conflict, and are morally wrong. Pragmatic pacifism holds that the costs of war and inter-personal violence are so substantial that better ways of resolving disputes must be found.
Psychological or inner peace (i.e. peace of mind) refers to a state of being internally or spiritually at peace, with sufficient knowledge and understanding to keep oneself calm in the face of apparent discord or stress. Being internally "at peace" is considered by many to be a healthy mental state, or homeostasis, and to be the opposite of feeling stressed, mentally anxious, or emotionally unstable. Within the meditative traditions, the psychological or inward achievement of "peace of mind" is often associated with bliss and happiness.
Peace of mind, serenity, and calmness are descriptions of a disposition free from the effects of stress. In some meditative traditions, inner peace is believed to be a state of consciousness or enlightenment that may be cultivated by various types of meditation, prayer, t'ai chi ch'uan (太极拳, tàijíquán), yoga, or other various types of mental or physical disciplines. Many such practices refer to this peace as an experience of knowing oneself. An emphasis on finding one's inner peace is often associated with traditions such as Buddhism, Hinduism, and some traditional Christian contemplative practices such as monasticism, as well as with the New Age movement.
Satyagraha is a philosophy and practice of nonviolent resistance developed by Mohandas Karamchand Gandhi. He deployed satyagraha techniques in campaigns for Indian independence and also during his earlier struggles in South Africa.
The word "satyagraha" itself was coined through a public contest that Gandhi sponsored through the newspaper he published in South Africa, 'Indian Opinion', when he realized that neither the common, contemporary Hindu language nor the English language contained a word which fully expressed his own meanings and intentions when he talked about his nonviolent approaches to conflict. According to Gandhi's autobiography, the contest winner was Maganlal Gandhi (presumably no relation), who submitted the entry 'sadagraha', which Gandhi then modified to 'satyagraha'. Etymologically, this Hindic word means 'truth-firmness', and is commonly translated as 'steadfastness in the truth' or 'truth-force'.
Satyagraha theory also influenced Martin Luther King Jr. during the campaigns he led during the civil rights movement in the United States. The theory of satyagraha sees means and ends as inseparable. Therefore, it is contradictory to try to use violence to obtain peace. As Gandhi wrote: "They say, 'means are, after all, means'. I would say, 'means are, after all, everything'. As the means so the end..." A contemporary quote sometimes attributed to Gandhi, but also to A. J. Muste, sums it up: 'There is no way to peace; peace is the way.'
The following are monuments to peace:
Many different theories of "peace" exist in the world of peace studies, which involves the study of de-escalation, conflict transformation, disarmament, and cessation of violence. The definition of "peace" can vary with religion, culture, or subject of study.
The classical "realist" position is that the key to promoting order between states, and so of increasing the chances of peace, is the maintenance of a balance of power between states – a situation where no state is so dominant that it can "lay down the law to the rest". Exponents of this view have included Metternich, Bismarck, Hans Morgenthau, and Henry Kissinger. A related approach – more in the tradition of Hugo Grotius than Thomas Hobbes – was articulated by the so-called "English school of international relations theory" such as Martin Wight in his book "Power Politics" (1946, 1978) and Hedley Bull in "The Anarchical Society" (1977).
As the maintenance of a balance of power could in some circumstances require a willingness to go to war, some critics saw the idea of a balance of power as promoting war rather than promoting peace. This was a radical critique of those supporters of the Allied and Associated Powers who justified entry into World War I on the grounds that it was necessary to preserve the balance of power in Europe from a German bid for hegemony.
In the second half of the 20th century, and especially during the cold war, a particular form of balance of power – mutual nuclear deterrence – emerged as a widely held doctrine on the key to peace between the great powers. Critics argued that the development of nuclear stockpiles increased the chances of war rather than peace, and that the "nuclear umbrella" made it "safe" for smaller wars (e.g. the Vietnam war and the Soviet invasion of Czechoslovakia to end the Prague Spring), so making such wars more likely.
It was a central tenet of classical liberalism, for example among English liberal thinkers of the late 19th and early 20th century, that free trade promoted peace. For example, the Cambridge economist John Maynard Keynes (1883–1946) said that he was "brought up" on this idea and held it unquestioned until at least the 1920s. During the economic globalization in the decades leading up to World War I, writers such as Norman Angell argued that the growth of economic interdependence between the great powers made war between them futile and therefore unlikely. He made this argument in 1913. A year later Europe's economically interconnected states were embroiled in what would later become known as the First World War.
These ideas have again come to prominence among liberal internationalists during the globalization of the late 20th and early 21st century. These ideas have seen capitalism as consistent with, even conducive to, peace.
The "Peace & War Game" is an approach in game theory to understand the relationship between peace and conflicts.
The iterated game hypothesis was originally used by academic groups and in computer simulations to study possible strategies of cooperation and aggression.
As peace-makers became richer over time, it became clear that making war had greater costs than initially anticipated. One well-studied strategy that acquired wealth rapidly was modeled on Genghis Khan: a constant aggressor, making war continually to gain resources. This led, in contrast, to the development of the "provokable nice guy" strategy: a peace-maker until attacked, improved upon by occasionally forgiving attacks rather than always retaliating. By adding the results of all pairwise games for each player, one sees that multiple players gain wealth by cooperating with each other while bleeding a constantly aggressive player.
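A minimal sketch of such a simulation, in the style of an iterated prisoner's dilemma, is given below. The payoff values, the forgiveness rate, and the function names are illustrative assumptions rather than a reconstruction of any particular study.

```python
import random

# Payoffs for one round, from the row player's perspective ("C" = cooperate,
# "D" = defect/attack). Standard prisoner's-dilemma values, chosen arbitrarily.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def aggressor(history_self, history_other):
    """Constant aggressor ('Genghis Khan'): always makes war."""
    return "D"

def provokable_nice_guy(history_self, history_other, forgiveness=0.1):
    """Cooperates until attacked; retaliates, but occasionally forgives."""
    if not history_other or history_other[-1] == "C":
        return "C"
    return "C" if random.random() < forgiveness else "D"

def play(strategy_a, strategy_b, rounds=200):
    ha, hb, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(ha, hb), strategy_b(hb, ha)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        ha.append(move_a)
        hb.append(move_b)
    return score_a, score_b

print(play(provokable_nice_guy, provokable_nice_guy))  # both prosper
print(play(aggressor, provokable_nice_guy))            # aggressor is bled
```

Summing scores across many pairings reproduces the effect described above: two cooperators prosper together, while the constant aggressor wins individual encounters but accumulates less over repeated play against retaliating opponents.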
Socialist, communist, and left-wing liberal writers of the 19th and 20th centuries (e.g., Lenin, J.A. Hobson, John Strachey) argued that capitalism caused war (e.g. through promoting imperial or other economic rivalries that lead to international conflict). This led some to argue that international socialism was the key to peace.
However, in response to such writers in the 1930s who argued that capitalism caused war, the economist John Maynard Keynes (1883–1946) argued that managed capitalism could promote peace. This involved international coordination of fiscal/monetary policies, an international monetary system that did not pit the interests of countries against each other, and a high degree of freedom of trade. These ideas underlay Keynes's work during World War II that led to the creation of the International Monetary Fund and the World Bank at Bretton Woods in 1944, and later of the General Agreement on Tariffs and Trade (subsequently the World Trade Organization).
Borrowing from the teachings of Norwegian theorist Johan Galtung, one of the pioneers of the field of Peace Research, on 'Positive Peace', and on the writings of Maine Quaker Gray Cox, a consortium of theorists, activists, and practitioners in the experimental John Woolman College initiative have arrived at a theory of "active peace". This theory posits in part that peace is part of a triad, which also includes justice and wholeness (or well-being), an interpretation consonant with scriptural scholarly interpretations of the meaning of the early Hebrew word "shalom". Furthermore, the consortium have integrated Galtung's teaching of the meanings of the terms peacemaking, peacekeeping, and peacebuilding, to also fit into a triadic and interdependent formulation or structure. Vermont Quaker John V. Wilmerding posits five stages of growth applicable to individuals, communities, and societies, whereby one transcends first the 'surface' awareness that most people have of these kinds of issues, emerging successively into acquiescence, pacifism, passive resistance, active resistance, and finally into "active peace", dedicating themselves to peacemaking, peacekeeping or peace building.
One of the most influential theories of peace, especially since Woodrow Wilson led the creation of the League of Nations at the Paris Peace Conference of 1919, is that peace will be advanced if the intentional anarchy of states is replaced through the growth of international law promoted and enforced through international organizations such as the League of Nations, the United Nations, and other functional international organizations. One of the most important early exponents of this view was Sir Alfred Zimmern, for example in his 1936 book "The League of Nations and the Rule of Law".
Many "idealist" thinkers about international relations – e.g. in the traditions of Kant and Karl Marx – have argued that the key to peace is the growth of some form of solidarity between peoples (or classes of people) spanning the lines of cleavage between nations or states that lead to war.
One version of this is the idea of promoting international understanding between nations through the international mobility of students – an idea most powerfully advanced by Cecil Rhodes in the creation of the Rhodes Scholarships, and his successors such as J. William Fulbright.
Another theory is that peace can be developed among countries on the basis of active management of water resources.
Following Wolfgang Dietrich, Wolfgang Sützl and the Innsbruck School of Peace Studies, some peace thinkers have abandoned any single and all-encompassing definition of peace. Rather, they promote the idea of "many peaces". They argue that since no singular, correct definition of peace can exist, peace should be perceived as a plurality. This post-modern understanding of peace(s) is based on the philosophy of Jean-François Lyotard. It served as a foundation for the more recent concept of trans-rational peace(s) and elicitive conflict transformation.
In 2008 Dietrich enlarged his approach of the "many peaces" to the so-called "five families" of peace interpretations: the energetic, moral, modern, post-modern and trans-rational approach. Trans-rationality unites the rational and mechanistic understanding of modern peace in a relational and culture-based manner with spiritual narratives and energetic interpretations. The systemic understanding of trans-rational peaces advocates a client-centred method of conflict transformation, the so-called elicitive approach.
"Peace and conflict studies" is an academic field which identifies and analyses violent and nonviolent behaviours, as well as the structural mechanisms attending violent and non-violent social conflicts. This is to better understand the processes leading to a more desirable human condition. One variation,
"Peace studies" (irenology), is an interdisciplinary effort aiming at the prevention, de-escalation, and solution of conflicts. This contrasts with war studies (polemology), directed at the efficient attainment of victory in conflicts. Disciplines involved may include political science, geography, economics, psychology, sociology, international relations, history, anthropology, religious studies, and gender studies, as well as a variety of other disciplines.
Although peace is widely perceived as something intangible, various organizations have been making efforts to quantify and measure it. The Global Peace Index, produced by the Institute for Economics and Peace, is a well-known effort to evaluate peacefulness in countries based on 23 indicators of the absence of violence and the absence of the fear of violence.
The latest edition of the Index ranks 163 countries on their internal and external levels of peace. According to the 2017 Global Peace Index, Iceland is the most peaceful country in the world while Syria is the least peaceful one. The Fragile States Index (formerly known as the Failed States Index), created by the Fund for Peace, focuses on the risk of instability or violence in 178 nations. This index measures how fragile a state is through 12 indicators and sub-indicators that evaluate aspects of politics, social economy, and the military in each country. The 2015 edition reports that the most fragile nation is South Sudan, and the least fragile one is Finland. The University of Maryland publishes the Peace and Conflict Instability Ledger in order to measure peace. It grades 163 countries with 5 indicators, and pays the most attention to the risk of political instability or armed conflict over a three-year period. The most recent ledger shows that the most peaceful country is Slovenia, while Afghanistan is the most conflict-ridden nation. Besides these reports from the Institute for Economics and Peace, the Fund for Peace, and the University of Maryland, other organizations, including George Mason University, release indexes that rank countries in terms of peacefulness.
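Composite indices of this kind are, at bottom, weighted averages of normalized indicator scores. The sketch below illustrates only that arithmetic; the indicator names, scores, and weights are invented and do not reproduce the methodology of any index named above.

```python
# Hypothetical indicators scored from 1 (most peaceful) to 5 (least peaceful);
# names, scores, and weights are invented for illustration.
indicators = {
    "perceived_criminality": (2.0, 0.40),   # (score, weight)
    "political_instability": (1.5, 0.35),
    "weapons_imports":       (3.0, 0.25),
}

def composite_score(inds):
    """Weighted average of indicator scores; lower means more peaceful."""
    total_weight = sum(weight for _, weight in inds.values())
    return sum(score * weight for score, weight in inds.values()) / total_weight

print(round(composite_score(indicators), 3))  # 2.075
```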
The longest continuing period of peace and neutrality among currently existing states is observed in Sweden since 1814 and in Switzerland, which has had an official policy of neutrality since 1815. This was made possible partly by the periods of relative peace in Europe and the world known as Pax Britannica (1815–1914), Pax Europaea/Pax Americana (since 1950s), and Pax Atomica (also since the 1950s).
Other examples of long periods of peace are: | https://en.wikipedia.org/wiki?curid=24702 |
Portland Vase
The Portland Vase is a Roman cameo glass vase, which is dated to between AD 1 and AD 25, though low BC dates have some scholarly support. It is the best known piece of Roman cameo glass and has served as an inspiration to many glass and porcelain makers from about the beginning of the 18th century onwards. It is first recorded in Rome in 1600–1601, and since 1810 has been in the British Museum in London. It was bought by the museum in 1945 (GR 1945,0927.1) and is normally on display in Room 70.
The vase is about 25 centimetres (9.8 in) high and 56 centimetres (22 in) in circumference. It is made of violet-blue glass, and surrounded with a single continuous white glass cameo making two distinct scenes, depicting seven human figures, plus a large snake, and two bearded and horned heads below the handles, marking the break between the scenes.
The bottom of the vase was a cameo glass disc, also in blue and white, showing a head, presumed to be of Paris or Priam on the basis of the Phrygian cap it wears. This roundel clearly does not belong to the vase, and has been displayed separately since 1845. It may have been added to mend a break in antiquity or later, or may be the result of a conversion from an original amphora form (paralleled by a similar blue-glass cameo vessel from Pompeii); it was attached to the bottom from at least 1826.
The meaning of the images on the vase is unclear, and none of the many theories put forward has been found generally satisfactory. They fall into two main groups: mythological and historical, though a historical interpretation of a myth is also a possibility. Historical interpretations focus on Augustus, his family and his rivals, especially given the quality and expense of the object, and the somewhat remote neo-classicism of the style, which compares with some Imperial gemstone cameos featuring Augustus and his family with divine attributes, such as the Gemma Augustea, the Great Cameo of France and the Blacas Cameo (the last also in the British Museum). Interpretations of the portrayals have included that of a marine setting (due to the presence of a ketos or sea-snake), and of a marriage theme/context, as the vase may have been a wedding gift. Many scholars (including Charles Towneley) have concluded that the figures do not fit into a single iconographic set.
Another variant theory is that the vase dates to circa 32 BC and was commissioned by Octavian (later Caesar Augustus) as an attempt to promote his case against his fellow triumvirs, Mark Antony and Marcus Lepidus, in the period after the death of Julius Caesar. It rests on the skill of the famous Greek carver of engraved gems Dioskourides, who is recorded as active and at his peak circa 40–15 BC, and three of whose attributed cameos bear a close resemblance in line and quality to the Portland Vase figures. This theory proposes that the first two figures are Gaius Octavius, father of the future emperor, and Atia Balba Caesonia, his mother (hence Cupid with the arrow), who was said to have dreamed of being impregnated by Apollo in the form of a sea serpent (ketos); note the snake's prominent teeth. The onlooker with his staff could be Aeneas, a hero of the Trojan War who saved his father by carrying him on his back (hence his hunched position and his Trojan beard), traditionally regarded as a founder of the Roman people, and from whom the Julian gens, including Julius Caesar and Atia, claimed descent; he witnesses the conception of the man who would become Rome's savior as an Empire and the greatest of all its emperors.
On the reverse are Octavian; Octavia, his sister and the widow of Mark Antony (hence the downcast flambeau and broken tablets); and Livia, Octavian's third wife, who outlived him. Two of the figures are looking directly at each other. Octavian commanded Livia to divorce her then husband and marry him within a few weeks of their meeting; she was mother to the future Emperor Tiberius.
The vase would thus suggest that Octavian was descended partly from Apollo (and so partly divine, with shades of Achilles), whom he worshipped as a god and honored, together with Minerva, the Roman goddess of war, with private parties; that he was descended from the founder of Rome; and that he was connected to his great-uncle Julius Caesar, for whom as a young man he gave a remarkable funeral oration, and who adopted him in his will (Octavian's own father having died when he was only four). All the pieces and people fit this theory, and it explains most of the vase's mysteries, apart from who actually made it. It would have been a fabulously expensive piece to commission, such that few men of the period could have afforded it, and several attempts at creating the vase must have been made, as modern reproduction trials show (see below). Historians and archaeologists dismiss this modern theory because such works usually portrayed gods and goddesses in mythical allegories; accepting it requires believing that this remarkable vase broke convention and showed realism in portraiture, otherwise known solely from coins of the period, before it, in turn, was broken.
Cameo-glass vessels were probably all made within about two generations, as experiments when the blowing technique (discovered in about 50 BC) was still in its infancy. Recent research suggests that the Portland vase, like the majority of cameo-glass vessels, was made by the dip-overlay method, whereby an elongated bubble of glass was partially dipped into a crucible of white glass, before the two were blown together. After cooling the white layer was cut away to form the design.
The work in making a 19th-century copy proved to be incredibly painstaking, and based on this it is believed that the Portland Vase must have taken its original artisan no less than two years to produce. The cutting was probably performed by a skilled gem-cutter. It is believed that the cutter may have been Dioskourides, as engraved gems thought to be cut by him of a similar period and signed by him (Vollenweider 1966, see Gem in the collection of the Duke of Devonshire "Diomedes stealing the Palladium") are extant. This is confirmed by the Corning Museum in their 190-page study of the vase – see above.
According to a controversial theory by Rosemarie Lierke, the vase, along with the rest of Roman cameo glass, was moulded rather than cold-cut, probably using white glass powder for the white layer.
Jerome Eisenberg has argued in "Minerva" that the vase was produced in the 16th century AD and not antiquity, because the iconography is incoherent, but this theory has not been widely accepted.
One story suggests that it was discovered by Fabrizio Lazzaro in what was then thought to be the sarcophagus of the Emperor Alexander Severus (died 235) and his mother, at Monte del Grano near Rome, and excavated some time around 1582.
The first historical reference to the vase is in a letter of 1601 from the French scholar Nicolas Claude Fabri de Peiresc to the painter Peter Paul Rubens, where it is recorded as in the collection of Cardinal Francesco Maria Del Monte in Italy. In 1626 it passed into the Barberini family collection (which also included sculptures such as the Barberini Faun and Barberini Apollo) where it remained for some two hundred years, being one of the treasures of Maffeo Barberini, later Pope Urban VIII (1623–1644). It was at this point that the Severan connection is first recorded. The vase was known as the "Barberini Vase" in this period.
Between 1778 and 1780, Sir William Hamilton, British ambassador in Naples, bought the vase from James Byres, a Scottish art dealer, who had acquired it after it was sold by Cornelia Barberini-Colonna, Princess of Palestrina. She had inherited the vase from the Barberini family. Hamilton brought it to England on his next leave, after the death of his first wife, Catherine. In 1784, with the assistance of his niece, Mary, he arranged a private sale of the vase to Margaret Cavendish-Harley, widow of William Bentinck, 2nd Duke of Portland, and dowager Duchess of Portland. It was sold at auction in 1786 and passed into the possession of the duchess's son, William Cavendish-Bentinck, 3rd Duke of Portland.
The 3rd Duke lent the original vase to Josiah Wedgwood and then to the British Museum for safe-keeping, by which point it was known as the "Portland Vase". It was deposited there permanently by the 4th Duke in 1810, after a friend of his broke its base. It has remained in the British Museum ever since, apart from 1929–1932, when the 6th Duke put it up for sale at Christie's (where it failed to reach its reserve). It was finally purchased by the museum from the 7th Duke in 1945, with the aid of a bequest from James Rose Vallentin.
The 3rd Duke lent the vase to Josiah Wedgwood, who had already had it described to him by the sculptor John Flaxman as "the finest production of Art that has been brought to England and seems to be the very apex of perfection to which you are endeavoring". Wedgwood devoted four years to painstaking trials at duplicating the vase – not in glass but in black and white jasperware. He had problems with his copies, ranging from cracking and blistering (clearly visible on the example at the Victoria and Albert Museum) to the sprigged reliefs 'lifting' during the firing, and in 1786 he feared that he could never apply the jasper relief thinly enough to match the glass original's subtlety and delicacy. He finally managed to perfect it in 1790, with the issue of the "first edition" of copies (some of this edition, including the V&A one, copying the cameo's delicacy by a combination of undercutting and shading the reliefs in grey), and it marks his last major achievement.
Wedgwood put the first edition on private show between April and May 1790, with that exhibition proving so popular that visitor numbers had to be restricted by only printing 1,900 tickets, before going on show in his public London showrooms. (One ticket to the private exhibition, illustrated by Samuel Alkin and printed with 'Admission to see Mr Wedgwood's copy of The Portland Vase, Greek Street, Soho, between 12 o'clock and 5', was bound into the Wedgwood catalogue on view in the Victoria and Albert Museum's British Galleries.) As well as the V&A copy (said to have come from the collection of Wedgwood's grandson, the naturalist Charles Darwin), others are held at the Fitzwilliam Museum (this is the copy sent by Wedgwood to Erasmus Darwin which his descendants lent to the Museum in 1963 and later sold to them); the Indianapolis Museum of Art and the Department of Prehistory and Europe at the British Museum.
The vase also inspired a 19th-century competition to duplicate its cameo-work in glass, with Benjamin Richardson offering a £1,000 prize to anyone who could achieve that feat. Taking three years, glass maker Philip Pargeter made a copy and John Northwood engraved it, to win the prize. This copy is in the Corning Museum of Glass in Corning, New York.
The Wedgwood Museum, in Barlaston, near Stoke-on-Trent, contains a display describing the trials of replicating the vase, and several examples of the early experiments are shown.
At 3:45 p.m. on 7 February 1845, the vase was shattered by William Lloyd, who, after drinking all the previous week, threw a nearby sculpture on top of the case, smashing both it and the vase. He was arrested and charged with the crime of willful damage. When his lawyer pointed out an error in the wording of the act which seemed to limit its application to the destruction of objects worth no more than five pounds, he was convicted instead of the destruction of the glass case in which the vase had sat. He was ordered to pay a fine of three pounds (approximately 350 pounds equivalent in 2017) or spend two months in prison. He remained in prison until an anonymous benefactor paid the fine by mail. The name William Lloyd is thought to be a pseudonym. Investigators hired by the British Museum concluded that he was actually William Mulcahy, a student who had gone missing from Trinity College. Detectives reported that the Mulcahy family was impoverished. The owner of the vase declined to bring a civil action against William Mulcahy because he did not want his family to suffer for "an act of folly or madness which they could not control".
The vase was pieced together with fair success in 1845 by British Museum restorer John Doubleday, though he was unable to replace thirty-seven small fragments. These appear to have been put into a box and forgotten. After Doubleday's death, a fellow restorer from the British Museum took them to Mr. G.H. Gabb, a box maker, who was asked to make a box with thirty-seven compartments, one for each fragment. However, that restorer also died and the box was never collected. After Gabb's death, his executrix, Miss Amy Reeves, brought in Mr. G.A. Croker of Putney to value Gabb's effects. Croker, not knowing what the fragments were, brought them to the museum to ask for help in identifying them; the keeper Bernard Ashmole received them on 5 October 1948.
By November 1948, the restoration appeared aged and it was decided to restore the vase again. It was dismantled by conservator J.W.R. Axtell in mid-November 1948. The pieces were examined by D.B. Harden and W.A. Thorpe, who confirmed that the circular glass base removed in 1845 was not original. Axtell then carried out a reconstruction, completed by 2 February 1949, in which he was only successful in replacing three of the 37 loose fragments. He reportedly used "new adhesives" for this restoration, which some thought might be epoxy resins or shellac, but were later discovered to simply be the same type of animal glue that Doubleday used in 1845. He also filled some areas with wax. No documentation of his work was produced.
By the late 1980s, the adhesive was again yellowing and brittle. Although the vase was shown at the British Museum as part of the "Glass of the Caesars" exhibition (November 1987 – March 1988), it was too fragile to travel to other locations afterwards. Instead, another reconstruction was performed between 1 June 1988 and 1 October 1989 by Nigel Williams and Sandra Smith. The pair was overseen by David Akehurst (CCO of Glass and Ceramics) who had assessed the vase's condition during the "Glass of the Caesars" exhibition and decided to go ahead with reconstruction and stabilization. The treatment had scholarly attention and press coverage. The vase was photographed and drawn to record the position of fragments before dismantling; the BBC filmed the conservation process. Conservation scientists at the museum tested many adhesives for long-term stability, choosing an epoxy resin with excellent ageing properties. Reassembly revealed some fragments had been filed down during the restorations, complicating the process. All but a few small splinters were integrated. Gaps were filled with blue or white resin.
Little sign of the original damage is visible, and, except for light cleaning, it is hoped that the vase should not require major conservation work for at least another century.
Pyrenees
The Pyrenees is a mountain range between the Iberian Peninsula and France. Reaching a height of 3,404 metres (11,168 ft) at the peak of Aneto, it extends for about 491 km (305 mi) from its union with the Cantabrian Mountains to the Mediterranean Sea (Cap de Creus).
For the most part, the main crest forms a divide between Spain and France, with the microstate of Andorra sandwiched in between. Historically, the Crown of Aragon and the Kingdom of Navarre extended on both sides of the mountain range.
In Greek mythology, Pyrene is a princess who gave her name to the Pyrenees. The Greek historian Herodotus says Pyrene is the name of a town in Celtic Europe. According to Silius Italicus, she was the virgin daughter of Bebryx, a king in Mediterranean Gaul by whom the hero Hercules was given hospitality during his quest to steal the cattle of Geryon during his famous Labours. Hercules, characteristically drunk and lustful, violates the sacred code of hospitality and rapes his host's daughter. Pyrene gives birth to a serpent and runs away to the woods, afraid that her father will be angry. Alone, she pours out her story to the trees, attracting the attention of wild beasts who tear her to pieces.
After his victory over Geryon, Hercules passes through the kingdom of Bebryx again, finding the girl's lacerated remains. As is often the case in stories of this hero, the sober Hercules responds with heartbroken grief and remorse at the actions of his darker self, and lays Pyrene to rest tenderly, demanding that the surrounding geography join in mourning and preserve her name: "struck by Herculean voice, the mountaintops shudder at the ridges; he kept crying out with a sorrowful noise 'Pyrene!' and all the rock-cliffs and wild-beast haunts echo back 'Pyrene!' … The mountains hold on to the wept-over name through the ages." Pliny the Elder connects the story of Hercules and Pyrene to Lusitania, but rejects it as "fabulosa", highly fictional.
Other classical sources derived the name from the Greek word for fire, "pyr". According to the Greek historian Diodorus Siculus, "in ancient times, we are told, certain herdsmen left a fire and the whole area of the mountains was entirely consumed; and due to this fire, since it raged continuously day after day, the surface of the earth was also burned and the mountains, because of what had taken place, were called the Pyrenees."
The Spanish Pyrenees are part of the following provinces, from east to west: Gerona, Barcelona, Lérida (all in Catalonia), Huesca (in Aragon), Navarra (in Navarre) and Gipuzkoa (in the Basque Country).
The French Pyrenees are part of the following "départements", from east to west: Pyrénées-Orientales, Aude, Ariège, Haute-Garonne, Hautes-Pyrénées, and Pyrénées-Atlantiques (the latter two of which include the Pyrenees National Park).
The independent principality of Andorra is sandwiched in the eastern portion of the mountain range between the Spanish Pyrenees and French Pyrenees.
Physiographically, the Pyrenees may be divided into three sections: the Atlantic (or Western), the Central, and the Eastern Pyrenees. Together, they form a distinct physiographic province of the larger Alpine System division.
In the Western Pyrenees, from the Basque mountains near the Bay of Biscay of the Atlantic Ocean, the average elevation gradually increases from west to east.
The Central Pyrenees extend eastward from the Somport pass to the Aran Valley, and they include the highest summits of this range:
In the Eastern Pyrenees, with the exception of one break at the eastern extremity of the "Pyrénées Ariègeoises" in the Ariège area, the mean elevation is remarkably uniform until a sudden decline occurs in the easternmost portion of the chain known as the Albères.
Most foothills of the Pyrenees are on the Spanish side, where there is a large and complex system of ranges stretching from Spanish Navarre, across northern Aragon and into Catalonia, almost reaching the Mediterranean coast, with high summits. At the eastern end on the southern side lies a distinct area known as the Sub-Pyrenees.
On the French side the slopes of the main range descend abruptly and there are no foothills except in the Corbières Massif in the northeastern corner of the mountain system.
The Pyrenees are older than the Alps: their sediments were first deposited in coastal basins during the Paleozoic and Mesozoic eras. Between 100 and 150 million years ago, during the Lower Cretaceous Period, the Bay of Biscay fanned out, pushing present-day Spain against France and applying intense compressional pressure to large layers of sedimentary rock. The intense pressure and uplifting of the Earth's crust first affected the eastern part and moved progressively to the entire chain, culminating in the Eocene Epoch.
The eastern part of the Pyrenees consists largely of granite and gneissose rocks, while in the western part the granite peaks are flanked by layers of limestone. The massive and unworn character of the chain comes from its abundance of granite, which is particularly resistant to erosion, as well as weak glacial development.
The upper parts of the Pyrenees contain low-relief surfaces forming a peneplain. This peneplain originated no earlier than in Late Miocene times. Presumably it formed at altitude, as extensive sedimentation raised the local base level considerably.
Conspicuous features of Pyrenean scenery include its waterfalls and cirques.
The highest waterfall is Gavarnie (462 m or 1,515 ft), at the head of the Gave de Pau; the Cirque de Gavarnie, in the same valley, together with the nearby Cirque de Troumouse and Cirque d'Estaubé, are notable examples of the cirque formation.
Low passes are lacking, and the principal roads and the railroads between France and Spain run only in the lowlands at the western and eastern ends of the Pyrenees, near sea level. Because of the lack of low passes, a number of tunnels have been created beneath the passes at Somport, Envalira, and Puymorens, with new routes through the center of the range at Bielsa and Vielha.
A notable visual feature of this mountain range is La Brèche de Roland, a gap in the ridge line, which, according to legend, was created by Roland.
The metallic ores of the Pyrenees are not, in general, of much importance now, though in the past there were iron mines at several locations in Andorra, as well as at Vicdessos in Ariège and at the foot of Canigou in Pyrénées-Orientales. Coal deposits capable of being profitably worked are situated chiefly on the Spanish slopes, but the French side has beds of lignite. The open pit of Trimoun, near the commune of Luzenac (Ariège), is one of the greatest sources of talc in Europe.
Mineral springs are abundant and remarkable, and especially noteworthy are the hot springs. The hot springs, among which those of Les Escaldes in Andorra, Panticosa and Lles in Spain, Ax-les-Thermes, Bagnères-de-Luchon and Eaux-Chaudes in France may be mentioned, are sulfurous and mostly situated high, near the contact of the granite with the stratified rocks. The lower springs, such as those of Bagnères-de-Bigorre (Hautes-Pyrénées), Rennes-les-Bains (Aude), and Campagne-sur-Aude (Aude), are mostly selenitic and not hot.
The amount of precipitation the range receives, including rain and snow, is much greater in the western than in the eastern Pyrenees because of the moist air that blows in from the Atlantic Ocean over the Bay of Biscay. After dropping its moisture over the western and central Pyrenees, the air is left dry over the eastern Pyrenees.
Sections of the mountain range vary in more than one respect. There are some glaciers in the western and snowy central Pyrenees, but there are no glaciers in the eastern Pyrenees because there is insufficient snowfall to cause their development. Glaciers are confined to the northern slopes of the central Pyrenees, and do not descend, like those of the Alps, far down into the valleys, but rather have their greatest lengths along the direction of the mountain chain. They form, in fact, in a narrow zone near the crest of the highest mountains. Here, as in the other great mountain ranges of central Europe, there is substantial evidence of a much wider expanse of glaciation during the glacial periods. The best evidence of this is in the valley of Argelès-Gazost, between Lourdes and Gavarnie, in the département of Hautes-Pyrénées.
The altitude of the annual snow-line varies in different parts of the Pyrenees. On average, seasonal snow is observed at least 50% of the time at the higher elevations between December and April.
A still more marked effect of the preponderance of rainfall in the western half of the chain is seen in the vegetation. The lower mountains in the extreme west are wooded, but the extent of forest declines as one moves eastwards. The eastern Pyrenees are peculiarly wild and barren, all the more since it is in this part of the chain that granitic masses prevail. Also moving from west to east, there is a change in the composition of the flora, with the change becoming most evident as one passes the centre of the mountain chain, from which point the Corbières Massif stretches north-eastwards towards the central plateau of France. Though the difference in latitude is only about 1°, in the west the flora resembles that of central Europe, while in the east it is distinctly Mediterranean in character. The Pyrenees are nearly as rich in endemic species as the Alps, and among the most remarkable instances of that endemism is the occurrence of the monotypic genus "Xatardia" (family Apiaceae), which grows only on a high alpine pass between the Val d'Eynes and Catalonia. Other examples include "Arenaria montana", "Bulbocodium vernum", and "Ranunculus glacialis". The genus most abundantly represented in the range is that of the saxifrages, several species of which are endemic here.
In their fauna the Pyrenees present some striking instances of endemism. The Pyrenean desman is found only in some of the streams of the northern slopes of these mountains; the only other desmans are confined to the rivers of the Caucasus in southern Russia. The Pyrenean euprocte ("Euproctus pyrenaicus"), an endemic relative of the salamander, also lives in streams and lakes located at high altitudes. Among other peculiarities of Pyrenean fauna are blind insects in the caverns of Ariège, the principal genera of which are "Anophthalmus" and "Adelops".
The Pyrenean ibex mysteriously became extinct in January 2000; the native Pyrenean brown bear was hunted to near-extinction in the 1990s, but it was re-introduced in 1996 when three bears were brought from Slovenia. The bear population has bred successfully, and there are now believed to be about 15 brown bears in the central region around Fos, but only four native ones are still living in the Aspe Valley.
Principal nature reserves and national parks include the Pyrénées National Park on the French side and, on the Spanish side, the Ordesa y Monte Perdido and Aigüestortes i Estany de Sant Maurici national parks.
The Pyrenean region possesses a varied ethnology, folklore and history: see Andorra; Aragon; Ariège; Basque Country; Béarn; Catalonia; Navarre; Roussillon. For their history, see also Almogavars, Marca Hispanica.
The principal languages spoken in the area are Spanish, French, Aragonese, Catalan (in Catalonia and Andorra), and Basque.
Also spoken, to a lesser degree, is the Occitan language, consisting of the Gascon and Languedocien dialects in France and the Aranese dialect in the Aran Valley.
An important feature of rural life in the Pyrenees is 'transhumance', the moving of livestock from the farms in the valleys up to the higher grounds of the mountains for the summer. In this way the farming communities could keep larger herds than the lowland farms could support on their own. The principal animals moved were cows and sheep, but historically most members of farming families also moved to the higher pastures along with their animals, so they also took with them pigs, horses and chickens. Transhumance thus took the form of a mass biannual migration, moving uphill in May or June and returning to the farms in September or October. During the summer period, the families would live in basic stone cabins in the high mountains.
Nowadays, industrialisation and changing agriculture practices have diminished the custom. However, the importance of transhumance continues to be recognised through its celebration in popular festivals.
The Pic du Midi Observatory is an astronomical observatory located at 2,877 metres on top of the Pic du Midi de Bigorre in the French Pyrenees. Construction of the observatory began in 1878, and its 8-metre dome was completed in 1908.
The observatory housed a powerful mechanical equatorial reflector, which was used in 1909 to formally discredit the Martian canal theory. A 1.06-meter (42-inch) telescope, funded by NASA, was installed in 1963 and was used to take detailed photographs of the surface of the Moon in preparation for the Apollo missions. Other studies conducted in 1965 provided a detailed analysis of the composition of the atmospheres of Mars and Venus; this served as a basis for Jet Propulsion Laboratory scientists to predict that these planets had no life.
Since 1980, the observatory has had a 2-metre telescope, the largest in France. Overtaken by the giant telescopes built in recent decades, the observatory is today widely open to amateur astronomers.
The Odeillo solar furnace is the world's largest solar furnace. It is situated in Font-Romeu-Odeillo-Via, in the department of Pyrénées-Orientales, in the south of France. Built between 1962 and 1968, it is 54 metres (177 ft) high and 48 metres (157 ft) wide, and includes 63 heliostats. The site was chosen because of the length and quality of its sunshine with direct light (more than 2,500 h/year) and the purity of its atmosphere (high altitude and low average humidity).
This furnace serves as a scientific research site for studying materials at very high temperatures. Temperatures above 3,500 °C (6,330 °F) can be obtained in a few seconds; the furnace also provides rapid temperature changes, and therefore allows the study of thermal shock effects.
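To see why concentrated sunlight can reach such temperatures, a rough blackbody estimate is helpful. The sketch below assumes a concentration factor of C ≈ 10,000 and direct insolation of I ≈ 1,000 W/m²; both are illustrative order-of-magnitude values, not figures from the source.

    \sigma T^4 = C\,I
    \quad\Longrightarrow\quad
    T = \left(\frac{C\,I}{\sigma}\right)^{1/4}
      = \left(\frac{10^4 \times 10^3\ \mathrm{W\,m^{-2}}}{5.67\times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}}\right)^{1/4}
      \approx 6{,}500\ \mathrm{K}

Real losses (reflection, imperfect focus, reradiation and convection) keep working temperatures well below this idealized ceiling, consistent with the roughly 3,800 K (3,500 °C) quoted above.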
No big cities are in the range itself. The largest urban area close to the Pyrenees is Toulouse (Haute-Garonne), France, with a population of 1,330,954 in its metropolitan area. On the Spanish side, Pamplona (Navarre) is the closest city, with a population of 319,208 in its metropolitan area. Inside the Pyrenees, the main towns are Andorra la Vella (22,256) in Andorra, Jaca (12,813) in Spain, and Lourdes (13,976) and Foix (10,046) in France.
Numerous summits of the Pyrenees exceed 3,000 meters; the highest is Aneto, at 3,404 meters.
Both sides of the Pyrenees are popular spots for winter sports such as alpine skiing and mountaineering. The Pyrenees are also a good place for athletes to do high-altitude training in the summertime, such as by bicycling and cross-country running.
In the summer and the autumn, the Pyrenees are usually featured in two of cycling's grand tours, the Tour de France held annually in July and the Vuelta a España held in September. The stages held in the Pyrenees are often crucial legs of both tours, drawing hundreds of thousands of spectators to the region.
Three main long-distance footpaths run the length of the mountain range: the GR 10 across the northern slopes, the GR 11 across the southern slopes, and the HRP which traverses peaks and ridges along a high altitude route. In addition, there are numerous marked and unmarked trails throughout the region.
"Pirena" is a dog-mushing competition held in the Pyrenees.
Numerous ski resorts operate on both the French and Spanish sides of the Pyrenees.
Planetary nomenclature
Planetary nomenclature, like terrestrial nomenclature, is a system of uniquely identifying features on the surface of a planet or natural satellite so that the features can be easily located, described, and discussed. Since the invention of the telescope, astronomers have given names to the surface features they have discerned, especially on the Moon and Mars. To standardize planetary nomenclature, the International Astronomical Union (IAU) was assigned in 1919 the task of selecting official names for features on Solar System bodies.
When images are first obtained of the surface of a planet or satellite, a theme for naming features is chosen and a few important features are named, usually by members of the appropriate IAU task group (a commonly accepted planet-naming group). Later, as higher resolution images and maps become available, additional features are named at the request of investigators mapping or describing specific surfaces, features, or geologic formations. Anyone may suggest that a specific name be considered by a task group. If the members of the task group agree that the name is appropriate, it can be retained for use when there is a request from a member of the scientific community that a specific feature be named. Names successfully reviewed by a task group are submitted to the IAU Working Group for Planetary System Nomenclature (WGPSN). Upon successful review by the members of the WGPSN, names are considered provisionally approved and can be used on maps and in publications as long as the provisional status is clearly stated. Provisional names are then presented for adoption to the IAU's General Assembly, which met triennially in the past, and which now adopts nomenclature for planetary surface features as required. A name is not considered to be official — that is, "adopted" — until the General Assembly has given its approval.
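As a reading aid, the staged review described above can be modeled as a simple state machine. The sketch below is an illustrative model only; the class and function names are invented for this example and are not an IAU system or API.

    from enum import Enum, auto

    class NameStatus(Enum):
        PROPOSED = auto()              # suggested by anyone, held by a task group
        TASK_GROUP_RETAINED = auto()   # task group agrees the name is appropriate
        PROVISIONAL = auto()           # reviewed by the WGPSN; usable on maps if flagged
        ADOPTED = auto()               # official once adopted by the IAU

    def advance(status: NameStatus) -> NameStatus:
        """Move a name one step through the review pipeline (stops at ADOPTED)."""
        order = list(NameStatus)
        i = order.index(status)
        return order[min(i + 1, len(order) - 1)]

    # A name is not official until it has passed every stage:
    status = NameStatus.PROPOSED
    while status is not NameStatus.ADOPTED:
        status = advance(status)
    print(status)  # NameStatus.ADOPTED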
Names adopted by the IAU must follow various rules and conventions established and amended through the years by the Union.
In addition to these general rules, each task group develops additional conventions as it formulates an interesting and meaningful nomenclature for individual planetary bodies.
Names for all planetary features include a descriptor term, with the exception of two feature types. For craters, the descriptor term is implicit. Some features named on Io and Triton do not carry a descriptor term because they are ephemeral.
In general, the naming convention for a feature type remains the same regardless of its size. Exceptions to this rule are valleys and craters on Mars and Venus; naming conventions for these features differ according to size.
One feature classification, "regio", was originally used on early maps of the Moon and Mercury (drawn from telescopic observations) to describe vague albedo features. It is now used to delineate a broad geographic region.
Named features on bodies so small that coordinates have not yet been determined are identified on drawings of the body that are included in the IAU Transactions volume of the year when the names were adopted. Satellite rings and gaps in the rings are named for scientists who have studied these features; drawings that show these names are also included in the pertinent Transactions volume. Names for atmospheric features are informal at present; a formal system will be chosen in the future.
The boundaries of many large features (such as "terrae, regiones, planitiae" and "plana") are not topographically or geomorphically distinct; the coordinates of these features are identified from an arbitrarily chosen center point. Boundaries (and thus coordinates) may be determined more accurately from geochemical and geophysical data obtained by future missions.
During active missions, small surface features are often given informal names. These may include landing sites, spacecraft impact sites, and small topographic features, such as craters, hills, and rocks. Such names will not be given official status by the IAU, except as provided for by Rule 2 above. As for the larger objects, official names for any such small features would have to conform to established IAU rules and categories.
All but three features on Venus are named after females. These three exceptions were named before the convention was adopted, being respectively Alpha Regio, Beta Regio, and Maxwell Montes which is named after James Clerk Maxwell.
When space probes have landed on Mars, individual small features such as rocks, dunes, and hollows have often been given informal names. Many of these are frivolous: features have been named after ice cream (such as Cookies N Cream); cartoon characters (such as SpongeBob SquarePants and Patrick); and '70s music acts (such as ABBA and the Bee Gees).
Features on Deimos are named after authors who wrote about the Martian satellites. There are currently two named features on Deimos, Swift crater and Voltaire crater, named after Jonathan Swift and Voltaire, both of whom predicted the presence of Martian moons.
All features on Phobos are named after scientists involved with the discovery, dynamics, or properties of the Martian satellites or people and places from Jonathan Swift's "Gulliver's Travels".
Features on Amalthea are named after people and places associated with the Amalthea myth.
Features on Thebe are named after people and places associated with the Thebe myth. There is only one named feature on Thebe - Zethus Crater.
Features on Janus are named after people from the myth of Castor and Pollux (twins).
Features on Epimetheus are likewise named after people from the myth of Castor and Pollux (twins).
Features on Mimas are named after people and places from Malory's "Le Morte d'Arthur" legends (Baines translation).
Features on Enceladus are named after people and places from Burton's "Arabian Nights".
Features on Tethys are named after people and places from Homer's "Odyssey".
Features on Dione are named after people and places from Virgil's "Aeneid".
Features on Rhea are named after people and places from creation myths.
Features on Hyperion are named after Sun and Moon deities.
Features on Iapetus are named after people and places from Sayers' translation of the "Chanson de Roland"; the only exception is Cassini Regio, which is named after its discoverer, Giovanni Cassini.
Satellites of Uranus are named for characters from the works of William Shakespeare.
Features on Puck are named after mischievous (Pucklike) spirits (class).
Features on Miranda are named after characters and places from Shakespeare's plays.
Features on Ariel are named after light spirits (individual and class).
Features on Umbriel are named after dark spirits (individual).
Features on Titania are named after female Shakespearean characters and places.
Features on Oberon are named after Shakespearean tragic heroes and places.
There are currently no named features on the small satellites of Uranus; the naming convention for them, however, is heroines from the works of Shakespeare and Pope.
Features on Proteus are to be named after water-related spirits, gods or goddesses who are neither Greek nor Roman. The only named feature on Proteus is crater Pharos.
Geological features on Triton should be assigned aquatic names, excluding those which are Roman and Greek in origin. Possible themes for individual descriptor terms include worldwide aquatic spirits, famous terrestrial fountains or fountain locations, terrestrial aquatic features, famous terrestrial geysers or geyser locations and terrestrial islands.
There are currently no named features on Nereid. When features are discovered, they are to be named after individual nereids.
Features on other satellites of Neptune, once discovered, should be named after gods and goddesses associated with Neptune/Poseidon mythology or generic mythological aquatic beings.
In February 2017, the IAU approved naming themes for surface features on Pluto and its satellites.
North American P-51 Mustang
The North American Aviation P-51 Mustang is an American long-range, single-seat fighter and fighter-bomber used during World War II and the Korean War, among other conflicts. The Mustang was designed in April 1940 by a design team headed by James Kindelberger of North American Aviation (NAA) in response to a requirement of the British Purchasing Commission. The Purchasing Commission approached North American Aviation to build Curtiss P-40 fighters under license for the Royal Air Force (RAF). Rather than build an old design from another company, North American Aviation proposed the design and production of a more modern fighter. The prototype NA-73X airframe was rolled out on 9 September 1940, 102 days after the contract was signed, and first flew on 26 October.
The Mustang was designed to use the Allison V-1710 engine, which had limited high-altitude performance in its earlier variants. The aircraft was first flown operationally by the Royal Air Force (RAF) as a tactical-reconnaissance aircraft and fighter-bomber (Mustang Mk I). Replacing the Allison with a Rolls-Royce Merlin resulted in the P-51B/C (Mustang Mk III) model and transformed the aircraft's performance at altitudes above 15,000 ft (without sacrificing range), allowing it to compete with the Luftwaffe's fighters. The definitive version, the P-51D, was powered by the Packard V-1650-7, a license-built version of the two-speed, two-stage-supercharged Merlin 66, and was armed with six .50 caliber (12.7 mm) AN/M2 Browning machine guns.
From late 1943, P-51Bs and P-51Cs (supplemented by P-51Ds from mid-1944) were used by the USAAF's Eighth Air Force to escort bombers in raids over Germany, while the RAF's Second Tactical Air Force and the USAAF's Ninth Air Force used the Merlin-powered Mustangs as fighter-bombers, roles in which the Mustang helped ensure Allied air superiority in 1944. The P-51 was also used by Allied air forces in the North African, Mediterranean, Italian and Pacific theaters. During World War II, Mustang pilots claimed to have destroyed 4,950 enemy aircraft.
At the start of the Korean War, the Mustang, by then redesignated F-51, was the main fighter of the United States until jet fighters, including North American's F-86, took over this role; the Mustang then became a specialized fighter-bomber. Despite the advent of jet fighters, the Mustang remained in service with some air forces until the early 1980s. After the Korean War, Mustangs became popular civilian warbirds and air racing aircraft.
In April 1940, the British government established a purchasing commission in the United States, headed by Sir Henry Self. Self was given overall responsibility for Royal Air Force (RAF) production, research and development, and also served with Sir Wilfrid Freeman, the Air Member for Development and Production. Self also sat on the British Air Council Sub-committee on Supply (or "Supply Committee") and one of his tasks was to organize the manufacturing and supply of American fighter aircraft for the RAF. At the time, the choice was very limited, as no U.S. aircraft then in production or flying met European standards, with only the Curtiss P-40 Tomahawk coming close. The Curtiss-Wright plant was running at capacity, so P-40s were in short supply.
North American Aviation (NAA) was already supplying its Harvard trainer to the RAF, but was otherwise underused. NAA President "Dutch" Kindelberger approached Self to sell a new medium bomber, the North American B-25 Mitchell. Instead, Self asked if NAA could manufacture P-40s under license from Curtiss. Kindelberger said NAA could have a better aircraft with the same Allison V-1710 engine in the air sooner than establishing a production line for the P-40. The Commission stipulated armament of four .303 in (7.7 mm) machine guns (as used on the Tomahawk), a unit cost of no more than $40,000 and delivery of the first production aircraft by January 1941. In March 1940, 320 aircraft were ordered by Freeman, who had become the executive head of the Ministry of Aircraft Production (MAP) and the contract was promulgated on 24 April.
The NA-73X, which was designed by a team led by lead engineer Edgar Schmued, followed the best conventional practice of the era, but included several new features. One was a wing designed using laminar flow airfoils, which were developed co-operatively by North American Aviation and the National Advisory Committee for Aeronautics (NACA). These airfoils generated low drag at high speeds. During the development of the NA-73X, a wind tunnel test of two wings, one using NACA five-digit airfoils and the other using the new NAA/NACA 45–100 airfoils, was performed in the University of Washington Kirsten Wind Tunnel. The results of this test showed the superiority of the wing designed with the NAA/NACA 45–100 airfoils.
The other feature was a new cooling arrangement positioned aft (single ducted water and oil radiators assembly) that reduced the fuselage drag and effects on the wing. Later, after much development, they discovered that the cooling assembly could take advantage of the Meredith effect: in which heated air exited the radiator with a slight amount of jet thrust. Because NAA lacked a suitable wind tunnel to test this feature, it used the GALCIT wind tunnel at the California Institute of Technology. This led to some controversy over whether the Mustang's cooling system aerodynamics were developed by NAA's engineer Edgar Schmued or by Curtiss, although NAA had purchased the complete set of P-40 and XP-46 wind tunnel data and flight test reports for US$56,000. The NA-73X was also one of the first aircraft to have a fuselage lofted mathematically using conic sections; this resulted in smooth, low drag surfaces. To aid production, the airframe was divided into five main sections—forward, center, rear fuselage, and two wing halves—all of which were fitted with wiring and piping before being joined.
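The thrust contribution of the Meredith effect can be sketched with a one-dimensional momentum balance over the cooling duct. The symbols below are generic and the relation is a simplification for illustration, not a formula or figure from the source.

    F_{\text{net}} = \dot{m}\,(v_e - v_0)

Here \dot{m} is the mass flow of cooling air, v_0 its velocity entering the duct, and v_e its velocity at the exit. Heat added by the radiator expands the air and raises v_e; when the heat addition more than offsets duct losses, v_e exceeds v_0 and the cooling installation returns a small net thrust rather than pure drag.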
The prototype NA-73X was rolled out in September 1940, just 102 days after the order had been placed; it first flew on 26 October 1940, 149 days into the contract, an uncommonly short development period, even during the war. With test pilot Vance Breese at the controls, the prototype handled well and accommodated an impressive fuel load. The aircraft's three-section, semi-monocoque fuselage was constructed entirely of aluminum to save weight. It was armed with four .30 caliber (7.62 mm) AN/M2 Browning machine guns in the wings and two .50 caliber (12.7 mm) AN/M2 Browning machine guns mounted under the engine and firing through the propeller arc using gun-synchronizing gear.
While the United States Army Air Corps could block any sales it considered detrimental to the interests of the US, the NA-73 was considered to be a special case because it had been designed at the behest of the British. In September 1940, a further 300 NA-73s were ordered by the MAP. To ensure uninterrupted delivery, Colonel Oliver P. Echols arranged with the Anglo-French Purchasing Commission to deliver the aircraft and NAA gave two examples (41-038 and 41-039) to the USAAC for evaluation.
The Mustang was initially developed for the RAF, which was its first user. As the first Mustangs were built to British requirements, these aircraft used factory numbers and were not P-51s; the order comprised 320 NA-73s, followed by 300 NA-83s, all of which were designated North American Mustang Mark I by the RAF. The first RAF Mustangs supplied under Lend-Lease were 93 P-51s, designated Mk Ia, followed by 50 P-51As used as Mustang Mk IIs. Aircraft supplied to Britain under Lend-Lease were required for accounting purposes to be on the USAAC's books before they could be supplied to Britain. However, the British Aircraft Purchasing Commission signed its first contract for the North American NA-73 on 24 April 1940, before Lend-Lease was in effect. Thus, the initial order for the P-51 Mustang (as it was later known) was placed by the British under the "Cash and Carry" program, as required by the US Neutrality Acts of the 1930s.
After the arrival of the initial aircraft in the UK in October 1941, the first Mustang Mk Is entered service in January 1942, the first unit being 26 Squadron RAF. Due to poor high-altitude performance, the Mustangs were used by Army Co-operation Command, rather than Fighter Command, and were used for tactical reconnaissance and ground-attack duties. On 10 May 1942, Mustangs first flew over France, near Berck-sur-Mer. On 27 July 1942, 16 RAF Mustangs undertook their first long-range reconnaissance mission over Germany. During the amphibious Dieppe Raid on the French coast (19 August 1942), four British and Canadian Mustang squadrons, including 26 Squadron, saw action covering the assault on the ground. By 1943–1944, British Mustangs were used extensively to seek out V-1 flying bomb sites. The last RAF Mustang Mk I and Mustang Mk II aircraft were struck off charge in 1945.
The RAF also operated 308 P-51Bs and 636 P-51Cs, which were known in RAF service as Mustang Mk IIIs; the first units converted to the type in late 1943 and early 1944. Mustang Mk III units were operational until the end of World War II, though many units had already converted to the Mustang Mk IV (P-51D) and Mk IVa (P-51K) (828 in total, comprising 282 Mk IV and 600 Mk IVa). As all except the earliest aircraft were obtained under Lend-Lease, all Mustang aircraft still on RAF charge at the end of the war were either returned to the USAAF "on paper" or retained by the RAF for scrapping. The last RAF Mustangs were retired from service in 1947.
Prewar doctrine was based on the idea "the bomber will always get through". Despite RAF and Luftwaffe experience with daylight bombing, the USAAF still incorrectly believed in 1942 that tightly packed formations of bombers would have so much firepower that they could fend off fighters on their own. Fighter escort was a low priority but when the concept was discussed in 1941, the Lockheed P-38 Lightning was considered to be most appropriate as it had the speed and range. Another school of thought favored a heavily up-armed "gunship" conversion of a strategic bomber. A single-engined, high-speed fighter with the range of a bomber was thought to be an engineering impossibility.
The 8th Air Force started operations from Britain in August 1942. At first, because of the limited scale of operations, no conclusive evidence showed American doctrine was failing. In the 26 operations flown to the end of 1942, the loss rate had been under 2%.
In January 1943, at the Casablanca Conference, the Allies formulated the Combined Bomber Offensive (CBO) plan for "round-the-clock" bombing – USAAF daytime operations complementing the RAF nighttime raids on industrial centers. In June 1943, the Combined Chiefs of Staff issued the Pointblank Directive to destroy the Luftwaffe's capacity before the planned invasion of Europe, putting the CBO into full implementation. German daytime fighter efforts were, at that time, focused on the Eastern Front and several other distant locations. Initial efforts by the 8th met limited and unorganized resistance, but with every mission, the Luftwaffe moved more aircraft to the west and quickly improved their battle direction. In fall 1943, the 8th Air Force's heavy bombers conducted a series of deep-penetration raids into Germany, beyond the range of escort fighters. The Schweinfurt–Regensburg mission in August lost 60 B-17s of a force of 376, the 14 October attack lost 77 of a force of 291—26% of the attacking force.
For the US, the very concept of self-defending bombers was called into question, but instead of abandoning daylight raids and turning to night bombing, as the RAF suggested, other paths were chosen. At first, a bomber with more guns (the Boeing YB-40) was believed to be able to escort the bomber formations, but when the concept proved unsuccessful, thoughts turned to the Lockheed P-38 Lightning. In early 1943, the USAAF also decided that the Republic P-47 Thunderbolt and P-51B should be considered for the role of a smaller escort fighter, and in July, a report stated that the P-51B was "the most promising plane", with an endurance of 4 hours 45 minutes on the standard internal fuel of 184 gallons plus 150 gallons carried externally. In August, a P-51B was fitted with an extra internal 85-gallon tank. Although the full tank caused problems with longitudinal stability and some compromises in performance, the fuel from the fuselage tank would be burned off during the initial stages of a mission, so the tank was fitted in all Mustangs destined for VIII Fighter Command.
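For a rough sense of scale, the endurance and fuel figures quoted above imply an average consumption on the order of

    \frac{184\ \text{gal} + 150\ \text{gal}}{4.75\ \text{h}} \approx 70\ \text{US gal/h}

This back-of-the-envelope division is illustrative only; actual consumption varied widely with power setting and altitude.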
The P-51 Mustang was a solution to the need for an effective bomber escort. It used a common, reliable engine and had internal space for a larger-than-average fuel load. With external fuel tanks, it could accompany the bombers from England to Germany and back.
However, the Allison engine in the P-51A had a single-stage supercharger that caused power to drop off rapidly above 15,000 ft, making it unsuitable for combat at the altitudes where USAAF bombers planned to fly. Following the RAF's initial disappointing experience with the Mustang I (P-51A), Ronald Harker, a test pilot for Rolls-Royce, suggested fitting a Merlin 61, as fitted to the Spitfire Mk IX. The Merlin 61 had a two-speed, two-stage, intercooled supercharger, designed by Stanley Hooker of Rolls-Royce, and gave a substantial increase in horsepower over the Allison, with still more available at War Emergency Power, delivering a marked increase in top speed as well as raising the service ceiling considerably. Initial flights of what was known to Rolls-Royce as the Mustang Mk X were completed at Rolls-Royce's airfield at Hucknall in October 1942.
At the same time, the possibility of combining the P-51 airframe with the US license-built Packard version of the Merlin engine was being explored on the other side of the Atlantic. In July 1942 a contract was let for two prototypes, briefly designated XP-78 but soon to become the XP-51B. The first flight of the XP-51B took place in November 1942, but the USAAF was so interested in the possibility that an initial contract for 400 aircraft was placed three months beforehand in August. The conversion led to production of the P-51B beginning at North American's Inglewood, California, plant in June 1943, and P-51s started to become available to the 8th and 9th Air Forces in the winter of 1943–1944. During the conversion to the two-stage, supercharged Merlin engine, which was slightly heavier than the single-stage Allison and so moved the aircraft's center-of-gravity forward, North American's engineers took the opportunity to add a large additional fuselage fuel tank behind the pilot, greatly increasing the aircraft's range over that of the earlier P-51A.
By the time the "Pointblank" offensive resumed in early 1944, matters had changed. Bomber escort defenses were initially layered, using the shorter-range P-38s and P-47s to escort the bombers during the initial stages of the raid before handing over to the P-51s when they were forced to turn for home. This provided continuous coverage during the raid. The Mustang was so clearly superior to earlier US designs that the 8th Air Force began to steadily switch its fighter groups to the Mustang, first swapping arriving P-47 groups to the 9th Air Force in exchange for those that were using P-51s, then gradually converting its Thunderbolt and Lightning groups. By the end of 1944, 14 of its 15 groups flew the Mustang.
The Luftwaffe's twin-engined Messerschmitt Bf 110 heavy fighters brought up to deal with the bombers proved to be easy prey for the Mustangs, and had to be quickly withdrawn from combat. The Focke-Wulf Fw 190A, already suffering from poor high-altitude performance, was outperformed by the Mustang at the B-17's altitude, and when laden with heavy bomber-hunting weapons as a replacement for the more vulnerable twin-engined "Zerstörer" heavy fighters, it suffered heavy losses. The Messerschmitt Bf 109 had comparable performance at high altitudes, but its lightweight airframe was even more greatly affected by increases in armament. The Mustang's much lighter armament, tuned for antifighter combat, allowed it to overcome these single-engined opponents.
At the start of 1944, Major General James Doolittle, the new commander of the 8th Air Force, ordered many fighter pilots to stop flying in formation with the bombers and instead attack the "Luftwaffe" wherever it could be found. The aim was to achieve air supremacy. Mustang groups were sent far ahead of the bombers in a "fighter sweep" in order to intercept attacking German fighters.
The Luftwaffe answered with the "Gefechtsverband" ("battle formation"). This consisted of a "Sturmgruppe" of heavily armed and armored Fw 190As escorted by two "Begleitgruppen" of Messerschmitt Bf 109s, whose task was to keep the Mustangs away from the Fw 190As attacking the bombers. This strategy proved to be problematic, as the large German formation took a long time to assemble and was difficult to maneuver. It was often intercepted by the P-51 "fighter sweeps" before it could attack the bombers. However, German attacks against bombers could be effective when they did occur; the bomber-destroyer Fw 190As swept in from astern and often pressed their attacks to very close range.
While not always able to avoid contact with the escorts, the threat of mass attacks and later the "company front" (eight abreast) assaults by armored "Sturmgruppe" Fw 190As brought an urgency to attacking the Luftwaffe wherever it could be found, either in the air or on the ground. Beginning in late February 1944, 8th Air Force fighter units began systematic strafing attacks on German airfields with increasing frequency and intensity throughout the spring, with the objective of gaining air supremacy over the Normandy battlefield. In general these were conducted by units returning from escort missions but, beginning in March, many groups also were assigned airfield attacks instead of bomber support. The P-51, particularly with the advent of the K-14 Gyro gunsight and the development of "Clobber Colleges" for the training of fighter pilots in fall 1944, was a decisive element in Allied countermeasures against the "Jagdverbände".
The numerical superiority of the USAAF fighters, superb flying characteristics of the P-51, and pilot proficiency helped cripple the Luftwaffe's fighter force. As a result, the fighter threat to US, and later British, bombers was greatly diminished by July 1944. The RAF, long proponents of night bombing for protection, were able to reopen daylight bombing in 1944 as a result of the crippling of the Luftwaffe fighter arm. Reichsmarschall Hermann Göring, commander of the German Luftwaffe during the war, was quoted as saying, "When I saw Mustangs over Berlin, I knew the jig was up."
On 15 April 1944, VIII Fighter Command began "Operation Jackpot", attacks on Luftwaffe fighter airfields. As the efficacy of these missions increased, the number of fighters at the German airbases fell to the point where they were no longer considered worthwhile targets. On 21 May, targets were expanded to include railways, locomotives, and rolling stock used by the Germans to transport materiel and troops, in missions dubbed "Chattanooga". The P-51 excelled at this mission, although losses were much higher on strafing missions than in air-to-air combat, partially because the Mustang's liquid-cooled engine (particularly its coolant system) was vulnerable to small-arms fire, unlike the air-cooled R-2800 radials of its Republic P-47 Thunderbolt stablemates based in England, regularly tasked with ground-strafing missions.
Given the overwhelming Allied air superiority, the Luftwaffe put its effort into the development of aircraft of such high performance that they could operate with impunity, but which also made bomber attack much more difficult, merely from the flight velocities they achieved. Foremost among these were the Messerschmitt Me 163B point-defense rocket interceptors, which started their operations with JG 400 near the end of July 1944, and the longer-endurance Messerschmitt Me 262A jet fighter, first flying with the "Gruppe"-strength Kommando Nowotny unit by the end of September 1944. In action, the Me 163 proved to be more dangerous to the Luftwaffe than to the Allies, and was never a serious threat. The Me 262A was a serious threat, but attacks on their airfields neutralized them. The pioneering Junkers Jumo 004 axial-flow jet engines of the Me 262As needed careful nursing by their pilots, and these aircraft were particularly vulnerable during takeoff and landing. Lt. Chuck Yeager of the 357th Fighter Group was one of the first American pilots to shoot down an Me 262, which he caught during its landing approach. On 7 October 1944, Lt. Urban L. Drew of the 361st Fighter Group shot down two Me 262s that were taking off, while on the same day Lt. Col. Hubert Zemke, who had transferred to the Mustang-equipped 479th Fighter Group, shot down what he thought was a Bf 109, only to have his gun camera film reveal that it may have been an Me 262. On 25 February 1945, Mustangs of the 55th Fighter Group surprised an entire "Staffel" of Me 262As at takeoff and destroyed six jets.
The Mustang also proved useful against the V-1s launched toward London. P-51B/Cs using 150-octane fuel were fast enough to catch the V-1 and operated in concert with shorter-range aircraft such as advanced marks of the Supermarine Spitfire and Hawker Tempest.
By 8 May 1945, the 8th, 9th, and 15th Air Force's P-51 groups claimed some 4,950 aircraft shot down (about half of all USAAF claims in the European theater, the most claimed by any Allied fighter in air-to-air combat) and 4,131 destroyed on the ground. Losses were about 2,520 aircraft. The 8th Air Force's 4th Fighter Group was the top-scoring fighter group in Europe, with 1,016 enemy aircraft claimed destroyed. This included 550 claimed in aerial combat and 466 on the ground.
In air combat, the top-scoring P-51 units (both of which exclusively flew Mustangs) were the 357th Fighter Group of the 8th Air Force with 565 air-to-air combat victories and the 9th Air Force's 354th Fighter Group with 664, which made it one of the top-scoring fighter groups. The top Mustang ace was the USAAF's George Preddy, whose final tally stood at 26.83 victories (a total that includes shared half- and third-victory credits), 23 of which were scored with the P-51. Preddy was shot down and killed by friendly fire on Christmas Day 1944 during the Battle of the Bulge.
In early 1945, P-51C, D, and K variants also joined the Chinese Nationalist Air Force. These Mustangs were provided to the 3rd, 4th, and 5th Fighter Groups and used to attack Japanese targets in occupied areas of China. The P-51 became the most capable fighter in China, while the Imperial Japanese Army Air Force used the Nakajima Ki-84 "Hayate" against it.
The P-51 was a relative latecomer to the Pacific Theater, due largely to the need for the aircraft in Europe, although the P-38's twin-engined design was considered a safety advantage for long, over-water flights. The first P-51s were deployed in the Far East later in 1944, operating in close-support and escort missions, as well as tactical photo reconnaissance. As the war in Europe wound down, the P-51 became more common; eventually, with the capture of Iwo Jima, it was able to be used as a bomber escort during Boeing B-29 Superfortress missions against the Japanese homeland.
The P-51 was often mistaken for the Japanese Kawasaki Ki-61 "Hien" in both China and the Pacific because of its similar appearance.
Chief Naval Test Pilot and C.O. Captured Enemy Aircraft Flight Capt. Eric Brown, CBE, DSC, AFC, RN, tested the Mustang at RAE Farnborough in March 1944 and noted, "The Mustang was a good fighter and the best escort due to its incredible range, make no mistake about it. It was also the best American dogfighter. But the laminar-flow wing fitted to the Mustang could be a little tricky. It could not by any means out-turn a Spitfire. No way. It had a good rate-of-roll, better than the Spitfire, so I would say the plusses to the Spitfire and the Mustang just about equate. If I were in a dogfight, I'd prefer to be flying the Spitfire. The problem was I wouldn't like to be in a dogfight near Berlin, because I could never get home to Britain in a Spitfire!"
The U.S. Air Forces, Flight Test Engineering, assessed the Mustang B on 24 April 1944 thus: "The rate of climb is good and the high speed in level flight is exceptionally good at all altitudes, from sea level to 40,000 feet. The airplane is very maneuverable with good controllability at indicated speeds up to 400 MPH [sic]. The stability about all axes is good and the rate of roll is excellent; however, the radius of turn is fairly large for a fighter. The cockpit layout is excellent, but visibility is poor on the ground and only fair in level flight."
Kurt Bühligen, the third-highest scoring German fighter pilot of World War II's Western Front (with 112 confirmed victories, three against Mustangs), later stated, "We would out-turn the P-51 and the other American fighters, with the Bf 109 or the Fw 190. Their turn rate was about the same. The P-51 was faster than us, but our munitions and cannon were better." Heinz Bär said that the P-51 "was perhaps the most difficult of all Allied aircraft to meet in combat. It was fast, maneuverable, hard to see, and difficult to identify because it resembled the Me 109".
In the aftermath of World War II, the USAAF consolidated much of its wartime combat force and selected the P-51 as a "standard" piston-engined fighter, while other types, such as the P-38 and P-47, were withdrawn or given substantially reduced roles. As the more advanced (P-80 and P-84) jet fighters were introduced, the P-51 was also relegated to secondary duties.
In 1947, the newly formed USAF Strategic Air Command employed Mustangs alongside F-6 Mustangs and F-82 Twin Mustangs, due to their range capabilities. In 1948, the designation P-51 (P for pursuit) was changed to F-51 (F for fighter) and the existing F designator for photographic reconnaissance aircraft was dropped because of a new designation scheme throughout the USAF. Aircraft still in service in the USAF or Air National Guard (ANG) when the system was changed included: F-51B, F-51D, F-51K, RF-51D (formerly F-6D), RF-51K (formerly F-6K) and TRF-51D (two-seat trainer conversions of F-6Ds). They remained in service from 1946 through 1951. By 1950, although Mustangs continued in service with the USAF after the war, the majority of the USAF's Mustangs had become surplus to requirements and placed in storage, while some were transferred to the Air Force Reserve and the ANG.
From the start of the Korean War, the Mustang once again proved useful. A "substantial number" of stored or in-service F-51Ds were shipped, via aircraft carriers, to the combat zone, and were used by the USAF, the South African Air Force, and the Republic of Korea Air Force (ROKAF). The F-51 was used for ground attack, fitted with rockets and bombs, and for photo reconnaissance, rather than as an interceptor or "pure" fighter. After the first North Korean invasion, USAF units were forced to fly from bases in Japan, and the F-51Ds, with their long range and endurance, could attack targets in Korea that short-ranged F-80 jets could not. Because of the vulnerable liquid cooling system, however, the F-51s sustained heavy losses to ground fire. Due to its lighter structure and a shortage of spare parts, the newer, faster F-51H was not used in Korea.
Mustangs continued flying with USAF and ROKAF fighter-bomber units on close support and interdiction missions in Korea until 1953, when they were largely replaced as fighter-bombers by USAF F-84s and by United States Navy (USN) Grumman F9F Panthers. Other air forces and units using the Mustang included the Royal Australian Air Force's 77 Squadron, which flew Australian-built Mustangs as part of British Commonwealth Forces Korea. The Mustangs were replaced by Gloster Meteor F8s in 1951. The South African Air Force's 2 Squadron used U.S.-built Mustangs as part of the U.S. 18th Fighter Bomber Wing and had suffered heavy losses by 1953, after which 2 Squadron converted to the F-86 Sabre.
F-51s flew in the Air Force Reserve and ANG throughout the 1950s. The last American USAF Mustang was F-51D-30-NA AF serial no. 44-74936, which was finally withdrawn from service with the West Virginia Air National Guard's 167th Fighter Interceptor Squadron in January 1957 and retired to what was then called the Air Force Central Museum, although it was briefly reactivated to fly at the 50th anniversary of the Air Force Aerial Firepower Demonstration at the Air Proving Ground, Eglin AFB, Florida, on 6 May 1957. This aircraft, painted as P-51D-15-NA serial no. 44-15174, is on display at the National Museum of the United States Air Force, Wright-Patterson AFB, in Dayton, Ohio.
The final withdrawal of the Mustang from USAF service released hundreds of P-51s onto the civilian market. The rights to the Mustang design were purchased from North American by the Cavalier Aircraft Corporation, which attempted to market the surplus Mustang aircraft in the U.S. and overseas. In 1967 and again in 1972, the USAF procured batches of remanufactured Mustangs from Cavalier, most of them destined for air forces in South America and Asia that were participating in the Military Assistance Program (MAP). These aircraft were remanufactured from existing original F-51D airframes fitted with new V-1650-7 engines, a new radio, tall F-51H-type vertical tails, and a stronger wing that could carry six machine guns and a total of eight underwing hardpoints. Two bombs and six rockets could be carried. They all had an original F-51D-type canopy, but carried a second seat for an observer behind the pilot. One additional Mustang was a two-seat, dual-control TF-51D (67-14866) with an enlarged canopy and only four wing guns. Although these remanufactured Mustangs were intended for sale to South American and Asian nations through the MAP, they were delivered to the USAF with full USAF markings. They were, however, allocated new serial numbers (67-14862/14866, 67-22579/22582 and 72-1526/1541).
The last U.S. military use of the F-51 was in 1968, when the U.S. Army employed a vintage F-51D (44-72990) as a chase aircraft for the Lockheed YAH-56 Cheyenne armed helicopter project. This aircraft was so successful that the Army ordered two F-51Ds from Cavalier in 1968 for use at Fort Rucker as chase planes. They were assigned the serials 68-15795 and 68-15796. These F-51s had wingtip fuel tanks and were unarmed. Following the end of the Cheyenne program, these two chase aircraft were used for other projects. One of them (68-15795) was fitted with a 106 mm recoilless rifle for evaluation of the weapon's value in attacking fortified ground targets. Cavalier Mustang 68-15796 survives at the Air Force Armament Museum, Eglin AFB, Florida, displayed indoors in World War II markings.
The F-51 was adopted by many foreign air forces and continued to be an effective fighter into the mid-1980s with smaller air arms. The last Mustang downed in battle was lost during Operation Power Pack in the Dominican Republic in 1965; the last aircraft were finally retired by the Dominican Air Force in 1984.
After World War II, the P-51 Mustang served in the air arms of more than 25 nations. During the war, a Mustang cost about $51,000, while many hundreds were sold postwar for the nominal price of one dollar to signatories of the Inter-American Treaty of Reciprocal Assistance, ratified in Rio de Janeiro in 1947.
Many P-51s were sold as surplus after the war, often for as little as $1,500. Some were sold to former wartime fliers or other aficionados for personal use, while others were modified for air racing.
One of the most significant Mustangs involved in air racing was a surplus P-51C-10-NT (44-10947) purchased by film stunt pilot Paul Mantz. The aircraft was modified by creating a "wet wing", sealing the wing to create a giant fuel tank in each wing, which eliminated the need for fuel stops or drag-inducing drop tanks. This Mustang, named "Blaze of Noon" after the film of the same name, came in first in the 1946 and 1947 Bendix Air Races, second in the 1948 Bendix, and third in the 1949 Bendix; Mantz also set a U.S. coast-to-coast record in it in 1947. The Mantz Mustang was sold to Charles F. Blair Jr (future husband of Maureen O'Hara) and renamed "Excalibur III". Blair used it to set a New York-to-London record in 1951: 7 hr 48 min from takeoff at Idlewild to overhead London Airport. Later that same year, he flew from Norway to Fairbanks, Alaska, via the North Pole, proving that navigation by sun sights was possible over the magnetic north pole region. For this feat, he was awarded the Harmon Trophy, and the Air Force was forced to revise its thinking on a possible Soviet air strike from the north. This Mustang now resides in the National Air and Space Museum at the Steven F. Udvar-Hazy Center.
The most prominent firm to convert Mustangs to civilian use was Trans-Florida Aviation, later renamed Cavalier Aircraft Corporation, which produced the Cavalier Mustang. Modifications included a taller tailfin and wingtip tanks. A number of conversions included a Cavalier Mustang specialty: a "tight" second seat added in the space formerly occupied by the military radio and fuselage fuel tank.
In 1958, 78 surviving RCAF Mustangs were retired from the service's inventory and were ferried by Lynn Garrison, an RCAF pilot, from their varied storage locations to Canastota, New York, where the American buyers were based. In effect, Garrison flew each of the surviving aircraft at least once. These aircraft make up a large percentage of the Mustangs presently flying worldwide.
In the late 1960s and early 1970s, when the United States Department of Defense wished to supply aircraft to South American countries and later Indonesia for close air support and counter insurgency, it turned to Cavalier to return some of their civilian conversions back to updated military specifications.
In the 21st century, a P-51 can command a price of more than $1 million, even for only partially restored aircraft. There were 204 privately owned P-51s in the U.S. on the FAA registry in 2011, most of which are still flying, often associated with organizations such as the Commemorative Air Force (formerly the Confederate Air Force).
In May 2013, Doug Matthews set an altitude record for piston-powered aircraft in its weight class in a P-51 named "The Rebel". Matthews departed from a grass runway at Florida's Indiantown airport and flew "The Rebel" over Lake Okeechobee. He set world records for time to climb to two benchmark altitudes (18 minutes and 31 minutes respectively), achieved a new record height in level flight, and reached a new maximum altitude; the previous record had stood since 1954.
Over 20 variants of the P-51 Mustang were produced from 1940 until after the war.
Except for the small numbers assembled or produced in Australia, all Mustangs were built by North American, initially at Inglewood, California, and later also in Dallas, Texas.
Reflecting the P-51's iconic status, manufacturers within the hobby industry have created scale plastic model kits of the P-51 Mustang, with varying degrees of detail and skill levels. The aircraft has also been the subject of numerous scale flying replicas. Aside from the popular radio-controlled aircraft, several kitplane manufacturers offer ½, ⅔, and ¾-scale replicas capable of comfortably seating one (or even two) occupants and offering high performance combined with more forgiving flight characteristics. Such aircraft include the Titan T-51 Mustang, W.A.R. P-51 Mustang, Linn Mini Mustang, Jurca Gnatsum, Thunder Mustang, Stewart S-51D Mustang, Loehle 5151 Mustang and ScaleWings SW51 Mustang. | https://en.wikipedia.org/wiki?curid=24710 |
Precession
Precession is a change in the orientation of the rotational axis of a rotating body. In an appropriate reference frame it can be defined as a change in the first Euler angle, whereas the third Euler angle defines the rotation itself. In other words, if the axis of rotation of a body is itself rotating about a second axis, that body is said to be precessing about the second axis. A motion in which the second Euler angle changes is called "nutation". In physics, there are two types of precession: torque-free and torque-induced.
In astronomy, "precession" refers to any of several slow changes in an astronomical body's rotational or orbital parameters. An important example is the steady change in the orientation of the axis of rotation of the Earth, known as the precession of the equinoxes.
Torque-free precession implies that no external moment (torque) is applied to the body. In torque-free precession, the angular momentum is a constant, but the angular velocity vector changes orientation with time. What makes this possible is a time-varying moment of inertia, or more precisely, a time-varying inertia matrix. The inertia matrix is composed of the moments of inertia of a body calculated with respect to separate coordinate axes (e.g. "x", "y", "z"). If an object is asymmetric about its principal axis of rotation, the moment of inertia with respect to each coordinate direction will change with time, while preserving angular momentum. The result is that the component of the angular velocities of the body about each axis will vary inversely with each axis' moment of inertia.
The torque-free precession rate of an object with an axis of symmetry, such as a disk, spinning about an axis not aligned with that axis of symmetry can be calculated as follows:

$$\boldsymbol{\omega}_p = \frac{I_s\,\boldsymbol{\omega}_s}{I_p \cos\alpha}$$

where $\omega_p$ is the precession rate, $\omega_s$ is the spin rate about the axis of symmetry, $I_s$ is the moment of inertia about the axis of symmetry, $I_p$ is the moment of inertia about either of the other two equal perpendicular principal axes, and $\alpha$ is the angle between the moment of inertia direction and the symmetry axis.
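A minimal sketch of this relation in Python (the disk mass, radius, spin rate and tilt below are assumed example values; for a thin uniform disk the moment of inertia about the symmetry axis is twice that about a diameter):

import math
def torque_free_precession_rate(i_s, i_p, omega_s, alpha):
    # omega_p = I_s * omega_s / (I_p * cos(alpha))
    return i_s * omega_s / (i_p * math.cos(alpha))
m, r = 0.05, 0.10                    # assumed disk mass (kg) and radius (m)
i_s, i_p = 0.5 * m * r**2, 0.25 * m * r**2
omega_s = 2 * math.pi * 10.0         # spinning at 10 revolutions per second
alpha = math.radians(5)              # tilt between spin axis and symmetry axis
print(torque_free_precession_rate(i_s, i_p, omega_s, alpha))
# about 2 * omega_s for a thin disk at small tilt (~126 rad/s here)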
When an object is not perfectly solid, internal vortices will tend to damp torque-free precession, and the rotation axis will align itself with one of the inertia axes of the body.
For a generic solid object without any axis of symmetry, the evolution of the object's orientation, represented (for example) by a rotation matrix $\mathbf{R}$ that transforms internal to external coordinates, may be numerically simulated. Given the object's fixed internal moment of inertia tensor $\mathbf{I}_0$ and fixed external angular momentum $\mathbf{L}$, the instantaneous angular velocity is

$$\boldsymbol{\omega}(\mathbf{R}) = \mathbf{R}\,\mathbf{I}_0^{-1}\mathbf{R}^\mathsf{T}\,\mathbf{L}$$

Precession occurs by repeatedly recalculating $\boldsymbol{\omega}$ and applying a small rotation vector $\boldsymbol{\omega}\,dt$ for the short time $dt$; e.g.:

$$\mathbf{R}_{\text{new}} = \exp\!\left([\boldsymbol{\omega}]_\times\,dt\right)\mathbf{R}$$

for the skew-symmetric matrix $[\boldsymbol{\omega}]_\times$. The errors induced by finite time steps tend to increase the rotational kinetic energy

$$E(\mathbf{R}) = \tfrac{1}{2}\,\boldsymbol{\omega}(\mathbf{R})\cdot\mathbf{L};$$

this unphysical tendency can be counteracted by repeatedly applying a small rotation vector $\mathbf{v}$ perpendicular to both $\boldsymbol{\omega}$ and $\mathbf{L}$, noting that

$$E\!\left(\exp([\mathbf{v}]_\times)\,\mathbf{R}\right) \approx E(\mathbf{R}) + \mathbf{v}\cdot(\boldsymbol{\omega}\times\mathbf{L})$$
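A numerical sketch of this scheme in Python with NumPy (the inertia tensor, angular momentum and step size are made-up example values, and the energy-drift correction described above is omitted):

import numpy as np
def rotation(v):
    # Rotation matrix exp([v]x) via Rodrigues' formula.
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return np.eye(3)
    k = v / theta
    K = np.array([[0.0, -k[2], k[1]], [k[2], 0.0, -k[0]], [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
def simulate_precession(I0, L, R, dt, steps):
    # Evolve orientation R with fixed external L and body-frame inertia I0.
    I0_inv = np.linalg.inv(I0)
    for _ in range(steps):
        omega = R @ I0_inv @ R.T @ L   # instantaneous angular velocity
        R = rotation(omega * dt) @ R   # apply the small rotation omega*dt
    return R
I0 = np.diag([1.0, 1.2, 2.0])          # assumed principal moments of inertia
L = np.array([0.0, 0.1, 1.0])          # fixed external angular momentum
print(simulate_precession(I0, L, np.eye(3), dt=1e-3, steps=10000))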
Torque-induced precession (gyroscopic precession) is the phenomenon in which the axis of a spinning object (e.g., a gyroscope) describes a cone in space when an external torque is applied to it. The phenomenon is commonly seen in a spinning toy top, but all rotating objects can undergo precession. If the speed of the rotation and the magnitude of the external torque are constant, the spin axis will move at right angles to the direction that would intuitively result from the external torque. In the case of a toy top, its weight is acting downwards from its center of mass and the normal force (reaction) of the ground is pushing up on it at the point of contact with the support. These two opposite forces produce a torque which causes the top to precess.
The device depicted in the accompanying figure is gimbal mounted. From inside to outside there are three axes of rotation: the hub of the wheel, the gimbal axis, and the vertical pivot.
To distinguish between the two horizontal axes, rotation around the wheel hub will be called "spinning", and rotation around the gimbal axis will be called "pitching". Rotation around the vertical pivot axis is called "rotation".
First, imagine that the entire device is rotating around the (vertical) pivot axis. Then, spinning of the wheel (around the wheel hub) is added. Imagine the gimbal axis to be locked, so that the wheel cannot pitch. The gimbal axis has sensors that measure whether there is a torque around the gimbal axis.
In the picture, a section of the wheel has been labelled (call it P1). At the depicted moment in time, section P1 is at the perimeter of the rotating motion around the (vertical) pivot axis. Section P1, therefore, has a large angular velocity with respect to the rotation around the pivot axis, and as P1 is forced closer to the pivot axis of the rotation (by the wheel spinning further), the Coriolis effect, with respect to the vertical pivot axis, tends to move it in the direction of the top-left arrow in the diagram (shown at 45°), in the direction of rotation around the pivot axis. A second section (P2) of the wheel is moving away from the pivot axis, and so a force (again, a Coriolis force) acts in the same direction as in the case of P1. Note that both arrows point in the same direction.
The same reasoning applies for the bottom half of the wheel, but there the arrows point in the opposite direction to that of the top arrows. Combined over the entire wheel, there is a torque around the gimbal axis when some spinning is added to rotation around a vertical axis.
It is important to note that the torque around the gimbal axis arises without any delay; the response is instantaneous.
In the discussion above, the setup was kept unchanging by preventing pitching around the gimbal axis. In the case of a spinning toy top, when the spinning top starts tilting, gravity exerts a torque. However, instead of rolling over, the spinning top just pitches a little. This pitching motion reorients the spinning top with respect to the torque that is being exerted. The result is that the torque exerted by gravity – via the pitching motion – elicits gyroscopic precession (which in turn yields a counter torque against the gravity torque) rather than causing the spinning top to fall to its side.
Precession or gyroscopic considerations have an effect on bicycle performance at high speed. Precession is also the mechanism behind gyrocompasses.
Precession is the change of angular velocity and angular momentum produced by a torque. The general equation that relates the torque to the rate of change of angular momentum is:

$$\boldsymbol{\tau} = \frac{d\mathbf{L}}{dt}$$

where $\boldsymbol{\tau}$ and $\mathbf{L}$ are the torque and angular momentum vectors respectively.
Due to the way the torque vector is defined, it is perpendicular to the plane of the forces that create it. Thus it may be seen that the angular momentum vector will change perpendicular to those forces. Depending on how the forces are created, they will often rotate with the angular momentum vector, and then circular precession is created.
Under these circumstances the angular velocity of precession is given by:

$$\boldsymbol{\omega}_p = \frac{mgr}{I_s\,\boldsymbol{\omega}_s}$$

where $I_s$ is the moment of inertia, $\omega_s$ is the angular velocity of spin about the spin axis, $m$ is the mass, $g$ is the acceleration due to gravity and $r$ is the perpendicular distance of the spin axis from the axis of precession. The torque vector originates at the center of mass. Using $\omega_s = 2\pi/T_s$, we find that the period of precession is given by:

$$T_p = \frac{4\pi^2 I_s}{m g r\, T_s}$$

where $I_s$ is the moment of inertia, $T_s$ is the period of spin about the spin axis, and $mgr$ is the torque. In general, the problem is more complicated than this, however.
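A worked example in Python (the toy-top dimensions are assumed for illustration):

import math
def precession_period(i_s, t_s, m, g, r):
    # T_p = 4 * pi^2 * I_s / (m * g * r * T_s)
    return 4 * math.pi**2 * i_s / (m * g * r * t_s)
m, radius, r, g = 0.1, 0.03, 0.04, 9.81  # 0.1 kg disk, 3 cm radius, CM 4 cm above the tip
i_s = 0.5 * m * radius**2                # thin-disk moment of inertia about the spin axis
t_s = 1.0 / 20.0                         # spin period for 20 revolutions per second
print(precession_period(i_s, t_s, m, g, r))   # about 0.9 s per precession cycle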
There is an easy way to understand why gyroscopic precession occurs without using any mathematics. The behavior of a spinning object simply obeys laws of inertia by resisting any change in direction. A spinning object possesses a property known as rigidity in space, meaning the spin axis resists any change in orientation. It is the inertia of matter comprising the object as it resists any change in direction that provides this property. Of course, the direction this matter travels constantly changes as the object spins, but any further change in direction is resisted. If a force is applied to the surface of a spinning disc, for example, matter experiences no change in direction at the place the force was applied (or 180 degrees from that place). But 90 degrees before and 90 degrees after that place, matter is forced to change direction. This causes the object to behave as if the force was applied at those places instead. When a force is applied to anything, the object exerts an equal force back but in the opposite direction. Since no actual force was applied 90 degrees before or after, nothing prevents the reaction from taking place, and the object causes itself to move in response. A good way to visualize why this happens is to imagine the spinning object to be a large hollow doughnut filled with water, as described in the book "Thinking Physics" by Lewis Epstein. The doughnut is held still while water circulates inside it. As the force is applied, the water inside is caused to change direction 90 degrees before and after that point. The water then exerts its own force against the inner wall of the doughnut and causes the doughnut to rotate as if the force was applied 90 degrees ahead in the direction of rotation. Epstein exaggerates the vertical and horizontal motion of the water by changing the shape of the doughnut from round to square with rounded corners.
Now imagine the object to be a spinning bicycle wheel, held at both ends of its axle in the hands of a subject. The wheel is spinning clockwise as seen from a viewer to the subject's right. Clock positions on the wheel are given relative to this viewer. As the wheel spins, the molecules comprising it are traveling exactly horizontally and to the right the instant they pass the 12-o'clock position. They then travel vertically downward the instant they pass 3 o'clock, horizontally to the left at 6 o'clock, vertically upward at 9 o'clock and horizontally to the right again at 12 o'clock. Between these positions, each molecule travels components of these directions. Now imagine the viewer applying a force to the rim of the wheel at 12 o'clock. For this example's sake, imagine the wheel tilting over when this force is applied; it tilts to the left as seen from the subject holding it at its axle. As the wheel tilts to its new position, molecules at 12 o'clock (where the force was applied) as well as those at 6 o'clock still travel horizontally; their direction did not change as the wheel was tilting. Nor is their direction different after the wheel settles in its new position; they still move horizontally the instant they pass 12 and 6 o'clock. But molecules passing 3 and 9 o'clock were forced to change direction: those at 3 o'clock were forced to change from moving straight downward to downward and to the right as viewed from the subject holding the wheel, and molecules passing 9 o'clock were forced to change from moving straight upward to upward and to the left. This change in direction is resisted by the inertia of those molecules, and when they experience it, they exert an equal and opposite force in response at those locations: 3 and 9 o'clock. At 3 o'clock, where they were forced to change from moving straight down to downward and to the right, they exert their own equal and opposite reactive force to the left. At 9 o'clock, they exert their own reactive force to the right, as viewed from the subject holding the wheel. This makes the wheel as a whole react by momentarily rotating counterclockwise as viewed from directly above. Thus, as the force was applied at 12 o'clock, the wheel behaved as if that force was applied at 3 o'clock, which is 90 degrees ahead in the direction of spin. Equivalently, it behaved as if a force from the opposite direction was applied at 9 o'clock, 90 degrees prior to the direction of spin.
In summary, when you apply a force to a spinning object to change the direction of its spin axis, you are not changing the direction of the matter comprising the object at the place you applied the force (nor at 180 degrees from it); matter experiences zero change in direction at those places. Matter experiences the maximum change in direction 90 degrees before and 90 degrees beyond that place, and lesser amounts closer to it. The equal and opposite reaction that occurs 90 degrees before and after then causes the object to behave as it does. This principle is demonstrated in helicopters: helicopter controls are rigged so that inputs to them are transmitted to the rotor blades at points 90 degrees prior to and 90 degrees beyond the point at which the change in aircraft attitude is desired. The effect is dramatically felt on motorcycles: a motorcycle will suddenly lean and turn in the direction opposite to that in which the handlebars are turned.
Gyro precession causes another phenomenon for spinning objects such as the bicycle wheel in this scenario. If the subject holding the wheel removes a hand from one end of its axle, the wheel will not topple over, but will remain upright, supported at just the other end. However, it will immediately take on an additional motion; it will begin to rotate about a vertical axis, pivoting at the point of support as it continues spinning. If you allowed the wheel to continue rotating, you would have to turn your body in the same direction as the wheel rotated. If the wheel was not spinning, it would obviously topple over and fall when one hand is removed. The initial action of the wheel beginning to topple over is equivalent to applying a force to it at 12 o'clock in the direction toward the unsupported side (or a force at 6 o’clock toward the supported side). When the wheel is spinning, the sudden lack of support at one end of its axle is equivalent to this same force. So, instead of toppling over, the wheel behaves as if a continuous force is being applied to it at 3 or 9 o’clock, depending on the direction of spin and which hand was removed. This causes the wheel to begin pivoting at the one supported end of its axle while remaining upright. Although it pivots at that point, it does so only because of the fact that it is supported there; the actual axis of precessional rotation is located vertically through the wheel, passing through its center of mass. Also, this explanation does not account for the effect of variation in the speed of the spinning object; it only illustrates how the spin axis behaves due to precession. More correctly, the object behaves according to the balance of all forces based on the magnitude of the applied force, mass and rotational speed of the object. Once it is visualized why the wheel remains upright and rotates, it can easily be seen why the axis of a spinning top slowly rotates while the top spins as shown in the illustration on this page. A top behaves exactly like the bicycle wheel due to the force of gravity pulling downward. The point of contact with the surface it spins on is equivalent to the end of the axle the wheel is supported at. As the top's spin slows, the reactive force that keeps it upright due to inertia is overcome by gravity. Once the reason for gyro precession is visualized, the mathematical formulas start to make sense.
The special and general theories of relativity give three types of corrections to the Newtonian precession of a gyroscope near a large mass such as Earth, as described above. They are the Thomas precession, the de Sitter (geodetic) precession, and the Lense–Thirring (frame-dragging) precession.
In astronomy, precession refers to any of several gravity-induced, slow and continuous changes in an astronomical body's rotational axis or orbital path. Precession of the equinoxes, perihelion precession, changes in the tilt of Earth's axis to its orbit, and the eccentricity of its orbit over tens of thousands of years are all important parts of the astronomical theory of ice ages. "(See Milankovitch cycles.)"
Axial precession is the movement of the rotational axis of an astronomical body, whereby the axis slowly traces out a cone. In the case of Earth, this type of precession is also known as the "precession of the equinoxes", "lunisolar precession", or "precession of the equator". Earth goes through one such complete precessional cycle in a period of approximately 26,000 years or 1° every 72 years, during which the positions of stars will slowly change in both equatorial coordinates and ecliptic longitude. Over this cycle, Earth's north axial pole moves from where it is now, within 1° of Polaris, in a circle around the ecliptic pole, with an angular radius of about 23.5°.
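A quick arithmetic check of these two figures (a one-line Python sketch):

cycle_years = 26000
print(cycle_years / 360)   # about 72 years for the axis to drift one degree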
The ancient Greek astronomer Hipparchus (c. 190–120 BC) is generally accepted to be the earliest known astronomer to recognize and assess the precession of the equinoxes at about 1° per century (which is not far from the actual value for antiquity, 1.38°), although there is some minor dispute about whether he was indeed the first. In ancient China, the Jin-dynasty scholar-official Yu Xi (fl. 307-345 AD) made a similar discovery centuries later, noting that the position of the Sun during the winter solstice had drifted roughly one degree over the course of fifty years relative to the position of the stars. The precession of Earth's axis was later explained by Newtonian physics. Being an oblate spheroid, Earth has a non-spherical shape, bulging outward at the equator. The gravitational tidal forces of the Moon and Sun apply torque to the equator, attempting to pull the equatorial bulge into the plane of the ecliptic, but instead causing it to precess. The torque exerted by the planets, particularly Jupiter, also plays a role.
The orbits of planets around the Sun do not really follow an identical ellipse each time, but actually trace out a flower-petal shape because the major axis of each planet's elliptical orbit also precesses within its orbital plane, partly in response to perturbations in the form of the changing gravitational forces exerted by other planets. This is called perihelion precession or apsidal precession.
In the accompanying image, Earth's apsidal precession is illustrated. As the Earth travels around the Sun, its elliptical orbit rotates gradually over time. The eccentricity of its ellipse and the precession rate of its orbit are exaggerated for visualization. Most orbits in the Solar System have a much smaller eccentricity and precess at a much slower rate, making them nearly circular and nearly stationary.
Discrepancies between the observed perihelion precession rate of the planet Mercury and that predicted by classical mechanics were prominent among the forms of experimental evidence leading to the acceptance of Einstein's Theory of Relativity (in particular, his General Theory of Relativity), which accurately predicted the anomalies. Deviating from Newton's law, Einstein's theory of gravitation predicts an extra force term falling off as the inverse fourth power of the distance, which accurately gives the observed excess turning rate of 43″ every 100 years.
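This excess can be reproduced with the standard general-relativistic formula for perihelion advance per orbit, $\varepsilon = 24\pi^3 a^2 / (T^2 c^2 (1 - e^2))$; a Python sketch using Mercury's orbital elements from standard references:

import math
a = 5.791e10          # semi-major axis, m
T = 87.969 * 86400.0  # orbital period, s
e = 0.2056            # orbital eccentricity
c = 2.998e8           # speed of light, m/s
eps = 24 * math.pi**3 * a**2 / (T**2 * c**2 * (1 - e**2))  # radians per orbit
orbits_per_century = 100 * 365.25 / 87.969
print(math.degrees(eps) * 3600 * orbits_per_century)       # ~43 arcseconds per century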
The gravitational forces of the Sun and the Moon induce precession in the terrestrial orbit. This precession is the major cause of the climate oscillation on Earth with a period of 19,000 to 23,000 years. It follows that changes in Earth's orbital parameters (e.g., orbital inclination, the angle between Earth's rotation axis and its plane of orbit) are important for the study of Earth's climate, in particular for the study of past ice ages.
Orbital nodes also precess over time. | https://en.wikipedia.org/wiki?curid=24714 |
Punjab
Punjab, also spelled and romanised as Panjāb, is a geopolitical, cultural, and historical region in South Asia, specifically in the northern part of the Indian subcontinent, comprising areas of eastern Pakistan and northern India. The boundaries of the region are ill-defined and rest on historical accounts.
The geographical definition of the term "Punjab" has changed over time. In the 16th century Mughal Empire it referred to a relatively smaller area between the Indus and the Sutlej rivers. In British India, until the Partition of Punjab in 1947, the Punjab Province encompassed the present-day Indian states and union territories of Punjab, Haryana, Himachal Pradesh, Chandigarh and Delhi and the Pakistani provinces of Punjab and Islamabad Capital Territory. It bordered the Balochistan and Khyber-Pakhtunkhwa regions to the west, Kashmir to the north, the Hindi Belt to the east, and Rajasthan and Sindh to the south.
The people of the Punjab today are called Punjabis, and their principal language is Punjabi. The main religion of the Pakistani Punjab region is Islam. The main religions of the Indian Punjab region are Sikhism and Hinduism. Other religious groups are Christianity, Jainism, Zoroastrianism, Buddhism, and Ravidassia. The Punjab region was the cradle of the Indus Valley Civilisation. The region saw numerous migrations of the Indo-Aryan peoples. The land was later contested by the Persians, Indo-Greeks, Indo-Scythians, Kushans, Macedonians, Ghaznavids, Turkic peoples, Mongols, Timurids, Mughals, Marathas, Arabs, Pashtuns, British and others. Historic foreign invasions mainly targeted the most productive central region of the Punjab, known as the Majha region, which is also the bedrock of Punjabi culture and traditions. The Punjab region is often referred to as the breadbasket of both India and Pakistan.
The region was originally called Sapta Sindhu, the Vedic land of the seven rivers flowing into the ocean.
The origin of the word Punjab can probably be traced to the Sanskrit "panca-nada", which literally means "five rivers" and is used as the name of a region in the "Mahabharata". The later name for the region, "Punjab" (Persian: پنجآب), is a compound of two Persian words: پنج "panj", meaning "five", and آب "âb", meaning "water". It was introduced to the region by the Turko-Persian conquerors of India and more formally popularised during the Mughal Empire. Punjab thus means "The Land of Five Waters", referring to the rivers Jhelum, Chenab, Ravi, Sutlej, and Beas. All are tributaries of the Indus River, the Sutlej being the largest.
The ancient Greeks referred to the region as "Pentapotamía" (), which has the same etymology as the original Persian word.
In the 16th century, during the reign of the Mughal emperor Akbar, the term "Punjab" was synonymous with the Lahore province. It covered a relatively smaller area lying between the Indus and the Sutlej rivers.
The 19th century definition of the Punjab region focuses on the collapse of the Sikh Empire and the creation of the British Punjab province between 1846 and 1849. According to this definition, the Punjab region incorporates, in Pakistan, Azad Kashmir including Bhimber and Mirpur, and parts of Khyber Pakhtunkhwa (especially Peshawar, known in the Punjab region as Pishore). In India the wider definition includes parts of Delhi and the Jammu Division.
Using the older definition, the Punjab region covers a large territory that can be divided into five natural areas.
The formation of the Himalayan Range of mountains to the east and north-east of the Punjab is the result of a collision between the north-moving Indo-Australian Plate and the Eurasian Plate. The plates are still moving together, and the Himalayas continue to rise each year.
The upper regions are snow-covered the whole year. Lower ranges of hills run parallel to the mountains. The Lower Himalayan Range runs from north of Rawalpindi through Jammu and Kashmir, Himachal Pradesh and further south. The mountains are relatively young, and are eroding rapidly. The Indus and the five rivers of the Punjab have their sources in the mountain range and carry loam, minerals and silt down to the rich alluvial plains, which consequently are very fertile.
According to the older definition, the region's major cities include Jammu and Peshawar, and it also takes in parts of Delhi.
The 1947 definition defines the Punjab region with reference to the dissolution of British India whereby the then British Punjab Province was partitioned between India and Pakistan. In Pakistan, the region now includes the Punjab province and Islamabad Capital Territory. In India, it includes the Punjab state, Chandigarh, Haryana, and Himachal Pradesh.
Using the 1947 definition, the Punjab borders the Balochistan and Pashtunistan regions to the west, Kashmir to the north, the Hindi Belt to the east, and Rajasthan and Sindh to the south. Accordingly, the Punjab region is very diverse and stretches from the hills of the Kangra Valley to the plains and to the Cholistan Desert.
Using the 1947 definition of the Punjab region, some of the major cities of the area include Lahore, Faisalabad, Ludhiana and Amritsar.
Another definition of the Punjab region adds to the definitions cited above and includes parts of Rajasthan on linguistic lines and takes into consideration the location of the Punjab rivers in ancient times. In particular, the Sri Ganganagar and Hanumangarh districts are included in the Punjab region.
The climate is a factor contributing to the economy of the Punjab. It is not uniform over the whole region, with the sections adjacent to the Himalayas receiving heavier rainfall than those at a distance.
There are three main seasons and two transitional periods. During the hot season from mid-April to the end of June, temperatures reach their annual peak. The monsoon season, from July to September, is a period of heavy rainfall, providing water for crops in addition to the supply from canals and irrigation systems. The transitional period after the monsoon is cool and mild, leading to the winter season, when temperatures fall to their lowest at night and by day. During the transitional period from winter to the hot season, sudden hailstorms and heavy showers may occur, causing damage to crops.
The Punjab region of India and Pakistan has a historical and cultural link to Indo-Aryan peoples as well as partially to various indigenous communities. As a result of several invasions from Central Asia and the Middle East, many ethnic groups and religions make up the cultural heritage of the Punjab.
In prehistoric times, one of the earliest known cultures of South Asia, the Indus Valley civilisation, was located in the region.
The epic battles of the "Mahabharata" are described as being fought in what is now the State of Haryana and historic Punjab. The Gandharas, Kambojas, Trigartas, Andhra, Pauravas, Bahlikas (Bactrian settlers of the Punjab), Yaudheyas and others sided with the Kauravas in the great battle fought at Kurukshetra. According to Dr Fauja Singh and Dr L. M. Joshi: "There is no doubt that the Kambojas, Daradas, Kaikayas, Andhra, Pauravas, Yaudheyas, Malavas, Saindhavas and Kurus had jointly contributed to the heroic tradition and composite culture of ancient Punjab".
In 326 BCE, Alexander the Great invaded the Pauravas and defeated King Porus. His armies entered the region via the Hindu Kush in northwest Pakistan and his rule extended up to the city of Sagala (present-day Sialkot in northeast Pakistan). In 305 BCE the area was ruled by the Maurya Empire. In a long line of succeeding rulers of the area, Chandragupta Maurya and Ashoka stand out as the most renowned. The Maurya presence in the area was then consolidated in the Indo-Greek Kingdom in 180 BCE. Menander I Soter "The Saviour" (known as Milinda in Indian sources) is the most renowned leader of the era; he conquered the Punjab and made Sagala the capital of his Empire. Menander carved out a Greek kingdom in the Punjab and ruled the region till his death in 130 BCE. The neighbouring Seleucid Empire's rule came to an end around 12 BCE, after several invasions by the Yuezhi and the Scythian people.
In 711–713 CE, the 18-year-old Arab general Muhammad bin Qasim of Taif, a city in what is now Saudi Arabia, came by way of the Arabian Sea with Arab troops to defeat Raja Dahir. Bin Qasim then led his troops to conquer the Sindh and Punjab regions for the Islamic Umayyad Caliphate, making him the first to bring Islam to the region.
During the establishment and consolidation of the Muslim Turkic Mughal Empire prosperity, growth, and relative peace were established, particularly under the reign of Jahangir. Muslim empires ruled the Punjab for approximately 1,000 years. The period was also notable for the emergence of Guru Nanak (1469–1539), the founder of Sikhism.
The Afghan forces of the Durrani Empire (also known as the Afghan Empire), under the command of Ahmad Shah Durrani, entered Punjab in 1749 and captured the province, with Lahore governed by Pashtuns, along with the Kashmir region. In 1758, Punjab came under the rule of the Marathas, who captured the region by defeating the Afghan forces of Ahmad Shah Abdali. Following the Third Battle of Panipat against the Marathas, the Durranis reconsolidated their power and dominion over the Punjab region and the Kashmir Valley. Abdali's Indian invasion weakened the Maratha influence. After the death of Ahmad Shah, the Punjab was freed from Afghan rule by the Sikhs for a brief period between 1773 and 1818. At the time of the formation of the Dal Khalsa in 1748 at Amritsar, the Punjab had been divided into 36 areas and 12 separate Sikh principalities, called misls. From this point onward, the beginnings of a Punjabi Sikh Empire emerged. Of the 36 areas, 22 were united by Maharaja Ranjit Singh. The other 14 accepted British sovereignty. After Ranjit Singh's death, assassinations and internal divisions severely weakened the empire. Six years later the British East India Company was given an excuse to declare war, and in 1849, after two Anglo-Sikh wars, the Punjab was annexed by the British.
In the Indian Rebellion of 1857 the Sikh rulers backed the East India Company, providing troops and support, but in Jhelum 35 British soldiers of the HM XXIV regiment were killed by the local resistance, and in Ludhiana a rebellion was crushed with the assistance of the Punjab chiefs of Nabha and Malerkotla.
The British Raj had political, cultural, philosophical, and literary consequences in the Punjab, including the establishment of a new system of education. During the independence movement, many Punjabis played a significant role, including Madan Lal Dhingra, Sukhdev Thapar, Ajit Singh Sandhu, Bhagat Singh, Udham Singh, Kartar Singh Sarabha, Bhai Parmanand, Chaudhary Rehmat Ali, and Lala Lajpat Rai.
At the time of partition in 1947, the province was split into East and West Punjab. East Punjab (48%) became part of India, while West Punjab (52%) became part of Pakistan. The Punjab bore the brunt of the civil unrest following the end of the British Raj, with casualties estimated to be in the millions.
The major language is Punjabi, (ਪੰਜਾਬੀ / پنجابی) written in India with the Gurmukhi script, and in Pakistan using the Shahmukhi script. It has official status and is widely used in education and administration in Indian Punjab, whereas in Pakistani Punjab these roles are instead played by Urdu. In the western half of the Pakistani province, the major native languages are Saraiki, Hindko and Pothwari, all of which are closely related to Punjabi.
The vast majority of Pakistani Punjabis are Sunni Muslim by faith, but the region also includes large minority faiths, mostly Shia Muslims, Ahmadis and Christians.
Sikhism, founded by Guru Nanak is the main religion practised in the post-1966 Indian Punjab state. About 57.7% of the population of Punjab state is Sikh, 38.5% is Hindu, and the rest are Muslims, Christians, and Jains. Punjab state contains the holy Sikh cities of Amritsar, Anandpur Sahib, Tarn Taran Sahib, Fatehgarh Sahib and Chamkaur Sahib.
The Punjab was home to several Sufi saints, and Sufism is well established in the region. Also, Kirpal Singh revered the Sikh Gurus as saints.
Punjabis celebrate a range of festivals reflecting culture, season and religion, broadly grouped into Sikh and Hindu festivals, Islamic festivals, and others.
Traditional Punjabi clothing differs depending on the region.
The historical region of Punjab is considered to be one of the most fertile regions on Earth. Both east and west Punjab produce a relatively high proportion of India and Pakistan's food output respectively.
The region has been used for extensive wheat farming. In addition, rice, cotton, sugarcane, fruit, and vegetables are also grown.
The agricultural output of the Punjab region in Pakistan contributes significantly to Pakistan's GDP. Both Indian and Pakistani Punjab are considered to have the best infrastructure of their respective countries. Indian Punjab has been estimated to be the second richest state in India. Pakistani Punjab produces 68% of Pakistan's food grain production. Its share of Pakistan's GDP has historically ranged from 51.8% to 54.7%.
Called "The Granary of India" or "The Bread Basket of India", Indian Punjab produces 1% of the world's rice, 2% of its wheat, and 2% of its cotton. In 2001, it was recorded that farmers made up 39% of Indian Punjab's workforce.
In addition, Punjab is contributing to the economy through rising employment of Punjabi youth in the private sector. Government schemes such as 'Ghar Ghar Rozgar and Karobar Mission' have enhanced employability in the private sector. So far, 32,420 youths have been placed in different jobs and 12,114 have been skill-trained.
Ring system
A ring system is a disc or ring orbiting an astronomical object that is composed of solid material such as dust and moonlets, and is a common component of satellite systems around giant planets. A ring system around a planet is also known as a planetary ring system.
The most prominent and most famous planetary rings in the Solar System are those around Saturn, but the other three giant planets (Jupiter, Uranus, and Neptune) also have ring systems. Recent evidence suggests that ring systems may also be found around other types of astronomical objects, including minor planets, moons, and brown dwarfs, as well as in the interplanetary space between planets such as Venus and Mercury.
There are three ways that thicker planetary rings (the rings around planets) have been proposed to have formed: from material of the protoplanetary disk that was within the Roche limit of the planet and thus could not coalesce to form moons, from the debris of a moon that was disrupted by a large impact, or from the debris of a moon that was disrupted by tidal stresses when it passed within the planet's Roche limit. Most rings were thought to be unstable and to dissipate over the course of tens or hundreds of millions of years, but it now appears that Saturn's rings might be quite old, dating to the early days of the Solar System.
Fainter planetary rings can form as a result of meteoroid impacts with moons orbiting the planet or, in the case of Saturn's E-ring, from ejecta of cryovolcanic material.
The composition of ring particles varies; they may be silicate or icy dust. Larger rocks and boulders may also be present, and in 2007 tidal effects from eight 'moonlets' only a few hundred meters across were detected within Saturn's rings. The maximum size of a ring particle is determined by the specific strength of the material it is made of, its density, and the tidal force at its altitude. The tidal force is proportional to the average density inside the radius of the ring, or to the mass of the planet divided by the radius of the ring cubed. It is also inversely proportional to the square of the orbital period of the ring.
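The proportionalities in the last two sentences can be checked with Kepler's third law; a Python sketch with rough values for Saturn's B ring (assumed for illustration):

import math
G = 6.674e-11  # gravitational constant, SI units
def tidal_parameter(m_planet, r_ring):
    # Tidal force scale ~ M / r^3, proportional to the mean density inside r.
    return m_planet / r_ring**3
def orbital_period(m_planet, r_ring):
    # Kepler's third law: T = 2*pi*sqrt(r^3 / (G*M)).
    return 2 * math.pi * math.sqrt(r_ring**3 / (G * m_planet))
M, r = 5.68e26, 1.0e8   # Saturn's mass (kg) and a B-ring radius (m), rough values
T = orbital_period(M, r)
# (M/r^3) * T^2 equals the constant 4*pi^2/G, showing the inverse-square
# relation between the tidal force scale and the orbital period.
print(tidal_parameter(M, r) * T**2, 4 * math.pi**2 / G)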
Sometimes rings will have "shepherd" moons, small moons that orbit near the inner or outer edges of rings or within gaps in the rings. The gravity of shepherd moons serves to maintain a sharply defined edge to the ring; material that drifts closer to the shepherd moon's orbit is either deflected back into the body of the ring, ejected from the system, or accreted onto the moon itself.
It is also predicted that Phobos, a moon of Mars, will break up and form into a planetary ring in about 50 million years. Its low orbit, with an orbital period that is shorter than a Martian day, is decaying due to tidal deceleration.
Jupiter's ring system was the third to be discovered, when it was first observed by the "Voyager 1" probe in 1979, and was observed more thoroughly by the "Galileo" orbiter in the 1990s. Its four main parts are a faint thick torus known as the "halo"; a thin, relatively bright main ring; and two wide, faint "gossamer rings". The system consists mostly of dust.
Saturn's rings are the most extensive ring system of any planet in the Solar System, and thus have been known to exist for quite some time. Galileo Galilei first observed them in 1610, but they were not accurately described as a disk around Saturn until Christiaan Huygens did so in 1655. With help from the NASA/ESA/ASI Cassini mission, a further understanding of ring formation and active movement was gained. The rings are not a series of tiny ringlets, as many think, but are more of a disk with varying density. They consist mostly of water ice and trace amounts of rock, and the particles range in size from micrometers to meters.
Uranus' ring system lies between the level of complexity of Saturn's vast system and the simpler systems around Jupiter and Neptune. They were discovered in 1977 by James L. Elliot, Edward W. Dunham, and Jessica Mink. In the time between then and 2005, observations by "Voyager 2" and the Hubble Space Telescope led to a total of 13 distinct rings being identified, most of which are opaque and only a few kilometers wide. They are dark and likely consist of water ice and some radiation-processed organics. The relative lack of dust is due to aerodynamic drag from the extended exosphere-corona of Uranus.
The system around Neptune consists of five principal rings that, at their densest, are comparable to the low-density regions of Saturn's rings. However, they are faint and dusty, much more similar in structure to those of Jupiter. The very dark material that makes up the rings is likely organics processed by radiation, like in the rings of Uranus. 20 to 70 percent of the rings are dust, a relatively high proportion. Hints of the rings were seen for decades prior to their conclusive discovery by "Voyager 2" in 1989.
Reports in March 2008 suggested that Saturn's moon Rhea may have its own tenuous ring system, which would make it the only moon known to have a ring system. A later study published in 2010 revealed that imaging of Rhea by the "Cassini" spacecraft was inconsistent with the predicted properties of the rings, suggesting that some other mechanism is responsible for the magnetic effects that had led to the ring hypothesis.
It had been theorized by some astronomers that Pluto might have a ring system. However, this possibility has been ruled out by "New Horizons", which would have detected any such ring system.
10199 Chariklo, a centaur, was the first minor planet discovered to have rings. It has two rings, perhaps due to a collision that caused a chain of debris to orbit it. The rings were discovered when astronomers observed Chariklo passing in front of the star UCAC4 248-108672 on June 3, 2013, from seven locations in South America. While watching, they saw two dips in the star's apparent brightness just before and after the occultation. Because this event was observed at multiple locations, the conclusion that the dips in brightness were caused by rings is unanimously regarded as the leading hypothesis. The observations revealed what is likely a narrow ring system orbiting about 1,000 times closer to Chariklo than the Moon does to Earth. In addition, astronomers suspect there could be a moon orbiting amidst the ring debris. If these rings are the leftovers of a collision, as astronomers suspect, this would give fodder to the idea that moons (such as the Moon) form through collisions of smaller bits of material. Chariklo's rings have not been officially named, but the discoverers have nicknamed them Oiapoque and Chuí, after two rivers near the northern and southern ends of Brazil.
A second centaur, 2060 Chiron, is also suspected to have a pair of rings. Based on stellar-occultation data that were initially interpreted as resulting from jets associated with Chiron's comet-like activity, the rings are proposed to be 324 (± 10) km in radius. Their changing appearance at different viewing angles can explain the long-term variation in Chiron's brightness over time.
Ring systems may form around centaurs when they are tidally disrupted in a close encounter (within 0.4 to 0.8 times the Roche limit) with a giant planet. (By definition, a centaur is a minor planet whose orbit crosses the orbit(s) of one or more giant planets.) For a differentiated body approaching a giant planet at an initial relative velocity of 3−6 km/s with an initial rotational period of 8 hours, a ring mass of 0.1%−10% of the centaur's mass is predicted. Ring formation from an undifferentiated body is less likely. The rings would be composed mostly or entirely of material from the parent body's icy mantle. After forming, the ring would spread laterally, leading to satellite formation from whatever portion of it spreads beyond the centaur's Roche limit. Satellites could also form directly from the disrupted icy mantle. This formation mechanism predicts that roughly 10% of centaurs will have experienced potentially ring-forming encounters with giant planets.
A ring around Haumea, a dwarf planet and resonant Kuiper belt member, was revealed by a stellar occultation observed on 21 January 2017. This makes it the first trans-Neptunian object found to have a ring system. The ring has a radius of about 2,287 km, a width of ≈70 km and an opacity of 0.5. The ring plane coincides with Haumea's equator and the orbit of its larger, outer moon Hi’iaka (which has a semimajor axis of ≈25,657 km). The ring is close to the 3:1 resonance with Haumea's rotation, which is located at a radius of 2,285 ± 8 km. It is well within Haumea's Roche limit, which would lie at a radius of about 4,400 km if Haumea were spherical (being nonspherical pushes the limit out farther).
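As a rough cross-check of the quoted resonance radius, one can apply the point-mass Kepler formula; a Python sketch (Haumea's mass and rotation period below are assumed literature values, and the point-mass approximation ignores Haumea's strongly nonspherical gravity, so only approximate agreement is expected):

import math
G = 6.674e-11                # gravitational constant, SI units
M_haumea = 4.006e21          # kg, assumed literature value
spin_period = 3.9155 * 3600  # s, assumed rotation period
T = 3 * spin_period          # 3:1 resonance: one particle orbit per three rotations
r = (G * M_haumea * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
print(r / 1000)              # ~2,296 km, close to the reported 2,285 km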
Because all giant planets of the Solar System have rings, the existence of exoplanets with rings is plausible. Although particles of ice, the material that is predominant in the rings of Saturn, can only exist around planets beyond the frost line, within this line rings consisting of rocky material can be stable in the long term. Such ring systems can be detected for planets observed by the transit method by additional reduction of the light of the central star if their opacity is sufficient. As of January 2015, no such observations are known.
A sequence of occultations of the star 1SWASP J140747.93-394542.6 observed in 2007 over 56 days was interpreted as a transit of a ring system belonging to a (not directly observed) substellar companion dubbed "J1407b". This ring system is attributed a radius of about 90 million km (about 200 times that of Saturn's rings); in press releases it was dubbed a "super-Saturn". However, the age of this stellar system is only about 16 million years, which suggests that this structure, if real, is more like a protoplanetary disk than a stable ring system in an evolved planetary system. The ring was observed to have a 0.0267 AU-wide gap at a radial distance of 0.4 AU. Simulations suggest that this gap is more likely the result of an embedded moon than of resonance effects of external moons.
Fomalhaut b was found to be large and unclearly defined when detected in 2008. This may be due either to a cloud of dust attracted from the dust disc of the star, or to a possible ring system. | https://en.wikipedia.org/wiki?curid=24718 |
P-code machine
In computer programming, a p-code machine, or portable code machine, is a virtual machine designed to execute p-code (the assembly language of a hypothetical CPU). The term is applied both generically to all such machines (such as the Java Virtual Machine and MATLAB precompiled code) and to specific implementations, the most famous being the p-Machine of the Pascal-P system, particularly the UCSD Pascal implementation (among whose developers the "p" in "p-code" was construed to mean "pseudo" more often than "portable", "pseudo-code" thus meaning instructions for a pseudo-machine).
Although the concept was first implemented circa 1966 (as O-code for BCPL and P code for the Euler language), the term "p-code" first appeared in the early 1970s. Two early compilers generating p-code were the Pascal-P compiler in 1973, by Kesav V. Nori, Urs Ammann, Kathleen Jensen, Hans-Heinrich Nägeli, and Christian Jacobi, and the Pascal-S compiler in 1975, by Niklaus Wirth.
Programs that have been translated to p-code can either be interpreted by a software program that emulates the behavior of the hypothetical CPU, or translated into the machine code of the CPU on which the program is to run and then executed. If there is sufficient commercial interest, a hardware implementation of the CPU specification may be built (e.g., the Pascal MicroEngine or a version of the Java processor).
Compared to direct translation into native machine code, a two-stage approach involving translation into p-code and execution by an interpreter or just-in-time compiler offers several advantages: the compiled program is compact and machine-independent, and supporting a new target machine requires only a new interpreter or back end rather than a whole new compiler.
One of the significant disadvantages of p-code is execution speed, which can sometimes be remedied through the use of a JIT compiler. P-code is often also easier to reverse-engineer than native code.
In the early 1980s, at least two operating systems achieved machine independence through extensive use of p-code. The Business Operating System (BOS) was a cross-platform operating system designed to run p-code programs exclusively. The UCSD p-System, developed at the University of California, San Diego, was a self-compiling and self-hosted operating system based on p-code optimized for generation by the Pascal programming language.
In the 1990s, translation into p-code became a popular strategy for implementations of languages such as Python, Microsoft P-Code in Visual Basic and Java bytecode in Java.
The Go programming language uses a generic, portable assembly as a form of p-code, implemented by Ken Thompson as an extension of the work on Plan 9 from Bell Labs. Unlike CLR bytecode or JVM bytecode, there is no stable specification, and the Go build tools do not emit a bytecode format to be used at a later time. The Go assembler uses the generic assembly language as an intermediate representation, and Go executables are machine-specific statically linked binaries.
Like many other p-code machines, the UCSD p-Machine is a stack machine, which means that most instructions take their operands from the stack, and place results back on the stack. Thus, the "add" instruction replaces the two topmost elements of the stack with their sum. A few instructions take an immediate argument. Like Pascal, the p-code is strongly typed, supporting boolean (b), character (c), integer (i), real (r), set (s), and pointer (a) types natively.
Some simple instructions include "adi" (replace the two topmost integers on the stack with their sum), "ldci" (load an integer constant onto the stack), and "not" (boolean negation); a sketch of such a machine follows.
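A minimal sketch in Python of how such a typed stack machine executes these instructions (the opcode selection and encoding here are illustrative, not UCSD's actual implementation):

def run(program):
    stack = []
    for op, *arg in program:
        if op == "ldci":      # load a constant integer onto the stack
            stack.append(arg[0])
        elif op == "adi":     # replace the two topmost integers with their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "not":     # boolean negation of the top of stack
            stack.append(not stack.pop())
        else:
            raise ValueError("unknown opcode " + op)
    return stack
print(run([("ldci", 2), ("ldci", 3), ("adi",)]))   # [5]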
Unlike other stack-based environments (such as Forth and the Java Virtual Machine) but very similar to a real target CPU, the p-System has only one stack shared by procedure stack frames (providing return address, etc.) and the arguments to local instructions. Three of the machine's registers point into the stack (which grows upwards): SP, the stack pointer, marking the top of the stack; MP, the mark pointer, marking the base of the active stack frame; and EP, the extreme pointer, marking the highest stack location used by the currently executing procedure.
Also present is a constant area, and, below that, the heap growing down towards the stack. The NP (the new pointer) register points to the top (lowest used address) of the heap. When EP gets greater than NP, the machine's memory is exhausted.
The fifth register, PC, points at the current instruction in the code area.
Stack frames are built upwards from MP and contain five bookkeeping cells (the function value, the static link, the dynamic link, the previous EP, and the return address), followed by the procedure's parameters and its local variables.
The procedure calling sequence works as follows: the call is introduced with
mst n
where n specifies the difference in nesting levels (remember that Pascal supports nested procedures). This instruction will "mark" the stack, i.e. reserve the first five cells of the above stack frame, and initialise the previous EP, dynamic, and static link. The caller then computes and pushes any parameters for the procedure, and then issues
cup n, p
to call a user procedure (n being the number of parameters, p the procedure's address). This will save the PC in the return address cell, and set the procedure's address as the new PC.
User procedures begin with the two instructions
ent 1, m
ent 2, n
The first sets SP to MP + m, the second sets EP to SP + n. So m essentially specifies the space reserved for locals (plus the number of parameters plus 5), and n gives the number of entries needed locally for the stack. Memory exhaustion is checked at this point.
Returning to the caller is accomplished via
ret t
with t giving the return type (i, r, c, b, a as above, and p for no return value). The return value has to be stored in the appropriate cell beforehand. On all types except p, returning will leave this value on the stack.
Instead of calling a user procedure (cup), standard procedure number q can be called with
csp q
These standard procedures are Pascal procedures like readln (csp rln), writeln (csp wln), etc. Peculiarly, eof is a p-code instruction instead.
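To make the sequence concrete, here is a minimal Pascal sketch of mst, cup, and ret acting on a single stack, following the frame layout described above. The exact order of the five mark-stack cells and the simplified static-link handling (the nesting-level argument of mst is ignored, as if caller and callee were at the same level) are assumptions for illustration.

program CallSketch;
{ Illustrative model of the p-System calling sequence. The frame layout,
  growing upwards from MP, is assumed to be: function value, static link,
  dynamic link, saved EP, return address, then parameters and locals. }
var s: array [1..200] of integer;
    sp, mp, ep, pc: integer;

procedure mst; { mark the stack: reserve the five frame cells }
begin
  s[sp + 1] := 0;  { function value, filled in by the callee }
  s[sp + 2] := mp; { static link (simplified: same nesting level) }
  s[sp + 3] := mp; { dynamic link = caller's frame }
  s[sp + 4] := ep; { saved EP }
  sp := sp + 5     { s[sp] is the return address cell, set by cup }
end;

procedure cup(nparams, addr: integer); { call user procedure }
begin
  s[sp - nparams] := pc;  { save return address below the parameters }
  mp := sp - nparams - 4; { the new frame begins at the function value }
  pc := addr
end;

procedure ret; { return, leaving the function value on the stack }
begin
  pc := s[mp + 4]; { restore return address }
  ep := s[mp + 3]; { restore saved EP }
  sp := mp;        { pop everything above the function value cell }
  mp := s[mp + 2]  { follow the dynamic link back to the caller }
end;

begin
  sp := 0; mp := 1; ep := 0; pc := 100;
  mst;        { caller marks the stack ...                }
  cup(0, 42); { ... pushes no parameters, calls address 42 }
  writeln('entered callee at pc = ', pc); { 42 }
  ret;
  writeln('returned to pc = ', pc)        { 100 }
end.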
Niklaus Wirth specified a simple p-code machine in the 1976 book "Algorithms + Data Structures = Programs". The machine had 3 registers: a program counter "p", a base register "b", and a top-of-stack register "t". There were 8 instructions: lit (push a constant), opr (arithmetic and comparison operations, and return), lod (load a variable onto the stack), sto (store the top of the stack into a variable), cal (call a procedure), int (increment the stack pointer to allocate space), jmp (unconditional jump), and jpc (jump if the top of the stack is zero, i.e. false).
This is the code for the machine, written in Pascal:
const amax = 2047;   { maximum address }
      levmax = 3;    { maximum depth of block nesting }
      cxmax = 200;   { size of the code store }
type fct = (lit, opr, lod, sto, cal, int, jmp, jpc);
     instruction = record f: fct; l: 0..levmax; a: 0..amax end;
var code: array [0..cxmax] of instruction;
procedure interpret;
begin
  { fetch-decode-execute loop over code, driven by the registers p, b and t;
    a runnable version is sketched below }
end {interpret};
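For readers who want a runnable version, the following is a compact sketch in the spirit of Wirth's interpreter rather than a verbatim reproduction: the opcode semantics follow the description above (with opr 0 = return and opr 2 = add, per the book's numbering), while the sample code sequence, the array sizes, and the final writeln are illustrative additions.

program PL0Machine;
{ A minimal, runnable sketch of Wirth's PL/0 stack machine. }
const stacksize = 500;
type fct = (lit, opr, lod, sto, cal, int, jmp, jpc);
     instruction = record f: fct; l, a: integer end;
var code: array [0..20] of instruction;
    s: array [1..stacksize] of integer; { the data store }
    p, b, t: integer;                   { program counter, base, top of stack }
    i: instruction;

function base(l: integer): integer; { follow l static links down }
var b1: integer;
begin
  b1 := b;
  while l > 0 do begin b1 := s[b1]; l := l - 1 end;
  base := b1
end;

procedure emit(n: integer; f: fct; l, a: integer);
begin code[n].f := f; code[n].l := l; code[n].a := a end;

begin
  { illustrative program: allocate the three link cells, push 2 and 3,
    add them, then jump to address 0, which by convention halts }
  emit(0, int, 0, 3);
  emit(1, lit, 0, 2);
  emit(2, lit, 0, 3);
  emit(3, opr, 0, 2); { opr 0 2 = add }
  emit(4, jmp, 0, 0);
  t := 0; b := 1; p := 0;
  s[1] := 0; s[2] := 0; s[3] := 0; { static link, dynamic link, return address }
  repeat
    i := code[p]; p := p + 1;
    case i.f of
      lit: begin t := t + 1; s[t] := i.a end;
      opr: case i.a of { only return and add are sketched here }
             0: begin t := b - 1; p := s[t + 3]; b := s[t + 2] end;
             2: begin t := t - 1; s[t] := s[t] + s[t + 1] end
           end;
      lod: begin t := t + 1; s[t] := s[base(i.l) + i.a] end;
      sto: begin s[base(i.l) + i.a] := s[t]; t := t - 1 end;
      cal: begin s[t + 1] := base(i.l); s[t + 2] := b; s[t + 3] := p;
                 b := t + 1; p := i.a
           end;
      int: t := t + i.a;
      jmp: p := i.a;
      jpc: begin if s[t] = 0 then p := i.a; t := t - 1 end
    end
  until p = 0;
  writeln('top of stack: ', s[t]) { prints 5 }
end.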
This machine was used to run Wirth's PL/0, a Pascal subset compiler used to teach compiler development. | https://en.wikipedia.org/wiki?curid=24722 |
Proton-pump inhibitor
Proton-pump inhibitors (PPIs) are members of a class of medications whose main action is a profound and prolonged reduction of stomach acid production. Within the class of medications, there is no clear evidence that one agent works better than another.
They are the most potent inhibitors of acid secretion available. This group of medications followed and largely superseded another group of medications with similar effects, but a different mode of action, called H2-receptor antagonists.
PPIs are among the most widely sold medications in the world, and the first one, omeprazole, is on the World Health Organization's List of Essential Medicines, the safest and most effective medicines needed in a health system. Cost varies significantly between different agents.
These medications are used in the treatment of many conditions, such as gastroesophageal reflux disease, peptic ulcer disease, dyspepsia, Zollinger–Ellison syndrome, and, in combination with antibiotics, eradication of "Helicobacter pylori".
Specialty professional organizations recommend that people take the lowest effective PPI dose to achieve the desired therapeutic result when used to treat gastroesophageal reflux disease long-term. In the United States, the Food and Drug Administration (FDA) has advised that no more than three 14-day treatment courses should be used in one year.
Despite their extensive use, the quality of the evidence supporting their use in some of these conditions is variable. The effectiveness of PPIs has not been demonstrated for every case. For example, although they reduce the incidence of esophageal adenocarcinoma in Barrett's esophagus, they do not change the length of the affected segment.
PPIs are often used longer than necessary. In about half of people who are hospitalized or seen at a primary care clinic there is no documented reason for their long-term use of PPIs. Some researchers believe that, given the little evidence of long-term effectiveness, the cost of the medication and the potential for harm means that clinicians should consider stopping PPIs in many people.
After four weeks, if symptoms have resolved, the PPI may be stopped in those who were using them for heartburn, gastroesophageal reflux disease, or inflammation of the esophagus if these last two were not severe. Stopping is not recommended in those with Barrett esophagus or a bleeding stomach ulcer. Stopping may be carried out by first decreasing the amount of medication taken or having the person take the medication only when symptoms are present.
In general, proton pump inhibitors are well tolerated, and the incidence of short-term adverse effects is relatively low. The range and occurrence of adverse effects are similar for all of the PPIs, though they have been reported more frequently with omeprazole. This may be due to its longer availability and, hence, clinical experience.
Common adverse effects include headache, nausea, diarrhea, abdominal pain, fatigue, and dizziness. Infrequent adverse effects include rash, itch, flatulence, constipation, anxiety, and depression. Also infrequently, PPI use may be associated with occurrence of myopathies, including the serious reaction rhabdomyolysis.
Long-term use of PPIs requires assessment of the balance of the benefits and risks of the therapy. Various adverse outcomes have been associated with long-term PPI use in several primary reports, but reviews assess the overall quality of evidence in these studies as "low" or "very low". They describe inadequate evidence to establish causal relationships between PPI therapy and many of the proposed associations, due to study design and small estimates of effect size. Benefits outweigh risks when PPIs are used appropriately, but when used inappropriately, modest risks become important. They recommend that PPIs should be used at the lowest effective dose in people with a proven indication, but discourage dose escalation and continued chronic therapy in people unresponsive to initial empiric therapy.
A three-year trial of pantoprazole, completed in 2019, did not find any significant adverse events.
Gastric acid is important for breakdown of food and release of micronutrients, and some studies have shown possibilities for interference with absorption of iron, calcium, magnesium, and vitamin B12. With regard to iron and vitamin B12, the data are weak and several confounding factors have been identified.
Low levels of magnesium can be found in people on PPI therapy and these can be reversed when they are switched to H2-receptor antagonist medications.
High dose or long-term use of PPIs carries a possible increased risk of bone fractures which was not found with short-term, low dose use; the FDA included a warning regarding this on PPI drug labels in 2010.
Some studies have shown a correlation between use of PPIs and "Clostridium difficile" infections. While the data are contradictory and controversial, the FDA had sufficient concern to include a warning about this adverse effect on the label of PPI medications. Concerns have also been raised about spontaneous bacterial peritonitis in older people taking PPIs and in people with irritable bowel syndrome taking PPIs; both types of infections arise in these populations due to underlying conditions and it is not clear if this is a class effect of PPIs. PPIs may predispose an individual to developing small intestinal bacterial overgrowth or fungal overgrowth.
Long-term use of PPIs is associated with the development of benign polyps from fundic glands (which is distinct from fundic gland polyposis); these polyps do not cause cancer and resolve when PPIs are discontinued. There is concern that use of PPIs may mask gastric cancers or other serious gastric problems.
PPI use has also been associated with the development of microscopic colitis.
There is also evidence that PPI use alters the composition of the bacterial populations inhabiting the gut. Although the mechanisms by which PPIs cause these changes are yet to be determined, they may play a role in the increased risk of bacterial infections seen with PPI use. These infections can include "Helicobacter pylori", a species that does not favour an acid environment, and colonisation can lead to an increased risk of ulcers and of gastric cancer in genetically susceptible patients.
PPI use in subjects who have received attempted "H. pylori" eradication may also be associated with an increased risk of gastric cancer. The limited validity and robustness of this finding, and the lack of an established causal link, have led to this association being questioned. It is recommended that long-term PPIs be used judiciously after considering the individual's risk–benefit profile, particularly among those with a history of "H. pylori" infection, and that further well-designed, prospective studies are needed.
Associations of PPI use and cardiovascular events have also been widely studied but clear conclusions have not been made as these relative risks are confounded by other factors. PPIs are commonly used in people with cardiovascular disease for gastric protection when aspirin is given for its antiplatelet actions. An interaction between PPIs and the metabolism of the platelet inhibitor clopidogrel is known and this drug is also often used in people with cardiac disease.
One suggested mechanism for cardiovascular effects is because PPIs bind and inhibit dimethylargininase, the enzyme that degrades asymmetric dimethylarginine (ADMA), resulting in higher ADMA levels and a decrease in bioavailable nitric oxide.
Associations have been shown between PPI use and an increased risk of pneumonia, particularly in the 30 days after starting therapy, where it was found to be 50% higher in community use. Other very weak associations of PPI use have been found, such as with chronic kidney disease and dementia. As these results were derived from observational studies, it remains uncertain whether such associations are causal relationships.
Proton pump inhibitors act by irreversibly blocking the hydrogen/potassium adenosine triphosphatase enzyme system (the H+/K+ ATPase, or, more commonly, the gastric proton pump) of the gastric parietal cells. The proton pump is the terminal stage in gastric acid secretion, being directly responsible for secreting H+ ions into the gastric lumen, making it an ideal target for inhibiting acid secretion.
Targeting the terminal step in acid production, as well as the irreversible nature of the inhibition, results in a class of medications that are significantly more effective than H2 antagonists and reduce gastric acid secretion by up to 99%.
Decreasing the acid in the stomach can aid the healing of duodenal ulcers and reduce the pain from indigestion and heartburn. However, stomach acids are needed to digest proteins, vitamin B12, calcium, and other nutrients, and too little stomach acid causes the condition hypochlorhydria.
The PPIs are given in an inactive form, which is neutrally charged (lipophilic) and readily crosses cell membranes into intracellular compartments (like the parietal cell canaliculus) with acidic environments. In an acid environment, the inactive drug is protonated and rearranges into its active form. As described above, the active form will covalently and irreversibly bind to the gastric proton pump, deactivating it.
The rate of omeprazole absorption is decreased by concomitant food intake. In addition, the absorption of lansoprazole and esomeprazole is decreased and delayed by food. It has been reported, however, that these pharmacokinetic effects have no significant impact on efficacy.
PPIs have a half-life in human blood plasma of only 60–90 minutes, but because they covalently bind to the pump, the half-life of their inhibition of gastric acid secretion lasts an estimated 24 hours. Dissociation of the inhibitory complex is probably due to the effect of the endogenous antioxidant glutathione which leads to the release of omeprazole sulfide and reactivation of the enzyme.
Medically used proton pump inhibitors include omeprazole, esomeprazole, lansoprazole, dexlansoprazole, pantoprazole, and rabeprazole.
PPIs were developed in the 1980s, with omeprazole being launched in 1988. Most of these medications are benzimidazole derivatives, related to omeprazole, but imidazopyridine derivatives such as tenatoprazole have also been developed. Potassium-competitive inhibitors such as revaprazan reversibly block the potassium-binding site of the proton pump, acting more quickly, but are not available in most countries.
In British Columbia, Canada, the cost of the PPIs varies significantly from one agent to another per dose, while all agents in the class appear more or less equally effective.
A comparative table of FDA-approved indications for PPIs is shown below. | https://en.wikipedia.org/wiki?curid=24723 |
Pan-Slavism
Pan-Slavism, a movement which crystallized in the mid-19th century, is the political ideology concerned with the advancement of integrity and unity for the Slavic-speaking peoples. Its main impact occurred in the Balkans, where non-Slavic empires had ruled the South Slavs for centuries. These were mainly the Byzantine Empire, Austria-Hungary (both as separate entities for most of the period), the Ottoman Empire, and Venice.
Extensive pan-Slavism began much like Pan-Germanism: both grew from the sense of unity and nationalism experienced within ethnic groups after the French Revolution and the consequent Napoleonic Wars against European monarchies. Like other Romantic nationalist movements, Slavic intellectuals and scholars in the developing fields of history, philology, and folklore actively fostered a passion for their shared identity and ancestry. Pan-Slavism also co-existed with the Southern Slavic drive for independence.
Commonly used symbols of the Pan-Slavic movement were the Pan-Slavic colours (blue, white and red) and the Pan-Slavic anthem, "Hey, Slavs".
The first pan-Slavists were the 16th-century Croatian writer Vinko Pribojević and the 17th-century Aleksandar Komulović, Bartol Kašić, Ivan Gundulić and Croatian Catholic missionary Juraj Križanić. Some of the earliest manifestations of Pan-Slavic thought within the Habsburg Monarchy have been attributed to Adam Franz Kollár and Pavel Jozef Šafárik. The movement began following the end of the Napoleonic Wars in 1815. In the aftermath, the leaders of Europe sought to restore the pre-war status quo. At the Congress of Vienna, Austria's representative, Prince von Metternich, felt that the threat to this status quo in Austria came from nationalists demanding independence from the empire. While the empire's subjects comprised numerous ethnic groups (such as Italians, Romanians, Hungarians, etc.), most of them were Slavs.
The First Pan-Slav congress was held in Prague, Bohemia in June, 1848, during the revolutionary movement of 1848. The Czechs had refused to send representatives to the Frankfurt Assembly feeling that Slavs had a distinct interest from the Germans. The Austroslav, František Palacký, presided over the event. Most of the delegates were Czech and Slovak. Palacký called for the co-operation of the Habsburgs and had also endorsed the Habsburg monarchy as the political formation most likely to protect the peoples of central Europe. When the Germans asked him to declare himself in favour of their desire for national unity, he replied that he would not as this would weaken the Habsburg state: “Truly, if it were not that Austria had long existed, it would be necessary, in the interest of Europe, in the interest of humanity itself, to create it.”
The Pan-Slav congress met during the revolutionary turmoil of 1848. Young inhabitants of Prague had taken to the streets and in the confrontation, a stray bullet had killed the wife of Field Marshal Alfred I, Prince of Windisch-Grätz, the commander of the Austrian forces in Prague. Enraged, Windischgrätz seized the city, disbanded the congress, and established martial law throughout Bohemia.
The first Pan-Slavic convention was held in Prague on June 2 through 16, 1848. The delegates at the Congress were specifically both anti-Austrian and anti-Russian. Still "the Right"—the moderately liberal wing of the Congress—under the leadership of František Palacký (1798–1876), a Czech historian and politician, and Pavol Jozef Šafárik (1795–1861), a Slovak philologist, historian and archaeologist, favored autonomy of the Slav lands within the framework of Austrian (Habsburg) monarchy. In contrast "the Left"—the radical wing of the Congress—under the leadership of Karel Sabina (1813–1877), a Czech writer and journalist, Josef Václav Frič, a Czech nationalist, Karol Libelt (1817–1861), a Polish writer and politician, and others, pressed for a close alliance with the revolutionary-democratic movement going on in Germany and Hungary in 1848.
A national rebirth in the Hungarian "Upper Land" (now Slovakia) awoke in a completely new light, both before and after the Slovak Uprising of 1848. The driving force of this rebirth movement were Slovak writers and politicians who called themselves Štúrovci, the followers of Ľudovít Štúr. As the Slovak nobility had been Magyarized and most Slovaks were merely farmers or priests, the movement at first failed to attract much attention. Nonetheless, the campaign bore fruit, as brotherly cooperation between the Croats and the Slovaks continued throughout the war. Most of the battles between Slovaks and Hungarians, however, did not turn out in the Slovaks' favour: they were logistically supported by the Austrians, but not sufficiently, and the shortage of manpower proved decisive as well.
During the war, the Slovak National Council brought its demands to the young Austrian emperor, Franz Joseph I, who seemed to take note of them and promised support for the Slovaks against the revolutionary radical Hungarians. However, the moment the revolution was over, the Slovak demands were forgotten. These demands had included an autonomous territory within the Austrian Empire called "Slovenský kraj", which would eventually have been led by a Serbian prince. This disregard by the Emperor convinced the Slovak and Czech elites, who declared the concept of Austroslavism dead.
Disgusted by the Emperor's policy, in 1849 Ľudovít Štúr, the person who codified the first official Slovak language, wrote a book entitled "Slavdom and the World of the Future". The book served as a manifesto in which he argued that Austroslavism was no longer the way forward. He also wrote a sentence that is still often quoted today: "Every nation has its time under God's sun, and the linden [a symbol of the Slavs] is blossoming, while the oak [a symbol of the Teutons] bloomed long ago."
He nevertheless expressed confidence in the Russian Empire, as it was the only Slavic country that was not dominated by anyone else, and indeed one of the most powerful nations in the world. He often likened the Slavs to a tree, with the "minor" Slavic nations as its branches and Russia as its trunk. The book gave full expression to his Pan-Slavic views: he argued that the land of the Slovaks should be annexed by the Tsar's empire and that its population might eventually be not only Russified but also converted to the Orthodox rite, originally spread by Cyril and Methodius during the times of Great Moravia in opposition to the Catholic missionaries from the Franks. After the Hungarian invasion of Pannonia, the Hungarians converted to Catholicism, which in turn influenced the Slavs living in Pannonia and in the land south of the Lech.
However, the Russian Empire often claimed Pan-Slavism as a justification for its aggressive moves in the Balkan Peninsula against the Ottoman Empire, which had conquered and held Slavic lands for centuries. This eventually led to the Balkan campaigns of the Russian Empire, which resulted in the entire Balkan Peninsula being liberated from the Ottoman Empire, with the help and on the initiative of the Russian Empire. Pan-Slavism has some supporters among Czech and Slovak politicians, especially among nationalistic and far-right ones, such as the People's Party - Our Slovakia.
During World War I, captured Slavic soldiers were asked to fight against "oppression in the Austrian Empire", and some did (see Czechoslovak Legions).
The creation of an independent Czechoslovakia made the old ideals of Pan-Slavism anachronistic. Relations with other Slavic states varied, at times becoming so tense that they escalated into armed conflict, as with the Second Polish Republic, where border clashes over Silesia led to the brief Polish–Czechoslovak War. Tensions appeared even between Czechs and Slovaks before and during the Second World War.
Pan-Slavism in the south would often turn to Russia for support. The Southern Slavic movement advocated the independence of the Slavic peoples in the Austro-Hungarian Empire, Republic of Venice and the Ottoman Empire. Some Serbian intellectuals sought to unite all of the Southern, Balkan Slavs, whether Catholic (Croats, Slovenes), or Orthodox (Serbs, Bulgarians) as a "Southern-Slavic nation of three faiths".
Austria feared that Pan-Slavists would endanger the empire. In Austria-Hungary Southern Slavs were distributed among several entities: Slovenes in the Austrian part (Carniola, Styria, Carinthia, Gorizia and Gradisca, Trieste, Istria (also Croats)), Croats and Serbs in the Hungarian part within the autonomous Kingdom of Croatia-Slavonia and in the Austrian part within the autonomous Kingdom of Dalmatia, and in Bosnia and Herzegovina, under direct control from Vienna. Due to a different position within Austria-Hungary several different goals were prominent among the Southern Slavs of Austria-Hungary. A strong alternative to Pan-Slavism was Austroslavism, especially among the Croats and Slovenes. Because the Serbs were dispersed among several regions, and the fact that they had ties to the independent nation state of Kingdom of Serbia, they were among the strongest supporters of independence of South-Slavs from Austria-Hungary and uniting into a common state under Serbian monarchy.
In 1863, the Association of Serbian Philology commemorated the death of Cyril a thousand years earlier; its president, Dimitrije Matić, spoke of the creation of an ethnically "pure" Slavonic people: "with God's help, there should be a whole Slavonic people with purely Slavonic faces and of purely Slavonic character".
After World War I the creation of the Kingdom of Yugoslavia, under Serbian royalty of the Karađorđević dynasty, united most Southern Slavic-speaking nations regardless of religion and cultural background. The only ones they did not unite with were the Bulgarians. Still, in the years after the Second World War, there were proposals to incorporate Bulgaria into a Greater Yugoslavia thus uniting all south Slavic-speaking nations into one state. The idea was abandoned after the split between Josip Broz Tito and Joseph Stalin in 1948. This led to some bitter sentiment between the people of Yugoslavia and Bulgaria in the aftermath.
At the end of Second World War, the Partisans leader Josip Broz Tito, a Croat, became Yugoslav president, and the country become a socialist republic. Tito advocated Brotherhood and unity which meant equality among the ethnic groups, including non-Slav minorities. This led to relatively peaceful co-existence and prosperity until the breakup of the federation.
Although early Pan-Slavism had found support among some Poles, it soon lost its appeal as the movement became dominated by Russia. While Russian Pan-Slavists spoke of liberation of other Slavs through Russian actions, parts of Poland had been ruled by the Russian Empire since the Partitions of Poland. At different points in history, Poland often saw itself in partnership with non-Slavic nations, such as Hungary, Saxony, Sweden and Lithuania under the Polish–Lithuanian Commonwealth. Especially after 1795, Revolutionary and Napoleonic France was held in high regard by most Poles, and seen as the main champion of the reconstitution of their country, particularly since it was a mutual enemy of Austria, Prussia and Russia. The influence of 19th-century Pan-Slavism had little impact in Poland except for creating sympathy towards the other oppressed Slavic nations and their aspirations to independence. At the same time, while Pan-Slavism worked against Austria-Hungary among the South Slavs, Poles enjoyed a wide autonomy within the state and assumed a loyalist position towards the Habsburgs. Within the Austro-Hungarian polity, they were able to develop their national culture and preserve the Polish language, both of which were under threat in both the German and Russian Empires. A Pan-Slavic federation was proposed, but on the condition that the Russian Empire would be excluded from such an entity. After Poland regained its independence (from Germany, Austria and Russia) in 1918, no major force considered Pan-Slavism a serious alternative, viewing it as little more than a code word for Russification. During Poland's communist era, the USSR used Pan-Slavism as a propaganda tool to justify its control over the country. Pan-Slavism is not part of current mainstream politics and is widely seen as an ideology of Russian imperialism.
Joseph Conrad wrote in "Notes on Life and Letters" that "between Polonism and Slavonism there is not so much hatred as a complete and ineradicable incompatibility", and argued that nothing was more foreign than what the literary world calls Slavonism to his "individual" sensibility and "the whole Polish mentality".
Pan-Slavism is popular amongst immigrants from the former USSR to Slavic countries of the European Union. It expresses fierce populism, nostalgia for the Soviet era, and strong anti-Western sentiments.
During the time of the Soviet Union, Bolshevik teachings viewed Pan-Slavism as a reactionary element formerly used by the Russian Empire. As a result, Bolsheviks viewed it as contrary to their Marxist ideology. However, with the emergence of World War II, the Stalinist government saw fit to utilize Pan-Slavic politics, resulting in the Pan-Slavic congress being held in Moscow in 1942.
The authentic idea of the unity of the Slavic people had all but vanished after World War I, when the maxim "Versailles and Trianon have put an end to all Slavisms" gained currency, and it was finally laid to rest with the fall of communism in Central and Eastern Europe in the late 1980s. With the breakup of federal states such as Czechoslovakia and Yugoslavia and the problem of Russian dominance in any proposed all-Slavic organisation, the idea of Pan-Slavic unity is mostly considered dead in the Western world. Varying relations between the Slavic countries exist nowadays, ranging from mutual respect on an equal footing and sympathy towards one another, through traditional dislike and enmity, to indifference. None, other than culture- and heritage-oriented organisations, are currently considered a form of rapprochement among the countries with Slavic origins. The political parties which include Pan-Slavism as part of their program usually live on the fringe of the political spectrum (in Poland, for example, candidates from Związek Słowiański received no more than a few thousand votes). In modern times, appeals to Pan-Slavism are often made in Belarus, Russia, Serbia and Slovakia.
The similarity of Slavic languages inspired many people to create Pan-Slavic languages, i.e., zonal constructed languages for all Slavic people to communicate with one another. Several of these languages were created in the past, but due to the Internet, many more Pan-Slavic languages were created in the Digital Age. The most popular modern Pan-Slavic language is Interslavic. | https://en.wikipedia.org/wiki?curid=24724 |
Positron
The positron or antielectron is the antiparticle or the antimatter counterpart of the electron. The positron has an electric charge of +1 "e", a spin of 1/2 (the same as the electron), and has the same mass as an electron. When a positron collides with an electron, annihilation occurs. If this collision occurs at low energies, it results in the production of two or more photons.
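The energetics of low-energy annihilation follow directly from mass–energy equivalence: essentially all of the available energy is the rest energy of the two particles, and conservation of momentum forbids a single photon in free space, so the minimal outcome is a pair of photons carrying about 511 keV each:

$$ e^{+} + e^{-} \to \gamma + \gamma, \qquad E_{\gamma} = m_{e} c^{2} \approx 511\ \mathrm{keV}. $$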
Positrons can be created by positron emission radioactive decay (through weak interactions), or by pair production from a sufficiently energetic photon which is interacting with an atom in a material.
In 1928, Paul Dirac published a paper proposing that electrons can have both a positive and negative charge. This paper introduced the Dirac equation, a unification of quantum mechanics, special relativity, and the then-new concept of electron spin to explain the Zeeman effect. The paper did not explicitly predict a new particle but did allow for electrons having either positive or negative energy as solutions. Hermann Weyl then published a paper discussing the mathematical implications of the negative energy solution. The positive-energy solution explained experimental results, but Dirac was puzzled by the equally valid negative-energy solution that the mathematical model allowed. Quantum mechanics did not allow the negative energy solution to simply be ignored, as classical mechanics often did in such equations; the dual solution implied the possibility of an electron spontaneously jumping between positive and negative energy states. However, no such transition had yet been observed experimentally.
Dirac wrote a follow-up paper in December 1929 that attempted to explain the unavoidable negative-energy solution for the relativistic electron. He argued that "... an electron with negative energy moves in an external [electromagnetic] field as though it carries a positive charge." He further asserted that all of space could be regarded as a "sea" of negative energy states that were filled, so as to prevent electrons jumping between positive energy states (negative electric charge) and negative energy states (positive charge). The paper also explored the possibility of the proton being an island in this sea, and that it might actually be a negative-energy electron. Dirac acknowledged that the proton having a much greater mass than the electron was a problem, but expressed "hope" that a future theory would resolve the issue.
Robert Oppenheimer argued strongly against the proton being the negative-energy electron solution to Dirac's equation. He asserted that if it were, the hydrogen atom would rapidly self-destruct. Persuaded by Oppenheimer's argument, Dirac published a paper in 1931 that predicted the existence of an as-yet-unobserved particle that he called an "anti-electron" that would have the same mass and the opposite charge as an electron and that would mutually annihilate upon contact with an electron.
Feynman, and earlier Stueckelberg, proposed an interpretation of the positron as an electron moving backward in time, reinterpreting the negative-energy solutions of the Dirac equation. Electrons moving backward in time would have a positive electric charge. Wheeler invoked this concept to explain the identical properties shared by all electrons, suggesting that "they are all the same electron" with a complex, self-intersecting worldline. Yoichiro Nambu later applied it to all production and annihilation of particle-antiparticle pairs, stating that "the eventual creation and annihilation of pairs that may occur now and then is no creation or annihilation, but only a change of direction of moving particles, from the past to the future, or from the future to the past." The backwards in time point of view is nowadays accepted as completely equivalent to other pictures, but it does not have anything to do with the macroscopic terms "cause" and "effect", which do not appear in a microscopic physical description.
Dmitri Skobeltsyn first observed the positron in 1929. While using a Wilson cloud chamber to try to detect gamma radiation in cosmic rays, Skobeltsyn detected particles that acted like electrons but curved in the opposite direction in an applied magnetic field.
Likewise, in 1929 Chung-Yao Chao, a graduate student at Caltech, noticed some anomalous results that indicated particles behaving like electrons, but with a positive charge, though the results were inconclusive and the phenomenon was not pursued.
Carl David Anderson discovered the positron on 2 August 1932, for which he won the Nobel Prize for Physics in 1936. Anderson did not coin the term "positron", but allowed it at the suggestion of the "Physical Review" journal editor to whom he submitted his discovery paper in late 1932. The positron was the first evidence of antimatter and was discovered when Anderson allowed cosmic rays to pass through a cloud chamber and a lead plate. A magnet surrounded this apparatus, causing particles to bend in different directions based on their electric charge. The ion trail left by each positron appeared on the photographic plate with a curvature matching the mass-to-charge ratio of an electron, but in a direction that showed its charge was positive.
Anderson wrote in retrospect that the positron could have been discovered earlier based on Chung-Yao Chao's work, if only it had been followed up on. Frédéric and Irène Joliot-Curie in Paris had evidence of positrons in old photographs when Anderson's results came out, but they had dismissed them as protons.
The positron had also been contemporaneously discovered by Patrick Blackett and Giuseppe Occhialini at the Cavendish Laboratory in 1932. Blackett and Occhialini had delayed publication to obtain more solid evidence, so Anderson was able to publish the discovery first.
Positrons are produced naturally in β+ decays of naturally occurring radioactive isotopes (for example, potassium-40) and in interactions of gamma quanta (emitted by radioactive nuclei) with matter. Antineutrinos are another kind of antiparticle produced by natural radioactivity (β− decay). Many different kinds of antiparticles are also produced by (and contained in) cosmic rays. In research published in 2011 by the American Astronomical Society positrons were discovered originating above thunderstorm clouds; positrons are produced in gamma-ray flashes created by electrons accelerated by strong electric fields in the clouds. Antiprotons have also been found to exist in the Van Allen Belts around the Earth by the PAMELA module.
Antiparticles, of which the most common are positrons due to their low mass, are also produced in any environment with a sufficiently high temperature (mean particle energy greater than the pair production threshold). During the period of baryogenesis, when the universe was extremely hot and dense, matter and antimatter were continually produced and annihilated. The presence of remaining matter, and absence of detectable remaining antimatter, also called baryon asymmetry, is attributed to CP-violation: a violation of the CP-symmetry relating matter to antimatter. The exact mechanism of this violation during baryogenesis remains a mystery.
Positron production from radioactive decay can be considered both artificial and natural production, as the generation of the radioisotope can be natural or artificial. Perhaps the best known naturally occurring positron-producing radioisotope is potassium-40, a long-lived isotope of potassium which occurs as a primordial isotope of potassium. Even though it makes up only a small fraction of natural potassium (0.0117%), it is the single most abundant radioisotope in the human body. In a human body of 70 kg mass, about 4,400 nuclei of 40K decay per second. The activity of natural potassium is 31 Bq/g. About 0.001% of these 40K decays produce about 4,000 natural positrons per day in the human body. These positrons soon find an electron, undergo annihilation, and produce pairs of 511 keV photons, in a process similar (but of much lower intensity) to that which happens during a PET scan nuclear medicine procedure.
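The figure of about 4,000 positrons per day can be checked from the numbers quoted above (4,400 decays per second, of which 0.001%, a fraction of 10^-5, produce a positron):

$$ 4400\ \mathrm{s^{-1}} \times 10^{-5} \times 86\,400\ \mathrm{s/day} \approx 3.8 \times 10^{3}\ \text{positrons per day}. $$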
Recent observations indicate black holes and neutron stars produce vast amounts of positron-electron plasma in astrophysical jets. Large clouds of positron-electron plasma have also been associated with neutron stars.
Satellite experiments have found evidence of positrons (as well as a few antiprotons) in primary cosmic rays, amounting to less than 1% of the particles in primary cosmic rays. These do not appear to be the products of large amounts of antimatter from the Big Bang, or indeed complex antimatter in the universe (evidence for which is lacking, see below). Rather, the antimatter in cosmic rays appears to consist of only these two elementary particles, probably made in energetic processes long after the Big Bang.
Preliminary results from the presently operating Alpha Magnetic Spectrometer ("AMS-02") on board the International Space Station show that positrons in the cosmic rays arrive with no directionality, and with energies that range from 10 to 250 GeV. In September 2014, new results with almost twice as much data were presented in a talk at CERN and published in Physical Review Letters. A new measurement of positron fraction up to 500 GeV was reported, showing that the positron fraction peaks at a maximum of about 16% of total electron+positron events, around an energy of 275 ± 32 GeV. At higher energies, up to 500 GeV, the ratio of positrons to electrons begins to fall again. The absolute flux of positrons also begins to fall before 500 GeV, but peaks at energies far higher than electron energies, which peak about 10 GeV. These results have been interpreted by some as possible evidence of positron production in annihilation events of massive dark matter particles.
Positrons, like anti-protons, do not appear to originate from any hypothetical "antimatter" regions of the universe. On the contrary, there is no evidence of complex antimatter atomic nuclei, such as antihelium nuclei (i.e., anti-alpha particles), in cosmic rays. These are actively being searched for. A prototype of the "AMS-02", designated "AMS-01", was flown into space aboard the Space Shuttle "Discovery" on STS-91 in June 1998. By not detecting any antihelium at all, the "AMS-01" established an upper limit of 1.1×10−6 for the antihelium to helium flux ratio.
Physicists at the Lawrence Livermore National Laboratory in California have used a short, ultra-intense laser to irradiate a millimeter-thick gold target and produce more than 100 billion positrons. Significant laboratory production of 5 MeV positron-electron beams now allows investigation of multiple characteristics, such as how different elements react to 5 MeV positron interactions or impacts, how energy is transferred to particles, and the shock effect of gamma-ray bursts (GRBs).
Certain kinds of particle accelerator experiments involve colliding positrons and electrons at relativistic speeds. The high impact energy and the mutual annihilation of these matter/antimatter opposites create a fountain of diverse subatomic particles. Physicists study the results of these collisions to test theoretical predictions and to search for new kinds of particles.
The ALPHA experiment combines positrons with antiprotons to study properties of antihydrogen.
Gamma rays, emitted indirectly by a positron-emitting radionuclide (tracer), are detected in positron emission tomography (PET) scanners used in hospitals. PET scanners create detailed three-dimensional images of metabolic activity within the human body.
An experimental tool called positron annihilation spectroscopy (PAS) is used in materials research to detect variations in density, defects, displacements, or even voids, within a solid material. | https://en.wikipedia.org/wiki?curid=24731 |
Phencyclidine
Phencyclidine or phenylcyclohexyl piperidine (PCP), also known as angel dust among other names, is a drug used for its mind-altering effects. PCP may cause hallucinations, distorted perceptions of sounds, and violent behavior. As a recreational drug, it is typically smoked, but may be taken by mouth, snorted, or injected. It may also be mixed with cannabis or tobacco.
Adverse effects may include seizures, coma, addiction, and an increased risk of suicide. Flashbacks may occur despite stopping usage. Chemically, PCP is a member of the arylcyclohexylamine class, and pharmacologically, it is a dissociative anesthetic. PCP works primarily as an NMDA receptor antagonist.
PCP is most commonly used in the United States. While usage there peaked in the 1970s, visits to emergency departments as a result of the drug increased between 2005 and 2011. As of 2017, in the United States about 1% of people in grade twelve reported using PCP in the prior year, while 2.9% of those over the age of 25 reported using it at some point in their lives.
PCP was initially made in 1926 and brought to market as an anesthetic medication in the 1950s. Its use in humans was disallowed in the United States in 1965 due to the high rates of side effects while its use in animals was disallowed in 1978. Moreover, ketamine was discovered and was better tolerated as an anesthetic. PCP is classified as a schedule II drug in the United States. A number of derivatives of PCP have been sold for recreational and non-medical use.
Phencyclidine is used for its ability to induce a dissociative state.
Behavioral effects can vary by dosage. Low doses produce a numbness in the extremities and intoxication, characterized by staggering, unsteady gait, slurred speech, bloodshot eyes, and loss of balance. Moderate doses (5–10 mg intranasal, or 0.01–0.02 mg/kg intramuscular or intravenous) will produce analgesia and anesthesia. High doses may lead to convulsions. The drug is often illegally produced under poorly controlled conditions; this means that users may be unaware of the actual dose they are taking.
Psychological effects include severe changes in body image, loss of ego boundaries, paranoia, and depersonalization. Psychosis, agitation and dysphoria, hallucinations, blurred vision, euphoria, and suicidal impulses are also reported, as well as occasional aggressive behavior. Like many other drugs, PCP has been known to alter mood states in an unpredictable fashion, causing some individuals to become detached, and others to become animated. PCP may induce feelings of strength, power, and invulnerability as well as a numbing effect on the mind.
Studies by the Drug Abuse Warning Network in the 1970s show that media reports of PCP-induced violence are greatly exaggerated and that incidents of violence are unusual and often limited to individuals with reputations for aggression regardless of drug use. Although uncommon, events of PCP-intoxicated individuals acting in an unpredictable fashion, possibly driven by their delusions or hallucinations, have been publicized. One example is the case of Big Lurch, a former rapper with a history of violent crime, who was convicted of murdering and cannibalizing his roommate while under the influence of PCP. Other commonly cited types of incidents include inflicting property damage and self-mutilation of various types, such as pulling one's own teeth. These effects were not noted in its medicinal use in the 1950s and 1960s however, and reports of physical violence on PCP have often been shown to be unfounded.
Recreational doses of the drug also occasionally appear to induce a psychotic state that resembles a schizophrenic episode. Users generally report feeling detached from reality.
Symptoms are summarized by the mnemonic device RED DANES: rage, erythema (redness of skin), dilated pupils, delusions, amnesia, nystagmus (oscillation of the eyeball when moving laterally), excitation, and skin dryness.
PCP is self-administered and induces ΔFosB expression in the D1-type medium spiny neurons of the nucleus accumbens, and accordingly, excessive PCP use is known to cause addiction. PCP's rewarding and reinforcing effects are at least partly mediated by blocking the NMDA receptors in the glutamatergic inputs to D1-type medium spiny neurons in the nucleus accumbens. PCP has been shown to produce conditioned place aversion and conditioned place preference in animal studies.
A 2019 review found that the transition rate from a diagnosis of hallucinogen-induced psychosis (which included PCP) to that of schizophrenia was 26%. This was lower than cannabis-induced psychosis (34%) but higher than amphetamine (22%), opioid (12%), alcohol (10%) and sedative (9%) induced psychoses. In comparison, the transition rate for brief, atypical and not otherwise specified psychosis was found to be 36%.
PCP comes in both powder and liquid forms (PCP base is dissolved most often in ether), but typically it is sprayed onto leafy material such as cannabis, mint, oregano, tobacco, parsley, or ginger leaves, then smoked.
Management of PCP intoxication mostly consists of supportive care – controlling breathing, circulation, and body temperature – and, in the early stages, treating psychiatric symptoms. Benzodiazepines, such as lorazepam, are the drugs of choice to control agitation and seizures (when present). Typical antipsychotics such as phenothiazines and haloperidol have been used to control psychotic symptoms, but may produce many undesirable side effects – such as dystonia – and their use is therefore no longer preferred; phenothiazines are particularly risky, as they may lower the seizure threshold, worsen hyperthermia, and boost the anticholinergic effects of PCP. If an antipsychotic is given, intramuscular haloperidol has been recommended.
Forced acid diuresis (with ammonium chloride or, more safely, ascorbic acid) may increase clearance of PCP from the body, and was somewhat controversially recommended in the past as a decontamination measure. However, it is now known that only around 10% of a dose of PCP is removed by the kidneys, which would make increased urinary clearance of little consequence; furthermore, urinary acidification is dangerous, as it may induce acidosis and worsen rhabdomyolysis (muscle breakdown), which is not an unusual manifestation of PCP toxicity.
PCP is well known for its primary action on the NMDA receptor, an ionotropic glutamate receptor, in rats and in rat brain homogenate. As such, PCP is an NMDA receptor antagonist. The role of NMDAR antagonism in the effect of PCP, ketamine, and related dissociative agents was first published in the early 1980s by David Lodge and colleagues. Other NMDA receptor antagonists include ketamine, tiletamine, dextromethorphan, nitrous oxide, and dizocilpine (MK-801).
Research also indicates that PCP inhibits nicotinic acetylcholine receptors (nAChRs) among other mechanisms. Analogues of PCP exhibit varying potency at nACh receptors and NMDA receptors. Findings demonstrate that presynaptic nAChRs and NMDA receptor interactions influence postsynaptic maturation of glutamatergic synapses and consequently impact synaptic development and plasticity in the brain. These effects can lead to inhibition of excitatory glutamate activity in certain brain regions such as the hippocampus and cerebellum thus potentially leading to memory loss as one of the effects of prolonged use. Acute effects on the cerebellum manifest as changes in blood pressure, breathing rate, pulse rate, and loss of muscular coordination during intoxication.
PCP, like ketamine, also acts as a potent dopamine D2High receptor partial agonist in rat brain homogenate and has affinity for the human cloned D2High receptor. This activity may be associated with some of the other more psychotic features of PCP intoxication, which is evidenced by the successful use of D2 receptor antagonists (such as haloperidol) in the treatment of PCP psychosis.
In addition to its well explored interactions with NMDA receptors, PCP has also been shown to inhibit dopamine reuptake, and thereby leads to increased extracellular levels of dopamine and hence increased dopaminergic neurotransmission. However, PCP has little affinity for the human monoamine transporters, including the dopamine transporter (DAT). Instead, its inhibition of monoamine reuptake may be mediated by interactions with allosteric sites on the monoamine transporters. PCP is notably a high-affinity ligand of the PCP site 2 (Ki = 154 nM), a not-well-characterized site associated with monoamine reuptake inhibition.
Studies on rats indicate that PCP interacts indirectly with opioid receptors (endorphin and enkephalin) to produce analgesia.
A binding study assessed PCP at 56 sites including neurotransmitter receptors and transporters and found that PCP had Ki values of >10,000 nM at all sites except the dizocilpine (MK-801) site of the NMDA receptor (Ki = 59 nM), the σ2 receptor (PC12) (Ki = 136 nM), and the serotonin transporter (Ki = 2,234 nM). The study notably found Ki values of >10,000 nM for the D2 receptor, the opioid receptors, the σ1 receptor, and the dopamine and norepinephrine transporters. These results suggest that PCP is a highly selective ligand of the NMDAR and σ2 receptor. However, PCP may also interact with allosteric sites on the monoamine transporters to produce inhibition of monoamine reuptake.
Phencyclidine is an NMDA receptor antagonist that blocks the activity of the NMDA receptor to cause anaesthesia and analgesia without causing cardiorespiratory depression. NMDA is an excitatory receptor in the brain, when activated normally the receptor acts as an ion channel and there is an influx of positive ions through the channel to cause nerve cell depolarisation. Phencyclidine enters the ion channel and binds, reversibly and non-competitively, inside the channel pore to block the entry of positive ions to the cell therefore inhibiting cell depolarisation.
Some studies found that, like other NMDA receptor antagonists, PCP can cause a kind of brain damage called Olney's lesions in rats. Studies conducted on rats showed that high doses of the NMDA receptor antagonist dizocilpine caused reversible vacuoles to form in certain regions of the rats' brains. All studies of Olney's lesions have only been performed on non-human animals and may not apply to humans. One unpublished study by Frank Sharp reportedly showed no damage from the structurally similar NMDA antagonist ketamine at doses far beyond recreational ones, but because the study was never published, its validity is controversial.
PCP has also been shown to cause schizophrenia-like changes in "N"-acetylaspartate and "N"-acetylaspartylglutamate levels in the rat brain, which are detectable both in living rats and upon necropsy examination of brain tissue. It also induces symptoms in humans that mimic schizophrenia. PCP not only produced symptoms similar to schizophrenia, it also yielded electroencephalogram changes in the thalamocortical pathway (increased delta, decreased alpha) and in the hippocampus (increased theta bursts) that were similar to those in schizophrenia. PCP-induced augmentation of dopamine release may link the NMDA and dopamine hypotheses of schizophrenia.
PCP is metabolized into PCHP, PPC and PCAA. The drug is metabolized 90% by oxidative hydroxylation in the liver during the first pass. Metabolites are glucuronidated and excreted in the urine. PCP is excreted 9% in its unchanged form.
When smoked, some of the compound is broken down by heat into 1-phenylcyclohexene (PC) and piperidine.
The effects of PCP begin 15 to 60 minutes after it is taken.
PCP is an arylcyclohexylamine.
Fewer than 30 different analogs of PCP were reported as being used on the street during the 1970s and 1980s, mainly in the United States. Only a few of these compounds were widely used, including rolicyclidine (PCPy), eticyclidine (PCE), and tenocyclidine (TCP). Less common analogs include 3-HO-PCP, 3-MeO-PCMo, and 3-MeO-PCP.
The generalized structural motif required for PCP-like activity is derived from structure-activity relationship studies of PCP derivatives. All of these derivatives are likely to share some of their psychoactive effects with PCP itself, although a range of potencies and varying mixtures of anesthetic, dissociative, and stimulant effects are known, depending on the particular drug and its substituents. In some countries such as the United States, Australia, and New Zealand, all of these compounds would be considered controlled substance analogs of PCP under the Federal Analog Act and are hence illegal drugs if sold for human consumption.
PCP was initially made in 1926 and brought to market as an anesthetic medication in the 1950s. Its anesthetic effects were discovered by Victor Maddox, a chemist at Parke-Davis in Michigan, while investigating synthetic analgesic agents. It was known under the developmental code name "CI-395". It was approved for use as an investigational drug under the brand names Sernyl and Sernylan in the 1950s as an anesthetic, but because of its long terminal half-life and adverse side effects, such as hallucinations, mania, delirium, and disorientation, it was removed from the market in 1965 and limited to veterinary use.
PCP is a Schedule II substance in the United States and its ACSCN is 7471. Its manufacturing quota for 2014 was 19 grams.
It is a Schedule I drug by the Controlled Drugs and Substances act in Canada, a List I drug of the Opium Law in the Netherlands, and a Class A substance in the United Kingdom.
PCP began to emerge as a recreational drug in major cities in the United States in 1960s. In 1978, "People" magazine and Mike Wallace of "60 Minutes" called PCP the country's "number one" drug problem. Although recreational use of the drug had always been relatively low, it began declining significantly in the 1980s. In surveys, the number of high school students admitting to trying PCP at least once fell from 13% in 1979 to less than 3% in 1990. | https://en.wikipedia.org/wiki?curid=24733 |
Product of group subsets
In mathematics, one can define a product of group subsets in a natural way. If "S" and "T" are subsets of a group "G", then their product is the subset of "G" defined by "ST" = {"st" : "s" ∈ "S", "t" ∈ "T"}.
The subsets "S" and "T" need not be subgroups for this product to be well defined. The associativity of this product follows from that of the group product. The product of group subsets therefore defines a natural monoid structure on the power set of "G".
A lot more can be said in the case where "S" and "T" are subgroups. The product of two subgroups "S" and "T" of a group "G" is itself a subgroup of "G" if and only if "ST" = "TS".
If "S" and "T" are subgroups of "G", their product need not be a subgroup (for example, two distinct subgroups of order 2 in the symmetric group on 3 symbols). This product is sometimes called the "Frobenius product". In general, the product of two subgroups "S" and "T" is a subgroup if and only if "ST" = "TS", and the two subgroups are said to permute. (Walter Ledermann has called this fact the "Product Theorem", but this name, just like "Frobenius product" is by no means standard.) In this case, "ST" is the group generated by "S" and "T"; i.e., "ST" = "TS" = ⟨"S" ∪ "T"⟩.
If either "S" or "T" is normal then the condition "ST" = "TS" is satisfied and the product is a subgroup. If both "S" and "T" are normal, then the product is normal as well.
If "S" and "T" are finite subgroups of a group "G", then "ST" is a subset of "G" of size "|ST|" given by the "product formula":
Note that this applies even if neither "S" nor "T" is normal.
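As a worked example, take the two distinct subgroups of order 2 in the symmetric group on 3 symbols mentioned above, say "S" = {"e", (1 2)} and "T" = {"e", (1 3)}. They intersect only in the identity, so the product formula gives

$$ |ST| = \frac{|S|\,|T|}{|S \cap T|} = \frac{2 \cdot 2}{1} = 4, $$

and since 4 does not divide |"G"| = 6, Lagrange's theorem confirms that "ST" is not a subgroup.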
The following modular law (for groups) holds for any "Q" a subgroup of "S", where "T" is any other arbitrary subgroup (and both "S" and "T" are subgroups of some group "G"): "Q"("S" ∩ "T") = "S" ∩ ("QT").
The two products that appear in this equality are not necessarily subgroups.
If "QT" is a subgroup (equivalently, as noted above, if "Q" and "T" permute) then "QT" = ⟨"Q" ∪ "T"⟩ = "Q" ∨ "T"; i.e., "QT" is the join of "Q" and "T" in the lattice of subgroups of "G", and the modular law for such a pair may also be written as "Q" ∨ ("S" ∩ "T") = "S" ∩ ("Q ∨ T"), which is the equation that defines a modular lattice if it holds for any three elements of the lattice with "Q" ≤ "S". In particular, since normal subgroups permute with each other, they form a modular sublattice.
A group in which every subgroup permutes is called an Iwasawa group. The subgroup lattice of an Iwasawa group is thus a modular lattice, so these groups are sometimes called "modular groups" (although this latter term may have other meanings.)
The assumption in the modular law for groups (as formulated above) that "Q" is a subgroup of "S" is essential. If "Q" is "not" a subgroup of "S", then the tentative, more general distributive property that one may consider "S" ∩ ("QT") = ("S" ∩ "Q")("S" ∩ "T") is "false".
In particular, if "S" and "T" intersect only in the identity, then every element of "ST" has a unique expression as a product "st" with "s" in "S" and "t" in "T". If "S" and "T" also commute, then "ST" is a group, and is called a Zappa–Szép product. Even further, if "S" or "T" is normal in "ST", then "ST" coincides with the semidirect product of "S" and "T". Finally, if both "S" and "T" are normal in "ST", then "ST" coincides with the direct product of "S" and "T".
If "S" and "T" are subgroups whose intersection is the trivial subgroup (identity element) and additionally "ST" = "G", then "S" is called a complement of "T" and vice versa.
By a (locally unambiguous) abuse of terminology, two subgroups that intersect only in the (otherwise obligatory) identity are sometimes called disjoint.
A question that arises in the case of a non-trivial intersection between a normal subgroup "N" and a subgroup "K" is: what is the structure of the quotient "NK"/"N"? Although one might be tempted to just "cancel out" "N" and say the answer is "K", that is not correct, because a homomorphism with kernel "N" will also "collapse" (map to 1) all elements of "K" that happen to be in "N". Thus the correct answer is that "NK"/"N" is isomorphic to "K"/("N" ∩ "K"). This fact is sometimes called the second isomorphism theorem (although the numbering of these theorems sees some variation between authors). It has also been called the "diamond theorem" by I. Martin Isaacs, because of the shape of the subgroup lattice involved, and the "parallelogram rule" by Paul Moritz Cohn, who thus emphasized the analogy with the parallelogram rule for vectors: in the resulting subgroup lattice, the two sides assumed to represent the quotient groups ("NK")/"N" and "K"/("N" ∩ "K") are "equal" in the sense of isomorphism.
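For a concrete instance, take "G" = "S"3, "N" = "A"3 (normal, of order 3) and "K" = {1, (12)} (of order 2). Then "N" ∩ "K" is trivial and "NK" = "S"3, so "NK"/"N" has order 6/3 = 2 and is indeed isomorphic to "K"/("N" ∩ "K") ≅ "K".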
Frattini's argument guarantees the existence of a product of subgroups (giving rise to the whole group) in a case where the intersection is not necessarily trivial (and for this latter reason the two subgroups are not complements). More specifically, if "G" is a finite group with normal subgroup "N", and if "P" is a Sylow "p"-subgroup of "N", then "G" = "N""G"("P")"N", where "N""G"("P") denotes the normalizer of "P" in "G". (Note that the normalizer of "P" includes "P", so the intersection between "N" and "N""G"("P") is at least "P".)
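For a concrete instance, take "G" = "S"4, "N" = "A"4 and "P" = ⟨(123)⟩, a Sylow 3-subgroup of "N". "S"4 has four Sylow 3-subgroups, so |"N""G"("P")| = 24/4 = 6, and "N""G"("P") ∩ "N" = "P" has order 3; the product formula then gives |"N""G"("P")"N"| = 6 · 12/3 = 24, recovering "G" = "N""G"("P")"N".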
In a semigroup S, the product of two subsets defines a semigroup structure on P(S), the power set of the semigroup S; furthermore, P(S) is a semiring with addition as union (of subsets) and multiplication as product of subsets. | https://en.wikipedia.org/wiki?curid=24734 |
4-Phenyl-4-(1-piperidinyl)cyclohexanol
4-Phenyl-4-(1-piperidinyl)cyclohexanol, also known as PPC, is an organic chemical which is a metabolite of phencyclidine (PCP). It can be detected in the hair of PCP users.
PPC has been shown to cause increases in locomotor activity in lab mice. | https://en.wikipedia.org/wiki?curid=24737 |
Piperidine
Piperidine is an organic compound with the molecular formula (CH2)5NH. This heterocyclic amine consists of a six-membered ring containing five methylene bridges (–CH2–) and one amine bridge (–NH–). It is a colorless liquid with an odor described as objectionable, and typical of amines. The name comes from the genus name "Piper", which is the Latin word for pepper. Although piperidine is a common organic compound, it is best known as a representative structural element within many pharmaceuticals and alkaloids, such as the naturally occurring solenopsins.
Piperidine was first reported in 1850 by the Scottish chemist Thomas Anderson and again, independently, in 1852 by the French chemist Auguste Cahours, who named it. Both men obtained piperidine by reacting piperine with nitric acid.
Industrially, piperidine is produced by the hydrogenation of pyridine, usually over a molybdenum disulfide catalyst: C5H5N + 3 H2 → C5H11N.
Pyridine can also be reduced to piperidine via a modified Birch reduction using sodium in ethanol.
Piperidine itself has been obtained from black pepper, from "Psilocaulon absimile" (Aizoaceae), and from "Petrosimonia monandra".
The piperidine structural motif is present in numerous natural alkaloids. These include piperine, which gives black pepper its spicy taste. This gave the compound its name. Other examples are the fire ant toxin solenopsin, the nicotine analog anabasine of tree tobacco ("Nicotiana glauca"), lobeline of Indian tobacco, and the toxic alkaloid coniine from poison hemlock, which was used to put Socrates to death.
Piperidine prefers a chair conformation, similar to cyclohexane. Unlike cyclohexane, piperidine has two distinguishable chair conformations: one with the N–H bond in an axial position, and the other in an equatorial position. After much controversy during the 1950s–1970s, the equatorial conformation was found to be more stable by 0.72 kcal/mol in the gas phase. In nonpolar solvents, a range between 0.2 and 0.6 kcal/mol has been estimated, but in polar solvents the axial conformer may be more stable. The two conformers interconvert rapidly through nitrogen inversion; the free energy activation barrier for this process, estimated at 6.1 kcal/mol, is substantially lower than the 10.4 kcal/mol for ring inversion. In the case of "N"-methylpiperidine, the equatorial conformation is preferred by 3.16 kcal/mol, which is much larger than the preference in methylcyclohexane, 1.74 kcal/mol.
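As a rough back-of-the-envelope illustration (a sketch based on the gas-phase value quoted above, not additional data from the source), the 0.72 kcal/mol preference can be converted into an equatorial:axial population ratio at room temperature via the Boltzmann relation K = exp(ΔG/RT):

    import math

    R = 1.987e-3    # gas constant in kcal/(mol*K)
    T = 298.0       # room temperature in kelvin
    dG = 0.72       # kcal/mol by which the N-H equatorial conformer is more stable

    K = math.exp(dG / (R * T))    # equilibrium constant [equatorial]/[axial]
    print(f"K ~ {K:.2f}, equatorial fraction ~ {K / (1 + K):.0%}")   # K ~ 3.4, ~77%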
Piperidine is a widely used secondary amine. It is commonly used to convert ketones to enamines. Enamines derived from piperidine can be used in the Stork enamine alkylation reaction.
Upon treatment with calcium hypochlorite, piperidine converts to N-chloropiperidine, a chloramine with the formula C5H10NCl. The resulting chloramine undergoes dehydrohalogenation to afford the cyclic imine.
Piperidine is used as a solvent and as a base. The same is true for certain derivatives: "N"-formylpiperidine is a polar aprotic solvent with better hydrocarbon solubility than other amide solvents, and 2,2,6,6-tetramethylpiperidine is a highly sterically hindered base, useful because of its low nucleophilicity and high solubility in organic solvents.
A significant industrial application of piperidine is for the production of dipiperidinyl dithiuram tetrasulfide, which is used as an accelerator of the sulfur vulcanization of rubber.
Piperidine and its derivatives are ubiquitous building blocks in pharmaceuticals and fine chemicals. The piperidine structure is found, for example, in drugs such as methylphenidate, haloperidol, and paroxetine.
Piperidine is also commonly used in chemical degradation reactions, such as the sequencing of DNA in the cleavage of particular modified nucleotides. Piperidine is also commonly used as a base for the deprotection of Fmoc-amino acids used in solid-phase peptide synthesis.
Piperidine is listed as a Table II precursor under the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances due to its use (peaking in the 1970s) in the clandestine manufacture of PCP (1-(1-phenylcyclohexyl)piperidine, also known as angel dust, sherms, wet, etc.). | https://en.wikipedia.org/wiki?curid=24739 |
Political question
In United States constitutional law, the political question doctrine is closely linked to the concept of justiciability, as it comes down to a question of whether or not the court system is an appropriate forum in which to hear the case. This is because the court system only has authority to hear and decide a legal question, not a political question. Legal questions are deemed to be justiciable, while political questions are nonjusticiable. One scholar explained:
A ruling of nonjusticiability ultimately prevents the issue that brings the case before the court from being resolved in a court of law. In the typical case where there is a finding of nonjusticiability due to the political question doctrine, the issue presented before the court is usually so specific that the Constitution gives all power to one of the coordinate political branches, or at the opposite end of the spectrum, the issue presented is so vague that the United States Constitution does not even consider it. A court can only decide issues based on law. The Constitution dictates the different legal responsibilities of each respective branch of government. If there is an issue where the court does not have the Constitution as a guide, there are no legal criteria to use. When there are no specific constitutional duties involved, the issue is to be decided through the democratic process. The court will not engage in political disputes.
A constitutional dispute that requires knowledge of a non-legal character or the use of techniques not suitable for a court or explicitly assigned by the Constitution to the U.S. Congress, or the President of the United States, is a political question, which judges customarily refuse to address.
The doctrine has its roots in the historic Supreme Court case of "Marbury v. Madison" (1803). In that case, Chief Justice John Marshall drew a distinction between two different functions of the U.S. Secretary of State. Marshall stated that when the Secretary of State was performing a purely discretionary matter, such as advising the President on matters of policy, he was not held to any legally identifiable standards. Therefore, some of the Secretary's actions are unable to be reviewed by a court of law.
Unlike the rules of standing, ripeness, and mootness, when the political question doctrine applies, a particular question is beyond judicial competence no matter who raises it, how immediate the interests it affects, or how burning the controversy. The doctrine is grounded in the separation of powers principle, as well as the federal judiciary's desire to avoid inserting itself into conflicts between branches of the federal government. It is justified by the notion that some questions are best resolved through the political process, with voters approving or correcting the challenged action by voting for or against those involved in the decision, and that some questions are simply beyond judicial capability.
The leading Supreme Court case in the area of the political question doctrine is "Baker v. Carr" (1962). In that case, the Supreme Court held that a claim that the unequal apportionment of a state legislature denied equal protection presented a justiciable issue. In the "Baker" opinion, the Court outlined six characteristics "[p]rominent on the surface of any case held to involve a political question": a textually demonstrable constitutional commitment of the issue to a coordinate political department; a lack of judicially discoverable and manageable standards for resolving it; the impossibility of deciding without an initial policy determination of a kind clearly for nonjudicial discretion; the impossibility of a court's undertaking independent resolution without expressing lack of the respect due coordinate branches of government; an unusual need for unquestioning adherence to a political decision already made; and the potentiality of embarrassment from multifarious pronouncements by various departments on one question.
The first factor (a textually demonstrable commitment to another branch) reflects the classical view that the Court must decide all cases and issues before it unless, as a matter of constitutional interpretation, the Constitution itself has committed the determination of the issue to another branch of government. The second and third factors (lack of judicially discoverable standards and involvement of the judiciary in nonjudicial policy determinations) suggest a functional approach, based on practical considerations of how government ought to work. The final three factors (lack of respect for other branches, the need for adherence to a political decision already made, and the possibility of embarrassment) rest on the Court's prudential concern to avoid overreaching or aggrandizement.
While the doctrine remains rather unsettled, its application has been settled in a few decided areas, among them the Guarantee Clause and impeachment.
The Guarantee Clause, Article IV, section 4, requires the federal government to "guarantee to every State in this Union a Republican Form of Government". The Supreme Court has declared that this Clause does not imply any set of "judicially manageable standards which a court could utilize independently in order to identify a State's lawful government".
In "Luther v. Borden" (1849), the Court refused on this ground to decide which group was the legitimate government of Rhode Island. Since then, the Court has consistently refused to resort to the Guarantee Clause as a constitutional source for invalidating state action, such as when asked whether it is lawful for states to adopt laws through referendums.
Article I, section 2 of the Constitution states that the House "shall have the sole power of Impeachment", and Article I, section 3 provides that the "Senate shall have the sole Power to try all Impeachments". Since the Constitution places the sole power of impeachment in two political bodies, impeachment qualifies as a political question. As a result, neither the decision of the House to impeach nor a vote of the Senate to remove a President or any other official can be appealed to any court.
Important cases discussing the political question doctrine include "Marbury v. Madison", "Luther v. Borden", and "Baker v. Carr".
The political question doctrine has also had significance beyond American constitutional law.
A type of act by the French government, the "acte de gouvernement", avoids judicial review because it is too politically sensitive. While the scope of the concept has been reduced over time, there are still acts over which the courts have no jurisdiction, such as matters deemed unseverable from France's diplomatic acts, like the President's decision to launch nuclear tests or to sever financial aid to Iraq. Other acts include the President's decision to dissolve Parliament, to award honors, or to grant amnesty. Such "actes de gouvernement" must be politically based and must also concern domains in which the courts are not competent to judge, e.g. national security and international relations.
The postwar constitution gave the Supreme Court of Japan the power of judicial review, and the court developed its own political question doctrine ("tōchi kōi"). The Supreme Court of Japan was in part trying to avoid deciding the merits of cases under Article 9 of the post-war pacifist constitution, which renounces war and the threat or use of force. Issues arising under Art. 9 include the legitimacy of Japan's Self-Defense Forces, the U.S.-Japan Security Treaty, and the stationing of U.S. forces in Japan.
The "Sunagawa case" is considered the leading precedent on the political question doctrine in Japan. In 1957, in what later became known as the Sunagawa incident, demonstrators entered a U.S. military base in the Tokyo suburb of Sunagawa. By entering the base, the demonstrators violated a special Japanese criminal law based on the U.S.-Japan Security Treaty. A Tokyo District Court found that the U.S. military's presence in Japan was unconstitutional under Art. 9 of the Constitution and acquitted the defendants.
The Supreme Court overturned the district court in a fast-track appeal, implicitly developing the political question doctrine in the ruling. The Court found it inappropriate for the judiciary to judge the constitutionality of highly political matters like the U.S.-Japan Security Treaty, unless they expressly violate the Constitution. On the Security Treaty, the Court saw "an extremely high degree of political consideration" and noted that "there is a certain element of incompatibility in the process of judicial determination of its constitutionality by a court of law which has as its mission the exercise of the purely judicial function." It therefore found that the question should be resolved by the Cabinet, the Diet, and ultimately by the people through elections. The presence of U.S. forces, moreover, did not violate Article 9 of the pacifist Constitution, because it did not involve forces under Japanese command.
Thereafter, the political question doctrine became a barrier for challenges under Art. 9. Under the clear mistake rule developed by the Court, it defers to the political branches on Art. 9 issues so long as the act is "not obviously unconstitutional and void."
Other notable cases on the political question doctrine in Japan include the "Tomabechi case", which concerned whether the dissolution of the Diet was valid. In the Tomabechi case, the Court also decided against judicial review by implicitly invoking the political question doctrine, citing the separation of powers as justification. In addition, the Court announced that in political question cases not related to Art. 9, the clear mistake rule does not apply and judicial review is categorically prohibited.
Before international courts, the International Court of Justice has dealt with the doctrine in its advisory function, and the European Court of Human Rights has engaged with the doctrine through the margin of appreciation.
Within European Union law, the Court of Justice of the European Union has never addressed the political question doctrine in its jurisprudence explicitly, yet it has been argued that there are traces of the doctrine present in its rulings. | https://en.wikipedia.org/wiki?curid=24740 |
Paul Dirac
Paul Adrien Maurice Dirac (; 8 August 1902 – 20 October 1984) was an English theoretical physicist who is regarded as one of the most significant physicists of the 20th century.
Dirac made fundamental contributions to the early development of both quantum mechanics and quantum electrodynamics. Among other discoveries, he formulated the Dirac equation which describes the behaviour of fermions and predicted the existence of antimatter. Dirac shared the 1933 Nobel Prize in Physics with Erwin Schrödinger "for the discovery of new productive forms of atomic theory". He also made significant contributions to the reconciliation of general relativity with quantum mechanics.
Dirac was regarded by his friends and colleagues as unusual in character. In a 1926 letter to Paul Ehrenfest, Albert Einstein wrote of Dirac, "I have trouble with Dirac. This balancing on the dizzying path between genius and madness is awful." In another letter he wrote, "I don't understand Dirac at all (Compton effect)."
He was the Lucasian Professor of Mathematics at the University of Cambridge, a member of the Center for Theoretical Studies, University of Miami, and spent the last decade of his life at Florida State University.
Paul Adrien Maurice Dirac was born at his parents' home in Bristol, England, on 8 August 1902, and grew up in the Bishopston area of the city. His father, Charles Adrien Ladislas Dirac, was an immigrant from Saint-Maurice, Switzerland, who worked in Bristol as a French teacher. His mother, Florence Hannah Dirac, née Holten, the daughter of a ship's captain, was born in Cornwall, England, and worked as a librarian at the Bristol Central Library. Paul had a younger sister, Béatrice Isabelle Marguerite, known as Betty, and an older brother, Reginald Charles Félix, known as Felix, who committed suicide in March 1925. Dirac later recalled: "My parents were terribly distressed. I didn't know they cared so much ... I never knew that parents were supposed to care for their children, but from then on I knew."
Charles and the children were officially Swiss nationals until they became naturalised on 22 October 1919. Dirac's father was strict and authoritarian, although he disapproved of corporal punishment. Dirac had a strained relationship with his father, so much so that after his father's death, Dirac wrote, "I feel much freer now, and I am my own man." Charles forced his children to speak to him only in French, so that they might learn the language. When Dirac found that he could not express what he wanted to say in French, he chose to remain silent.
Dirac was educated first at Bishop Road Primary School and then at the all-boys Merchant Venturers' Technical College (later Cotham School), where his father was a French teacher. The school was an institution attached to the University of Bristol, which shared grounds and staff. It emphasised technical subjects like bricklaying, shoemaking and metal work, and modern languages. This was unusual at a time when secondary education in Britain was still dedicated largely to the classics, and something for which Dirac would later express his gratitude.
Dirac studied electrical engineering on a City of Bristol University Scholarship at the University of Bristol's engineering faculty, which was co-located with the Merchant Venturers' Technical College. Shortly before he completed his degree in 1921, he sat for the entrance examination for St John's College, Cambridge. He passed and was awarded a £70 scholarship, but this fell short of the amount of money required to live and study at Cambridge. Despite his having graduated with a first class honours Bachelor of Science degree in engineering, the economic climate of the post-war depression was such that he was unable to find work as an engineer. Instead, he took up an offer to study for a Bachelor of Arts degree in mathematics at the University of Bristol free of charge. He was permitted to skip the first year of the course owing to his engineering degree.
In 1923, Dirac graduated, once again with first class honours, and received a £140 scholarship from the Department of Scientific and Industrial Research. Along with his £70 scholarship from St John's College, this was enough to live at Cambridge. There, Dirac pursued his interests in the theory of general relativity, an interest he had gained earlier as a student in Bristol, and in the nascent field of quantum physics, under the supervision of Ralph Fowler. From 1925 to 1928 he held an 1851 Research Fellowship from the Royal Commission for the Exhibition of 1851. He completed his PhD in June 1926 with the first thesis on quantum mechanics to be submitted anywhere. He then continued his research in Copenhagen and Göttingen.
In 1937, Dirac married Margit Wigner (Eugene Wigner's sister). He adopted Margit's two children, Judith and Gabriel. Paul and Margit Dirac had two children together, both daughters, Mary Elizabeth and Florence Monica.
Margit, known as Manci, visited her brother in 1934 in Princeton, New Jersey, from her native Hungary and, while at dinner at the Annex Restaurant, met the "lonely-looking man at the next table". This account from a Korean physicist, Y. S. Kim, who met and was influenced by Dirac, also says: "It is quite fortunate for the physics community that Manci took good care of our respected Paul A. M. Dirac. Dirac published eleven papers during the period 1939–46... Dirac was able to maintain his normal research productivity only because Manci was in charge of everything else".
Dirac was known among his colleagues for his precise and taciturn nature. His colleagues in Cambridge jokingly defined a unit called a "dirac", which was one word per hour. When Niels Bohr complained that he did not know how to finish a sentence in a scientific article he was writing, Dirac replied, "I was taught at school never to start a sentence without knowing the end of it." He criticised the physicist J. Robert Oppenheimer's interest in poetry: "The aim of science is to make difficult things understandable in a simpler way; the aim of poetry is to state simple things in an incomprehensible way. The two are incompatible."
Dirac himself wrote in his diary during his postgraduate years that he concentrated solely on his research, and stopped only on Sunday when he took long strolls alone.
An anecdote recounted in a review of the 2009 biography tells of Werner Heisenberg and Dirac sailing on an ocean liner to a conference in Japan in August 1929. "Both still in their twenties, and unmarried, they made an odd couple. Heisenberg was a ladies' man who constantly flirted and danced, while Dirac—'an Edwardian geek', as biographer Graham Farmelo puts it—suffered agonies if forced into any kind of socialising or small talk. 'Why do you dance?' Dirac asked his companion. 'When there are nice girls, it is a pleasure,' Heisenberg replied. Dirac pondered this notion, then blurted out: 'But, Heisenberg, how do you know beforehand that the girls are nice?'"
Margit Dirac told both George Gamow and Anton Capri in the 1960s that her husband had said to a house visitor, "Allow me to present Wigner's sister, who is now my wife."
Another story told of Dirac is that when he first met the young Richard Feynman at a conference, he said after a long silence, "I have an equation. Do you have one too?"
After he presented a lecture at a conference, one colleague raised his hand and said: "I don't understand the equation on the top-right-hand corner of the blackboard". After a long silence, the moderator asked Dirac if he wanted to answer the question, to which Dirac replied: "That was not a question, it was a comment."
Dirac was also noted for his personal modesty. He called the equation for the time evolution of a quantum-mechanical operator, which he was the first to write down, the "Heisenberg equation of motion". Most physicists speak of Fermi–Dirac statistics for half-integer-spin particles and Bose–Einstein statistics for integer-spin particles. While lecturing later in life, Dirac always insisted on calling the former "Fermi statistics". He referred to the latter as "Einstein statistics" for reasons, he explained, of "symmetry".
Heisenberg recollected a conversation about Einstein and Planck's views on religion among young participants at the 1927 Solvay Conference: Wolfgang Pauli, Heisenberg and Dirac. Dirac's contribution was a criticism of the political purpose of religion, which Bohr regarded as quite lucid when hearing it from Heisenberg later. Among other things, Dirac said:
I cannot understand why we idle discussing religion. If we are honest—and scientists have to be—we must admit that religion is a jumble of false assertions, with no basis in reality. The very idea of God is a product of the human imagination. It is quite understandable why primitive people, who were so much more exposed to the overpowering forces of nature than we are today, should have personified these forces in fear and trembling. But nowadays, when we understand so many natural processes, we have no need for such solutions. I can't for the life of me see how the postulate of an Almighty God helps us in any way. What I do see is that this assumption leads to such unproductive questions as why God allows so much misery and injustice, the exploitation of the poor by the rich and all the other horrors He might have prevented. If religion is still being taught, it is by no means because its ideas still convince us, but simply because some of us want to keep the lower classes quiet. Quiet people are much easier to govern than clamorous and dissatisfied ones. They are also much easier to exploit. Religion is a kind of opium that allows a nation to lull itself into wishful dreams and so forget the injustices that are being perpetrated against the people. Hence the close alliance between those two great political forces, the State and the Church. Both need the illusion that a kindly God rewards—in heaven if not on earth—all those who have not risen up against injustice, who have done their duty quietly and uncomplainingly. That is precisely why the honest assertion that God is a mere product of the human imagination is branded as the worst of all mortal sins.
Heisenberg's view was tolerant. Pauli, raised as a Catholic, had kept silent after some initial remarks, but when finally he was asked for his opinion, said: "Well, our friend Dirac has got a religion and its guiding principle is 'There is no God and Paul Dirac is His prophet.'" Everybody, including Dirac, burst into laughter.
Later in life, Dirac's views towards the idea of God were less acerbic. As an author of an article appearing in the May 1963 edition of "Scientific American", Dirac wrote:
It seems to be one of the fundamental features of nature that fundamental physical laws are described in terms of a mathematical theory of great beauty and power, needing quite a high standard of mathematics for one to understand it. You may wonder: Why is nature constructed along these lines? One can only answer that our present knowledge seems to show that nature is so constructed. We simply have to accept it. One could perhaps describe the situation by saying that God is a mathematician of a very high order, and He used very advanced mathematics in constructing the universe. Our feeble attempts at mathematics enable us to understand a bit of the universe, and as we proceed to develop higher and higher mathematics we can hope to understand the universe better.
In 1971, at a conference meeting, Dirac expressed his views on the existence of God. Dirac explained that the existence of God could only be justified if an improbable event were to have taken place in the past:
It could be that it is extremely difficult to start life. It might be that it is so difficult to start life that it has happened only once among all the planets... Let us consider, just as a conjecture, that the chance life starting when we have got suitable physical conditions is 10^−100. I don't have any logical reason for proposing this figure, I just want you to consider it as a possibility. Under those conditions ... it is almost certain that life would not have started. And I feel that under those conditions it will be necessary to assume the existence of a god to start off life. I would like, therefore, to set up this connexion between the existence of a god and the physical laws: if physical laws are such that to start off life involves an excessively small chance, so that it will not be reasonable to suppose that life would have started just by blind chance, then there must be a god, and such a god would probably be showing his influence in the quantum jumps which are taking place later on. On the other hand, if life can start very easily and does not need any divine influence, then I will say that there is no god.
Dirac did not commit himself to any definite view, but he described the possibilities for answering the question of God in a scientific manner.
Dirac shared the 1933 Nobel Prize for physics with Erwin Schrödinger "for the discovery of new productive forms of atomic theory". Dirac was also awarded the Royal Medal in 1939 and both the Copley Medal and the Max Planck Medal in 1952. He was elected a Fellow of the Royal Society in 1930, an Honorary Fellow of the American Physical Society in 1948, and an Honorary Fellow of the Institute of Physics, London in 1971. He received the inaugural J. Robert Oppenheimer Memorial Prize in 1969. Dirac became a member of the Order of Merit in 1973, having previously turned down a knighthood as he did not want to be addressed by his first name.
In 1984, Dirac died in Tallahassee, Florida, and was buried at Tallahassee's Roselawn Cemetery. Dirac's childhood home in Bishopston, Bristol is commemorated with a blue plaque, and the nearby Dirac Road is named in recognition of his links with the city of Bristol. A commemorative stone was erected in a garden in Saint-Maurice, Switzerland, the town of origin of his father's family, on 1 August 1991. On 13 November 1995 a commemorative marker, made from Burlington green slate and inscribed with the Dirac equation, was unveiled in Westminster Abbey. The Dean of Westminster, Edward Carpenter, had initially refused permission for the memorial, thinking Dirac to be anti-Christian, but was eventually (over a five-year period) persuaded to relent.
Dirac established the most general theory of quantum mechanics and discovered the relativistic equation for the electron, which now bears his name. The remarkable notion of an antiparticle to each fermion particle – e.g. the positron as antiparticle to the electron – stems from his equation. He was the first to develop quantum field theory, which underlies all theoretical work on sub-atomic or "elementary" particles today, work that is fundamental to our understanding of the forces of nature. He proposed and investigated the concept of a magnetic monopole, an object not yet known empirically, as a means of bringing even greater symmetry to James Clerk Maxwell's equations of electromagnetism.
He quantised the gravitational field, and developed a general theory of quantum field theories with dynamical constraints, which forms the basis of the gauge theories and superstring theories of today. The influence and importance of his work have increased with the decades, and physicists use the concepts and equations that he developed daily.
Dirac's first step into a new quantum theory was taken late in September 1925. Ralph Fowler, his research supervisor, had received a proof copy of an exploratory paper by Werner Heisenberg in the framework of the old quantum theory of Bohr and Sommerfeld. Heisenberg leaned heavily on Bohr's correspondence principle but changed the equations so that they involved directly observable quantities, leading to the matrix formulation of quantum mechanics. Fowler sent Heisenberg's paper on to Dirac, who was on vacation in Bristol, asking him to look into this paper carefully.
Dirac's attention was drawn to a mysterious mathematical relationship, at first sight unintelligible, that Heisenberg had reached. Several weeks later, back in Cambridge, Dirac suddenly recognised that this mathematical form had the same structure as the Poisson brackets that occur in the classical dynamics of particle motion. From this thought, he quickly developed a quantum theory that was based on non-commuting dynamical variables. This led him to a more profound and significant general formulation of quantum mechanics than was achieved by any other worker in this field. Dirac's formulation allowed him to obtain the quantisation rules in a novel and more illuminating manner. For this work, published in 1926, Dirac received a PhD from Cambridge. This formed the basis for Fermi–Dirac statistics, which applies to systems consisting of many identical spin-1/2 particles (i.e. those that obey the Pauli exclusion principle), e.g. electrons in solids and liquids, and importantly to the field of conduction in semiconductors.
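The structural correspondence with Poisson brackets that Dirac recognised can be stated compactly in modern notation (a standard rendering, given here for clarity rather than as a quotation of his paper): the commutator of two quantum observables equals iħ times the operator corresponding to the classical Poisson bracket,

    \hat{A}\hat{B} - \hat{B}\hat{A} = i\hbar \, \widehat{\{A, B\}}

which, applied to position and momentum, yields the canonical commutation relation [x̂, p̂] = iħ.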
Dirac was famously not bothered by issues of interpretation in quantum theory. In fact, in a paper published in a book in his honour, he wrote: "The interpretation of quantum mechanics has been dealt with by many authors, and I do not want to discuss it here. I want to deal with more fundamental things."
In 1928, building on 2×2 spin matrices which he purported to have discovered independently of Wolfgang Pauli's work on non-relativistic spin systems (Dirac told Abraham Pais, "I believe I got these [matrices] independently of Pauli and possibly Pauli got these independently of me."), he proposed the Dirac equation as a relativistic equation of motion for the wave function of the electron. This work led Dirac to predict the existence of the positron, the electron's antiparticle, which he interpreted in terms of what came to be called the "Dirac sea". The positron was observed by Carl Anderson in 1932. Dirac's equation also contributed to explaining the origin of quantum spin as a relativistic phenomenon.
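For reference, the equation itself can be stated compactly; in modern covariant notation, with natural units ħ = c = 1 and the standard gamma matrices (rather than Dirac's original α and β matrices), the free Dirac equation reads:

    \left( i \gamma^{\mu} \partial_{\mu} - m \right) \psi(x) = 0

The gamma matrices satisfy the anticommutation relations {γ^μ, γ^ν} = 2η^{μν}, which force ψ to have four components; the "extra" negative-energy solutions are what Dirac reinterpreted in terms of antiparticles.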
The necessity of fermions (matter) being created and destroyed in Enrico Fermi's 1934 theory of beta decay led to a reinterpretation of Dirac's equation as a "classical" field equation for any point particle of spin "ħ"/2, itself subject to quantisation conditions involving anti-commutators. Thus reinterpreted, in 1934 by Werner Heisenberg, as a (quantum) field equation accurately describing all elementary matter particles – today quarks and leptons – this Dirac field equation is as central to theoretical physics as the Maxwell, Yang–Mills and Einstein field equations. Dirac is regarded as the founder of quantum electrodynamics, being the first to use that term. He also introduced the idea of vacuum polarisation in the early 1930s. This work was key to the development of quantum mechanics by the next generation of theorists, in particular Schwinger, Feynman, Sin-Itiro Tomonaga and Dyson in their formulation of quantum electrodynamics.
Dirac's "The Principles of Quantum Mechanics", published in 1930, is a landmark in the history of science. It quickly became one of the standard textbooks on the subject and is still used today. In that book, Dirac incorporated the previous work of Werner Heisenberg on matrix mechanics and of Erwin Schrödinger on wave mechanics into a single mathematical formalism that associates measurable quantities to operators acting on the Hilbert space of vectors that describe the state of a physical system. The book also introduced the Dirac delta function. Following his 1939 article, he also included the bra–ket notation in the third edition of his book, thereby contributing to its universal use nowadays.
In 1931, Dirac proposed that the existence of a single magnetic monopole in the universe would suffice to explain the quantisation of electrical charge. In 1975, 1982 and 2009, intriguing results suggested the possible detection of magnetic monopoles, but there is, to date, no direct evidence for their existence (see also Searches for magnetic monopoles).
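The core of the 1931 argument is the Dirac quantisation condition, stated here in Gaussian units for reference:

    \frac{e \, g}{\hbar c} = \frac{n}{2}, \qquad n \in \mathbb{Z}

so the existence of even a single monopole of magnetic charge g would force every electric charge e to be an integer multiple of a fixed unit.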
Dirac was the Lucasian Professor of Mathematics at Cambridge from 1932 to 1969. In 1937, he proposed a speculative cosmological model based on the so-called large numbers hypothesis. During World War II, he conducted important theoretical and experimental research on uranium enrichment by gas centrifuge.
Dirac's quantum electrodynamics (QED) made predictions that were – more often than not – infinite and therefore unacceptable. A workaround known as renormalisation was developed, but Dirac never accepted this. "I must say that I am very dissatisfied with the situation", he said in 1975, "because this so-called 'good theory' does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it is small – not neglecting it just because it is infinitely great and you do not want it!" His refusal to accept renormalisation resulted in his work on the subject moving increasingly out of the mainstream.
However, from his once rejected notes, he managed to put quantum electrodynamics on "logical foundations" based on the Hamiltonian formalism that he had formulated. In 1963 he found a rather novel way of deriving afresh the anomalous magnetic moment "Schwinger term" and also the Lamb shift, using the Heisenberg picture and without using the joining method used by Weisskopf and French, and by the two pioneers of modern QED, Schwinger and Feynman. That was two years before the Tomonaga–Schwinger–Feynman QED was given formal recognition by the award of the Nobel Prize in physics.
Weisskopf and French (FW) were the first to obtain the correct result for the Lamb shift and the anomalous magnetic moment of the electron. At first, the FW results did not agree with the incorrect, but independently obtained, results of Feynman and Schwinger. The 1963–1964 lectures Dirac gave on quantum field theory at Yeshiva University were published in 1966 as Belfer Graduate School of Science Monograph Series Number 3. After having relocated to Florida to be near his elder daughter, Mary, Dirac spent his last fourteen years (of both life and physics research) at the University of Miami in Coral Gables, Florida, and Florida State University in Tallahassee, Florida.
In the 1950s in his search for a better QED, Paul Dirac developed the Hamiltonian theory of constraints based on lectures that he delivered at the 1949 International Mathematical Congress in Canada. Dirac had also solved the problem of putting the Schwinger–Tomonaga equation into the Schrödinger representation and given explicit expressions for the scalar meson field (spin zero pion or pseudoscalar meson), the vector meson field (spin one rho meson), and the electromagnetic field (spin one massless boson, photon).
The Hamiltonian of constrained systems is one of Dirac's many masterpieces. It is a powerful generalisation of Hamiltonian theory that remains valid for curved spacetime. The equations for the Hamiltonian involve only six degrees of freedom, described by the metric components g_rs and their conjugate momenta p^rs, for each point of the surface on which the state is considered. The components g_m0 ("m" = 0, 1, 2, 3) appear in the theory only through the variables g^r0 and (−g^00)^(−1/2), which occur as arbitrary coefficients in the equations of motion.
There are four constraints, or weak equations, for each point of the surface x^0 = constant: H_L ≈ 0 and H_r ≈ 0 ("r" = 1, 2, 3). The three H_r form a four-vector density in the surface; the fourth, H_L, is a 3-dimensional scalar density in the surface.
In the late 1950s, he applied the Hamiltonian methods he had developed to cast Einstein's general relativity in Hamiltonian form and to bring to a technical completion the quantisation problem of gravitation and bring it also closer to the rest of physics according to Salam and DeWitt. In 1959 he also gave an invited talk on "Energy of the Gravitational Field" at the New York Meeting of the American Physical Society. In 1964 he published his "Lectures on Quantum Mechanics" (London: Academic) which deals with constrained dynamics of nonlinear dynamical systems including quantisation of curved spacetime. He also published a paper entitled "Quantization of the Gravitational Field" in the 1967 ICTP/IAEA Trieste Symposium on Contemporary Physics.
From September 1970 to January 1971, Dirac was Visiting Professor at Florida State University in Tallahassee. During that time he was offered a permanent position there, which he accepted, becoming a full professor in 1972. Contemporary accounts of his time there describe it as happy except that he apparently found the summer heat oppressive and liked to escape from it to Cambridge.
He would walk about a mile to work each day and was fond of swimming in one of the two nearby lakes (Silver Lake and Lost Lake), and was also more sociable than he had been at Cambridge, where he mostly worked at home apart from giving classes and seminars; at FSU he would usually eat lunch with his colleagues before taking a nap.
Dirac published over 60 papers in those last twelve years of his life, including a short book on general relativity. His last paper (1984), entitled "The inadequacies of quantum field theory," contains his final judgment on quantum field theory: "These rules of renormalisation give surprisingly, excessively good agreement with experiments. Most physicists say that these working rules are, therefore, correct. I feel that is not an adequate reason. Just because the results happen to be in agreement with observation does not prove that one's theory is correct." The paper ends with the words: "I have spent many years searching for a Hamiltonian to bring into the theory and have not yet found it. I shall continue to work on it as long as I can and other people, I hope, will follow along such lines."
Amongst his many students were Homi J. Bhabha, Fred Hoyle, John Polkinghorne and Freeman Dyson. Polkinghorne recalls that Dirac "was once asked what was his fundamental belief. He strode to a blackboard and wrote that the laws of nature should be expressed in beautiful equations."
In 1975, Dirac gave a series of five lectures at the University of New South Wales which were subsequently published as a book, "Directions in Physics" (1978). He donated the royalties from this book to the university for the establishment of the Dirac Lecture Series. The Silver Dirac Medal for the Advancement of Theoretical Physics is awarded by the University of New South Wales to commemorate the lecture.
Immediately after his death, two organisations of professional physicists established annual awards in Dirac's memory. The Institute of Physics, the United Kingdom's professional body for physicists, awards the Paul Dirac Medal for "outstanding contributions to theoretical (including mathematical and computational) physics". The first three recipients were Stephen Hawking (1987), John Stewart Bell (1988), and Roger Penrose (1989). The International Centre for Theoretical Physics awards the Dirac Medal of the ICTP each year on Dirac's birthday (8 August).
The Dirac-Hellman Award at Florida State University was endowed by Dr Bruce P. Hellman in 1997 to reward outstanding work in theoretical physics by FSU researchers. The Paul A.M. Dirac Science Library at Florida State University, which Manci opened in December 1989, is named in his honour, and his papers are held there. Outside is a statue of him by Gabriella Bollobás. The street on which the National High Magnetic Field Laboratory in Innovation Park of Tallahassee, Florida, is located is named Paul Dirac Drive. As well as in his hometown of Bristol, there is also a road named after him, Dirac Place, in Didcot, Oxfordshire. The BBC named a video codec, Dirac, in his honour.
An asteroid discovered in 1983 was named after Dirac. The Distributed Research utilising Advanced Computing (DiRAC) and Dirac software are named in his honour. | https://en.wikipedia.org/wiki?curid=24742 |
Shamanism
Shamanism is a religious practice that involves a practitioner, a shaman, who is believed to interact with a spirit world through altered states of consciousness, such as trance. The goal of this is usually to direct these spirits or spiritual energies into the physical world, for healing or some other purpose.
Beliefs and practices that have been categorized as "shamanic" have attracted the interest of scholars from a wide variety of disciplines, including anthropologists, archaeologists, historians, religious studies scholars, philosophers and psychologists. Hundreds of books and academic papers on the subject have been produced, with a peer-reviewed academic journal being devoted to the study of shamanism. In the 20th century, many Westerners involved in counter-cultural movements have created modern magico-religious practices influenced by their ideas of Indigenous religions from across the world, creating what has been termed "neoshamanism" or the neoshamanic movement. It has affected the development of many neopagan practices, as well as faced a backlash and accusations of cultural appropriation, exploitation and misrepresentation when outside observers have tried to represent cultures to which they do not belong. Although the term has been used to describe indigenous spiritual practices, some have critiqued the term shamanism as a generalizing descriptor of complex and diverse spiritual practices that are specific to different indigenous nations and tribes. Use of the term may impose simplicity on diverse and complex indigenous cultures, reinforce racist ideas, and perpetuate notions of "other" from a colonial perspective.
The word "shamanism" probably derives from the Manchu-Tungus word "šaman", meaning "one who knows". The word "shaman" may also have originated from the Evenki word "šamán", most likely from the southwestern dialect spoken by the Sym Evenki peoples. The Tungusic term was subsequently adopted by Russians interacting with the Indigenous peoples in Siberia. It is found in the memoirs of the exiled Russian churchman Avvakum.
The word was brought to Western Europe in the late 17th century by the Dutch traveler Nicolaes Witsen, who reported his stay and journeys among the Tungusic- and Samoyedic-speaking Indigenous peoples of Siberia in his book "Noord en Oost Tataryen" (1692). Adam Brand, a merchant from Lübeck, published in 1698 his account of a Russian embassy to China; a translation of his book, published the same year, introduced the word "shaman" to English speakers.
The etymology of the Evenki word is sometimes connected to a Tungus root "ša-" "to know". This has been questioned on linguistic grounds: "The possibility cannot be completely rejected, but neither should it be accepted without reservation since the assumed derivational relationship is phonologically irregular (note especially the vowel quantities)." Other scholars assert that the word comes directly from the Manchu language, and as such would be the only commonly used English word that is a loan from this language.
However, Mircea Eliade noted that the Sanskrit word "śramaṇa", designating a wandering monastic or holy figure, has spread to many Central Asian languages along with Buddhism and could be the ultimate origin of the Tungusic word. This proposal has been thoroughly critiqued since 1917. Ethnolinguist Juha Janhunen regards it as an "anachronism" and an "impossibility" that is nothing more than a "far-fetched etymology".
Twenty-first-century anthropologist and archeologist Silvia Tomaskova argues that by the mid-1600s, many Europeans applied the Arabic term "shaitan" (meaning "devil") to the non-Christian practices and beliefs of Indigenous peoples beyond the Ural Mountains. She suggests that "shaman" may have entered the various Tungus dialects as a corruption of this term, and then been told to Christian missionaries, explorers, soldiers and colonial administrators with whom the people had increasing contact for centuries.
A female shaman is sometimes called a "shamanka", which is not an actual Tungus term but simply "shaman" plus the Russian suffix "-ka" (for feminine nouns).
There is no single agreed-upon definition for the word "shamanism" among anthropologists. The English historian Ronald Hutton noted that by the dawn of the 21st century, there were four separate definitions of the term which appeared to be in use. The first of these uses the term to refer to "anybody who contacts a spirit world while in an altered state of consciousness." The second definition limits the term to refer to those who contact a spirit world while in an altered state of consciousness at the behest of others. The third definition attempts to distinguish shamans from other magico-religious specialists who are believed to contact spirits, such as "mediums", "witch doctors", "spiritual healers" or "prophets," by claiming that shamans undertake some particular technique not used by the others. Problematically, scholars advocating the third view have failed to agree on what the defining technique should be. The fourth definition identified by Hutton uses "shamanism" to refer to the Indigenous religions of Siberia and neighboring parts of Asia. According to the Golomt Center for Shamanic Studies, a Mongolian organisation of shamans, the Evenk word "shaman" would more accurately be translated as "priest".
According to the Oxford English Dictionary, a shaman is someone who is regarded as having access to, and influence in, the world of benevolent and malevolent spirits, who typically enters into a trance state during a ritual, and practices divination and healing. The word "shaman" probably originates from the Tungusic Evenki language of North Asia. According to ethnolinguist Juha Janhunen, "the word is attested in all of the Tungusic idioms" such as Negidal, Lamut, Udehe/Orochi, Nanai, Ilcha, Orok, Manchu and Ulcha, and "nothing seems to contradict the assumption that the meaning 'shaman' also derives from Proto-Tungusic" and may have roots that extend back in time at least two millennia. The term was introduced to the west after Russian forces conquered the shamanistic Khanate of Kazan in 1552.
The term "shamanism" was first applied by Western anthropologists as outside observers of the ancient religion of the Turks and Mongols, as well as those of the neighbouring Tungusic- and Samoyedic-speaking peoples. Upon observing more religious traditions across the world, some Western anthropologists began to also use the term in a very broad sense. The term was used to describe unrelated magico-religious practices found within the ethnic religions of other parts of Asia, Africa, Australasia and even completely unrelated parts of the Americas, as they believed these practices to be similar to one another. While the term has been incorrectly applied by cultural outsiders to many Indigenous spiritual practices, the words "shaman" and "shamanism" do not accurately describe the variety and complexity that is Indigenous spirituality. Each Nation and tribe has its own way of life, and uses terms in their own languages.
Mircea Eliade writes, "A first definition of this complex phenomenon, and perhaps the least hazardous, will be: shamanism = 'technique of religious ecstasy'." Shamanism encompasses the premise that shamans are intermediaries or messengers between the human world and the spirit worlds. Shamans are said to treat ailments and illnesses by mending the soul. Alleviating traumas affecting the soul or spirit is believed to restore the physical body of the individual to balance and wholeness. Shamans also claim to enter supernatural realms or dimensions to obtain solutions to problems afflicting the community. Shamans claim to visit other worlds or dimensions to bring guidance to misguided souls and to ameliorate illnesses of the human soul caused by foreign elements. Shamans operate primarily within the spiritual world, which, they believe, in turn affects the human world. The restoration of balance is said to result in the elimination of the ailment.
Shamanism is a system of religious practice. Historically, it is often associated with Indigenous and tribal societies, and involves belief that shamans, with a connection to the otherworld, have the power to heal the sick, communicate with spirits, and escort souls of the dead to the afterlife. Shamanism is especially associated with the Native Peoples of Siberia in northern Asia, where shamanic practice has been noted for centuries by Asian and Western visitors. It was once widely practiced in Europe, Asia, Tibet, North and South America, and Africa, and centered on the belief in supernatural phenomena such as the world of gods, demons, and ancestral spirits.
Despite structural implications of colonialism and imperialism that have limited the ability of Indigenous Peoples to practice traditional spiritualities, many communities are undergoing resurgence through self-determination and the reclamation of dynamic traditions. Other groups have been able to avoid some of these structural impediments by virtue of their isolation, such as the nomadic Tuvans, an estimated 3,000 of whom survive. Tuva is one of the most isolated regions of Russia, and the art of shamanism has been preserved there until today in part because its isolation kept it free from the influences of other major religions.
Shamans often claim to have been called through dreams or signs. However, some say their powers are inherited. In traditional societies shamanic training varies in length, but generally takes years.
Turner and colleagues mention a phenomenon called "shamanistic initiatory crisis", a rite of passage for shamans-to-be, commonly involving physical illness or psychological crisis. The significant role of initiatory illnesses in the calling of a shaman can be found in the detailed case history of Chuonnasuan, who was the last master shaman among the Tungus peoples in Northeast China.
The wounded healer is an archetype for a shamanic trial and journey. This process is important to young shamans. They undergo a type of sickness that pushes them to the brink of death. This is said to happen for two reasons: the shaman crosses over to the underworld, and the shaman must become sick to understand sickness; by overcoming their own sickness, the shaman is believed to hold the cure to heal all who suffer.
Though the importance of spiritual roles in many cultures cannot be overlooked, the degree to which such roles are comparable (and even classifiable under one term) is questionable. In fact, scholars have argued that such universalist classifications paint Indigenous societies as primitive while exemplifying the civility of Western societies. That being said, shamans have been conceptualized as those who are able to gain knowledge and power to heal in the spiritual world or dimension. Most shamans have dreams or visions that convey certain messages. Shamans may claim to have or have acquired many spirit guides, who they believe guide and direct them in their travels in the spirit world. These spirit guides are always thought to be present within the shaman, although others are said to encounter them only when the shaman is in a trance. The spirit guide energizes the shamans, enabling them to enter the spiritual dimension. Shamans claim to heal within the communities and the spiritual dimension by returning lost parts of the human soul from wherever they have gone. Shamans also claim to cleanse excess negative energies, which are said to confuse or pollute the soul.
Shamans act as mediators in their cultures. Shamans claim to communicate with the spirits on behalf of the community, including the spirits of the deceased. Shamans believe they can communicate with both living and dead to alleviate unrest, unsettled issues, and to deliver gifts to the spirits.
Among the Selkups, the sea duck is a spirit animal. Ducks fly in the air and dive in the water and are thus believed to belong to both the upper world and the world below. Among other Siberian peoples, these characteristics are attributed to waterfowl in general. The upper world is the afterlife primarily associated with deceased humans and is believed to be accessed by soul journeying through a portal in the sky. The lower world or "world below" is the afterlife primarily associated with animals and is believed to be accessed by soul journeying through a portal in the earth. In shamanic cultures, many animals are regarded as spirit animals.
Shamans perform a variety of functions depending upon their respective cultures; healing, leading a sacrifice, preserving traditions by storytelling and songs, fortune-telling, and acting as a psychopomp ("guide of souls"). A single shaman may fulfill several of these functions.
The functions of a shaman may include guiding the souls of the dead to their proper abode (one at a time or in a group, depending on the culture) and curing ailments. The ailments may be purely physical afflictions, such as disease, which are claimed to be cured by gifting, flattering, threatening, or wrestling the disease-spirit (sometimes trying all of these in sequence). The cure may be completed by displaying a supposedly extracted token of the disease-spirit; even if "fraudulent", the display is supposed to impress the disease-spirit that it has been, or is in the process of being, defeated, so that it will retreat and stay out of the patient's body. The ailments may also be mental (including psychosomatic) afflictions, such as persistent terror, which are likewise believed to be cured by similar methods. In most languages a term different from the one translated as "shaman" is usually applied to a religious official leading sacrificial rites ("priest") or to a raconteur ("sage") of traditional lore; there may, however, be more overlap with the functions of a shaman in the case of an interpreter of omens or of dreams.
There are distinct types of shamans who perform more specialized functions. For example, among the Nani people, a distinct kind of shaman acts as a psychopomp. Other specialized shamans may be distinguished according to the type of spirits, or realms of the spirit world, with which the shaman most commonly interacts. These roles vary among the Nenets, Enets, and Selkup shamans.
The assistant of an Oroqen shaman (called "jardalanin", or "second spirit") knows many things about the associated beliefs. He or she accompanies the rituals and interprets the behaviors of the shaman. Despite these functions, the "jardalanin" is not a shaman, and it would be unwelcome for this interpretive assistant to fall into a trance.
Among the Tucano people, a sophisticated system exists for environmental resources management and for avoiding resource depletion through overhunting. This system is conceptualized mythologically and symbolically by the belief that breaking hunting restrictions may cause illness. As the primary teacher of tribal symbolism, the shaman may have a leading role in this ecological management, actively restricting hunting and fishing. The shaman is able to "release" game animals, or their souls, from their hidden abodes. The Piaroa people have ecological concerns related to shamanism. Among the Inuit, shamans fetch the souls of game from remote places, or soul travel to ask for game from mythological beings like the Sea Woman.
The way shamans get sustenance and take part in everyday life varies across cultures. In many Inuit groups, they provide services for the community and receive a "due payment", and believe the payment is given to the helping spirits. One account states that the gifts and payments a shaman receives are given by his partner spirit: since it obliges the shaman to use his gift and to work regularly in this capacity, the spirit rewards him with the goods it receives. These goods, however, are only "welcome addenda"; they are not enough to support a full-time shaman. Shamans live like any other member of the group, as a hunter or housewife. Due to the popularity of ayahuasca tourism in South America, there are practitioners in areas frequented by backpackers who make a living from leading ceremonies.
There are many variations of shamanism throughout the world, but Eliade (1972) identified several common beliefs shared by all forms of shamanism.
As Alice Kehoe notes, Eliade's conceptualization of shamans produces a universalist image of Indigenous cultures, which perpetuates notions of the dead (or dying) Indian as well as the noble savage.
Shamanism is based on the premise that the visible world is pervaded by invisible forces or spirits that affect the lives of the living. Although the causes of disease are held to lie in the spiritual realm, inspired by malicious spirits, both spiritual and physical methods are used to heal. Commonly, a shaman "enters the body" of the patient to confront the spiritual infirmity and heals by banishing the infectious spirit.
Many shamans have expert knowledge of medicinal plants native to their area, and an herbal treatment is often prescribed. In many places shamans learn directly from the plants, harnessing their effects and healing properties, after obtaining permission from the indwelling or patron spirits. In the Peruvian Amazon Basin, shamans and "curanderos" use medicine songs called "icaros" to evoke spirits. Before a spirit can be summoned it must teach the shaman its song. The use of totemic items such as rocks with special powers and an animating spirit is common.
Such practices are presumably very ancient. Plato wrote in his "Phaedrus" that the "first prophecies were the words of an oak", and that those who lived at that time found it rewarding enough to "listen to an oak or a stone, so long as it was telling the truth".
Belief in witchcraft and sorcery, known as "brujería" in Latin America, exists in many societies. Other societies assert all shamans have the power to both cure and kill. Those with shamanic knowledge usually enjoy great power and prestige in the community, but they may also be regarded suspiciously or fearfully as potentially harmful to others.
By engaging in their work, a shaman is exposed to significant personal risk as shamanic plant materials can be toxic or fatal if misused. Spells are commonly used in an attempt to protect against these dangers, and the use of more dangerous plants is often very highly ritualized.
Generally, shamans traverse the axis mundi and enter the "spirit world" by effecting a transition of consciousness, entering into an ecstatic trance, either autohypnotically or through the use of entheogens or ritual performances. The methods employed are diverse, and are often used together.
An entheogen ("generating the divine within") is a psychoactive substance used in a religious, shamanic, or spiritual context. Entheogens have been used in a ritualized context for thousands of years; their religious significance is well established in anthropological and modern evidence. Examples of traditional entheogens include: peyote, psilocybin and Amanita muscaria (fly agaric) mushrooms, uncured tobacco, cannabis, ayahuasca, "Salvia divinorum", iboga, and Mexican morning glory.
Some shamans observe dietary or customary restrictions particular to their tradition. These restrictions are more than just cultural; for example, the diet followed by shamans and apprentices prior to participating in an ayahuasca ceremony includes foods rich in tryptophan (a biosynthetic precursor to serotonin), avoids foods rich in tyramine (which could induce a hypertensive crisis if ingested with the MAOIs found in ayahuasca brews), and requires abstinence from alcohol and sex.
Entheogens have a substantial history of commodification, especially in the realm of spiritual tourism. For instance, countries such as Brazil and Peru have faced an influx of tourists since the psychedelic era beginning in the late 1960s, initiating what has been termed "ayahuasca tourism."
Just like shamanism itself, music and songs related to it in various cultures are diverse. In several instances, songs related to shamanism are intended to imitate natural sounds, via onomatopoeia.
Sound mimesis in various cultures may serve other functions not necessarily related to shamanism: practical goals such as luring game in the hunt; or entertainment (Inuit throat singing).
Shamans may employ varying materials in spiritual practice in different cultures.
There are two major frameworks among cognitive and evolutionary scientists for explaining shamanism. The first, proposed by anthropologist Michael Winkelman, is known as the "neurotheological theory". According to Winkelman, shamanism develops reliably in human societies because it provides valuable benefits to the practitioner, their group, and individual clients. In particular, the trance states induced by dancing, hallucinogens, and other triggers are hypothesized to have an "integrative" effect on cognition, allowing communication among mental systems that specialize in theory of mind, social intelligence, and natural history. With this cognitive integration, the shaman can better predict the movement of animals, resolve group conflicts, plan migrations, and provide other useful services.
The neurotheological theory contrasts with the "by-product" or "subjective" model of shamanism developed by Harvard anthropologist Manvir Singh. According to Singh, shamanism is a cultural technology that adapts to (or hacks) our psychological biases to convince us that a specialist can influence important but uncontrollable outcomes. Citing work on the psychology of magic and superstition, Singh argues that humans search for ways of influencing uncertain events, such as healing illness, controlling rain, or attracting animals. As specialists compete to help their clients control these outcomes, they drive the evolution of psychologically compelling magic, producing traditions adapted to people's cognitive biases. Shamanism, Singh argues, is the culmination of this cultural evolutionary process—a psychologically appealing method for controlling uncertainty. For example, some shamanic practices exploit our intuitions about humanness: Practitioners use trance and dramatic initiations to seemingly become entities distinct from normal humans and thus more apparently capable of interacting with the invisible forces believed to oversee important outcomes. Influential cognitive and anthropological scientists such as Pascal Boyer and Nicholas Humphrey have endorsed Singh's approach, although other researchers have criticized Singh's dismissal of individual- and group-level benefits.
David Lewis-Williams explains the origins of shamanic practice, and some of its precise forms, through aspects of human consciousness evinced in cave art and LSD experiments alike.
Gerardo Reichel-Dolmatoff relates these concepts to developments in the ways that modern science (systems theory, ecology, new approaches in anthropology and archeology) treats causality in a less linear fashion. He also suggests a cooperation of modern science and Indigenous lore.
Shamanic practices may originate as early as the Paleolithic, predating all organized religions, and certainly as early as the Neolithic period. The earliest known undisputed burial of a shaman (and by extension the earliest undisputed evidence of shamans and shamanic practices) dates back to the early Upper Paleolithic era (c. 30,000 BP) in what is now the Czech Republic.
Sanskrit scholar and comparative mythologist Michael Witzel proposes that all of the world's mythologies, and also the concepts and practices of shamans, can be traced to the migrations of two prehistoric populations: the "Gondwana" type (of circa 65,000 years ago) and the "Laurasian" type (of circa 40,000 years ago).
In November 2008, researchers from the Hebrew University of Jerusalem announced the discovery of a 12,000-year-old site in Israel that is perceived as one of the earliest-known shaman burials. The elderly woman had been arranged on her side, with her legs apart and folded inward at the knee. Ten large stones were placed on the head, pelvis, and arms. Among her unusual grave goods were 50 complete tortoise shells, a human foot, and certain body parts from animals such as a cow tail and eagle wings. Other animal remains came from a boar, leopard, and two martens. "It seems that the woman … was perceived as being in a close relationship with these animal spirits", researchers noted. The grave was one of at least 28 graves at the site, located in a cave in lower Galilee and belonging to the Natufian culture, but is said to be unlike any other among the Epipaleolithic Natufians or in the Paleolithic period.
A debated etymology of the word "shaman" is "one who knows", implying, among other things, that the shaman is an expert in keeping together the multiple codes of the society, and that to be effective, shamans must maintain a comprehensive view in their mind which gives them certainty of knowledge. According to this view, the shaman uses (and the audience understands) multiple codes, expressing meanings in many ways: verbally, musically, artistically, and in dance. Meanings may be manifested in objects such as amulets. If the shaman knows the culture of their community well, and acts accordingly, their audience will know the used symbols and meanings and therefore trust the shamanic worker.
There are also semiotic, theoretical approaches to shamanism, and examples of "mutually opposing symbols" in academic studies of Siberian lore, distinguishing a "white" shaman who contacts sky spirits for good aims by day from a "black" shaman who contacts evil spirits for bad aims by night. (Series of such opposing symbols referred to a world-view behind them; analogously to the way grammar arranges words to express meanings and convey a world, this also formed a cognitive map.) A shaman's lore is rooted in the folklore of the community, which provides a "mythological mental map". Juha Pentikäinen uses the concept of a "grammar of mind".
Armin Geertz coined and introduced the hermeneutic, or "ethnohermeneutic", approach to interpretation. Hoppál extended the term to include not only the interpretation of oral and written texts, but that of "visual texts as well (including motions, gestures and more complex rituals, and ceremonies performed, for instance, by shamans)". This approach reveals the animistic views underlying shamanism, as well as their relevance to the contemporary world, where ecological problems have validated paradigms of balance and protection.
Shamanism is believed to be declining around the world, possibly due to the influence of other organized religions, such as Christianity, that seek to convert practitioners of shamanism to their own systems and doctrines. Another reason is Western views of shamanism as primitive, superstitious, backward and outdated. Whalers who frequently interacted with Inuit tribes were one source of this decline in that region.
In many areas, former shamans ceased to fulfill the functions they had once performed in their communities, as they felt mocked by their own community, or regarded their own past as deprecated and were unwilling to talk about it to ethnographers.
Moreover, besides personal communications of former shamans, folklore texts may narrate a deterioration process directly. For example, a Buryat epic text details the wonderful deeds of the ancient "first shaman" Kara-Gürgän: he could even compete with God, create life, and steal back the soul of the sick from God without his consent. A subsequent text laments that shamans of older times were stronger, possessing capabilities like omnividence and fortune-telling even for decades into the future, and moving as fast as a bullet.
In most affected areas, shamanic practices ceased to exist, with authentic shamans dying and their personal experiences dying with them. The loss of memories is not always lessened by the fact that the shaman is not the only person in a community who knows the beliefs and motives related to the local shamanhood. Although the shaman is often believed and trusted precisely because he or she "accommodates" to the beliefs of the community, several parts of the knowledge related to the local shamanhood consist of the personal experiences of the shaman, or are rooted in his or her family life, and thus are lost with his or her death. Besides that, in many cultures the entire traditional belief system has become endangered (often together with a partial or total language shift): as the other people of the community who remembered the associated beliefs and practices (or the language at all) grew old or died, many folklore memories, songs, and texts were forgotten. This may threaten even peoples who could preserve their isolation until the middle of the 20th century, like the Nganasan.
Some areas could enjoy a prolonged resistance due to their remoteness.
Although the decline has reached even the most remote areas, revitalization and tradition-preserving efforts have emerged in response, including the collecting of memories, some led by authentic former shamans (for example among the Sakha people and Tuvans). However, according to Richard L. Allen, research and policy analyst for the Cherokee Nation, the Cherokee are overwhelmed with fraudulent shamans ("plastic medicine people"). "One may assume that anyone claiming to be a Cherokee 'shaman, spiritual healer, or pipe-carrier', is equivalent to a modern day medicine show and snake-oil vendor." One indicator of a plastic shaman might be someone who discusses "Native American spirituality" but does not mention any specific Native American tribe.
Besides tradition-preserving efforts, there are also neoshamanistic movements, which may differ from many traditional shamanistic practices and beliefs in several respects. Admittedly, several traditional belief systems do have ecological considerations (for example, many Inuit peoples), and among the Tucano people the shaman does have direct resource-protecting roles.
Today, shamanism survives primarily among Indigenous peoples. Shamanic practices continue today in the tundras, jungles, deserts, and other rural areas, and even in cities, towns, suburbs, and shantytowns all over the world. This is especially true for Africa and South America, where "mestizo shamanism" is widespread.
The anthropologist Alice Kehoe criticizes the term "shaman" in her book "Shamans and Religion: An Anthropological Exploration in Critical Thinking". Part of this criticism involves the notion of cultural appropriation. This includes criticism of New Age and modern Western forms of shamanism, which, according to Kehoe, misrepresent or dilute Indigenous practices. Kehoe also believes that the term reinforces racist ideas such as the noble savage.
Kehoe is highly critical of Mircea Eliade's work on shamanism as an invention synthesized from various sources unsupported by more direct research. To Kehoe, citing ritualistic practices (most notably drumming, trance, chanting, entheogens and hallucinogens, spirit communication and healing) as definitive of shamanism is poor practice. Such citations ignore the fact that those practices exist outside of what is defined as shamanism and play similar roles even in non-shamanic cultures (such as the role of chanting in Judeo-Christian and Islamic rituals), and that in their expression they are unique to each culture that uses them. Such practices cannot be generalized easily, accurately, or usefully into a global religion of shamanism. Because of this, Kehoe is also highly critical of the hypothesis that shamanism is an ancient, unchanged, and surviving religion from the Paleolithic period.
The term has been criticized for its colonial roots and as a tool to perpetuate contemporary linguistic colonialism. Western scholars use the term "shamanism" to refer to a variety of cultures and practices around the world, which differ greatly between Indigenous cultures. Billy-Ray Belcourt, an author and award-winning scholar from the Driftpile Cree Nation in Canada, argues that using language with the intention of simplifying a culture that is diverse, such as shamanism, which is prevalent in communities around the world and is made up of many complex components, works to conceal the complexities of the social and political violence that Indigenous communities have experienced at the hands of settlers. Belcourt argues that language used to imply "simplicity" in regards to Indigenous culture is a tool used to belittle Indigenous cultures, as it views Indigenous communities solely as the product of a history embroiled in violence, leaving Indigenous communities capable only of simplicity and plainness.
Anthropologist Mihály Hoppál also discusses whether the term "shamanism" is appropriate. He notes that for many readers, "-ism" implies a particular dogma, like Buddhism or Judaism. He recommends using the term "shamanhood" or "shamanship" (a term used in old Russian and German ethnographic reports at the beginning of the 20th century) for stressing the diversity and the specific features of the discussed cultures. He believes that this places more stress on the local variations and emphasizes that shamanism is not a religion of sacred dogmas, but linked to the everyday life in a practical way. Following similar thoughts, he also conjectures a contemporary paradigm shift. Piers Vitebsky also mentions that, despite really astonishing similarities, there is no unity in shamanism. The various, fragmented shamanistic practices and beliefs coexist with other beliefs everywhere. There is no record of pure shamanistic societies (although their existence is not impossible). Norwegian social anthropologist Hakan Rydving has likewise argued for the abandonment of the terms "shaman" and "shamanism" as "scientific illusions."
Dulam Bumochir has affirmed the above critiques of "shamanism" as a Western construct created for comparative purposes and, in an extensive article, has documented the role of Mongols themselves, particularly "the partnership of scholars and shamans in the reconstruction of shamanism" in post-1990/post-communist Mongolia. This process has also been documented by Swiss anthropologist Judith Hangartner in her landmark study of Darhad shamans in Mongolia. Historian Karena Kollmar-Polenz argues that the social construction and reification of shamanism as a religious "other" actually began with the 18th-century writings of Tibetan Buddhist monks in Mongolia and later "probably influenced the formation of European discourse on Shamanism". | https://en.wikipedia.org/wiki?curid=26861 |
Sexology
Sexology is the scientific study of human sexuality, including human sexual interests, behaviors, and functions. The term "sexology" does not generally refer to the non-scientific study of sexuality, such as political science or social criticism.
Sexologists apply tools from several academic fields, such as biology, medicine, psychology, epidemiology, sociology, and criminology. Topics of study include sexual development (puberty), sexual orientation, gender identity, sexual relationships, sexual activities, paraphilias, and atypical sexual interests. It also includes the study of sexuality across the lifespan, including child sexuality, puberty, adolescent sexuality, and sexuality among the elderly. Sexology also spans sexuality among the mentally and/or physically disabled. The sexological study of sexual dysfunctions and disorders, including erectile dysfunction, anorgasmia, and pedophilia, is also a mainstay.
Sexual manuals have existed since antiquity, such as Ovid's "Ars Amatoria", the "Kama Sutra" of Vatsyayana, the "Ananga Ranga" and "The Perfumed Garden for the Soul's Recreation". "De la prostitution dans la ville de Paris" ("Prostitution in the City of Paris"), an early 1830s study of 3,558 registered prostitutes in Paris by Alexandre Jean-Baptiste Parent-Duchâtelet (published in 1837, a year after his death), has been called the first work of modern sex research.
The scientific study of sexual behavior in human beings began in the 19th century. Shifts in Europe's national borders at that time brought into conflict laws that were sexually liberal and laws that criminalized behaviors such as homosexual activity.
Despite the prevailing social attitude of sexual repression in the Victorian era, the movement towards sexual emancipation began towards the end of the nineteenth century in England and Germany. In 1886, Richard Freiherr von Krafft-Ebing published "Psychopathia Sexualis". That work is considered to have established sexology as a scientific discipline.
In England, the founding father of sexology was the doctor and sexologist Havelock Ellis who challenged the sexual taboos of his era regarding masturbation and homosexuality and revolutionized the conception of sex in his time. His seminal work was the 1897 "Sexual Inversion", which describes the sexual relations of homosexual males, including men with boys. Ellis wrote the first objective study of homosexuality (the term was coined by Karl-Maria Kertbeny), as he did not characterize it as a disease, immoral, or a crime. The work assumes that same-sex love transcended age taboos as well as gender taboos. Seven of his twenty-one case studies are of inter-generational relationships. He also developed other important psychological concepts, such as autoerotism and narcissism, both of which were later developed further by Sigmund Freud.
Ellis pioneered the study of transgender phenomena alongside the German Magnus Hirschfeld, establishing it as a new category separate and distinct from homosexuality. Aware of Hirschfeld's studies of transvestism, but disagreeing with his terminology, in 1913 Ellis proposed the term "sexo-aesthetic inversion" to describe the phenomenon.
In 1908, the first scholarly journal of the field, "Journal of Sexology" (Zeitschrift für Sexualwissenschaft), began publication and was published monthly for one year. Those issues contained articles by Freud, Alfred Adler, and Wilhelm Stekel. In 1913, the first academic association was founded: the "Society for Sexology".
Freud developed a theory of psychosexual development based on his studies of his clients between the late 19th and early 20th centuries. Its stages (oral, anal, phallic, latency and genital) run from infancy to puberty and onwards. Wilhelm Reich and Otto Gross were disciples of Freud, but were rejected because of their emphasis on the role of sexuality in the revolutionary struggle for the emancipation of mankind.
In pre-Nazi Germany, under the sexually liberal Napoleonic code, people organized and resisted the anti-sexual, Victorian cultural influences. The momentum from those groups led them to coordinate sex research across traditional academic disciplines, bringing Germany to the leadership of sexology. Physician Magnus Hirschfeld was an outspoken advocate for sexual minorities, founding the Scientific Humanitarian Committee, the first advocacy group for homosexual and transgender rights.
Hirschfeld also set up the first Institut für Sexualwissenschaft (Institute for Sexology) in Berlin in 1919. Its library housed over 20,000 volumes, 35,000 photographs, and a large collection of art and other objects. People from around Europe visited the Institute to gain a clearer understanding of their sexuality and to be treated for their sexual concerns and dysfunctions.
Hirschfeld developed a system which identified numerous actual or hypothetical types of sexual intermediary between heterosexual male and female to represent the potential diversity of human sexuality, and is credited with identifying a group of people, today referred to as transsexual or transgender, as separate from the categories of homosexuality; he referred to these people as 'transvestiten' (transvestites). Germany's dominance in sexual behavior research ended with the Nazi regime: the Institute was shut down and its library destroyed by the Nazis less than three months after they took power, on May 8, 1933, and Hirschfeld's books were burned.
Other sexologists in the early gay rights movement included Ernst Burchard and Benedict Friedlaender. Ernst Gräfenberg, after whom the G-spot is named, published the initial research developing the intrauterine device (IUD).
After World War II, sexology experienced a renaissance, both in the United States and Europe. Large scale studies of sexual behavior, sexual function, and sexual dysfunction gave rise to the development of sex therapy. Post-WWII sexology in the U.S. was influenced by the influx of European refugees escaping the Nazi regime and the popularity of the Kinsey studies. Until that time, American sexology consisted primarily of groups working to end prostitution and to educate youth about sexually transmitted diseases. Alfred Kinsey founded the Institute for Sex Research at Indiana University at Bloomington in 1947. This is now called the Kinsey Institute for Research in Sex, Gender and Reproduction. He wrote in his 1948 book that more was scientifically known about the sexual behavior of farm animals than of humans.
Psychologist and sexologist John Money developed theories on sexual identity and gender identity in the 1950s. His work, notably on the David Reimer case, has since been regarded as controversial, even while the case was key to the development of treatment protocols for intersex infants and children.
Kurt Freund developed the penile plethysmograph in Czechoslovakia in the 1950s. The device was designed to provide an objective measurement of sexual arousal in males and is currently used in the assessment of pedophilia and hebephilia. This tool has since been used with sex offenders.
In 1966 and 1970, Masters and Johnson released their works "Human Sexual Response" and "Human Sexual Inadequacy", respectively. The volumes sold well, and the pair went on to found what became known as the Masters & Johnson Institute in 1978.
Vern Bullough was a historian of sexology during this era, as well as being a researcher in the field.
The emergence of HIV/AIDS in the 1980s caused a dramatic shift in sexological research efforts towards understanding and controlling the spread of the disease.
Technological advances have permitted sexological questions to be addressed with studies using behavioral genetics, neuroimaging, and large-scale Internet-based surveys.
This is a list of sexologists and notable contributors to the field of sexology, by year of birth: | https://en.wikipedia.org/wiki?curid=26862 |
List of leaders of the Soviet Union
During its sixty-nine-year history, the Soviet Union usually had a "de facto" leader who would not necessarily be head of state, but would lead while holding an office such as Premier or General Secretary. Under the 1977 Constitution, the Chairman of the Council of Ministers, or Premier, was the head of government and the Chairman of the Presidium of the Supreme Soviet was the head of state. The office of the Chairman of the Council of Ministers was comparable to a prime minister in the First World whereas the office of the Chairman of the Presidium was comparable to a president. In the ideology of Vladimir Lenin, the head of the Soviet state was a collegiate body of the vanguard party (see "What Is To Be Done?").
Following Joseph Stalin's consolidation of power in the 1920s, the post of the General Secretary of the Central Committee of the Communist Party became synonymous with leader of the Soviet Union, because the post controlled both the Communist Party and the Soviet government, indirectly via party membership and via the tradition of a single person holding the two highest posts in the party and in the government. The post of the General Secretary was abolished in 1952 under Stalin and later re-established by Nikita Khrushchev under the name of First Secretary. In 1966, Leonid Brezhnev reverted the office title to its former name. Being the head of the Communist Party of the Soviet Union, the office of the General Secretary was the highest in the Soviet Union until 1990. The post of General Secretary lacked clear guidelines of succession, so after the death or removal of a Soviet leader the successor usually needed the support of the Politburo, the Central Committee, or another government or party apparatus to both take and stay in power. The President of the Soviet Union, an office created in March 1990, replaced the General Secretary as the highest Soviet political office.
Contemporaneously with the establishment of the office of the President, representatives of the Congress of People's Deputies voted to remove Article 6 from the Soviet Constitution, which stated that the Soviet Union was a one-party state controlled by the Communist Party, which in turn played the leading role in society. This vote weakened the party and its hegemony over the Soviet Union and its people. Upon the death, resignation, or removal from office of an incumbent President, the Vice President of the Soviet Union would assume the office, though the Soviet Union dissolved before this was actually tested. After the failed August 1991 coup, the Vice President was replaced by an elected member of the State Council of the Soviet Union.
Vladimir Lenin was voted the Chairman of the Council of People's Commissars of the Soviet Union (Sovnarkom) on 30 December 1922 by the Congress of Soviets. At the age of 53, his health declined from the effects of two bullet wounds, later aggravated by three strokes which culminated in his death in 1924. Irrespective of his health status in his final days, Lenin was already losing much of his power to Joseph Stalin. Alexei Rykov succeeded Lenin as Chairman of the Sovnarkom, and although he was "de jure" the most powerful person in the country, in fact all power was concentrated in the hands of the "troika", the union of three influential party figures: Grigory Zinoviev, Joseph Stalin and Lev Kamenev. Stalin continued to increase his influence in the party, and by the end of the 1920s he became the sole dictator of the USSR, defeating all his political opponents. The post of General Secretary of the party, which was held by Stalin, became the most important post in the Soviet hierarchy.
Stalin's early policies pushed for rapid industrialisation, nationalisation of private industry and the collectivisation of private plots created under Lenin's New Economic Policy. As leader of the Politburo, Stalin consolidated near-absolute power by 1938 after the Great Purge, a series of campaigns of political murder, repression and persecution. Nazi German troops invaded the Soviet Union in June 1941, but by December the Soviet Army managed to stop the attack just shy of Moscow. On Stalin's orders, the Soviet Union launched a counter-attack on Nazi Germany which finally succeeded in 1945. Stalin died in March 1953 and his death triggered a power struggle in which Nikita Khrushchev after several years emerged victorious against Georgy Malenkov.
Khrushchev denounced Stalin on two occasions, first in 1956 and then in 1962. His policy of de-Stalinisation earned him many enemies within the party, especially from old Stalinist appointees. Many saw this approach as destructive and destabilising. A group known as the Anti-Party Group tried to oust Khrushchev from office in 1957, but it failed. As Khrushchev grew older, his erratic behavior became worse, and he usually made decisions without discussing or confirming them with the Politburo. Leonid Brezhnev, a close companion of Khrushchev, was elected First Secretary on the same day as Khrushchev's removal from power. Alexei Kosygin became the new Premier and Anastas Mikoyan kept his office as Chairman of the Presidium of the Supreme Soviet. On the orders of the Politburo, Mikoyan was forced to retire in 1965 and Nikolai Podgorny took over the office of Chairman of the Presidium. The Soviet Union in the post-Khrushchev 1960s was governed by a collective leadership. Henry A. Kissinger, the American National Security Advisor, mistakenly believed that Kosygin was the leader of the Soviet Union and that he was at the helm of Soviet foreign policy because he represented the Soviet Union at the 1967 Glassboro Summit Conference. The "Era of Stagnation", a derogatory term coined by Mikhail Gorbachev, was a period marked by low socio-economic efficiency and a gerontocracy ruling the country. Yuri Andropov (aged 68 at the time) succeeded Brezhnev as General Secretary in 1982. In 1983, Andropov was hospitalised and, due to his declining health, rarely appeared at work to chair the Politburo meetings; Nikolai Tikhonov usually chaired the meetings in his place. Following Andropov's death fifteen months after his appointment, an even older leader, the 72-year-old Konstantin Chernenko, was elected to the General Secretariat. His rule lasted little more than a year, ending with his death on 10 March 1985.
At the age of 54, Mikhail Gorbachev was elected to the General Secretariat by the Politburo on 11 March 1985. In May 1985, Gorbachev publicly admitted the slowing down of the economic development and inadequate living standards, being the first Soviet leader to do so while also beginning a series of fundamental reforms. From 1986 to around 1988, he dismantled central planning, allowed state enterprises to set their own outputs, enabled private investment in businesses not previously permitted to be privately owned and allowed foreign investment, among other measures. He also opened up the management of and decision-making within the Soviet Union and allowed greater public discussion and criticism, along with a warming of relationships with the West. These twin policies were known as "perestroika" (literally meaning "reconstruction", though it varies) and "glasnost" ("openness" and "transparency"), respectively. The dismantling of the principal defining features of Soviet Communism in 1988 and 1989 in the Soviet Union led to the unintended consequence of the Soviet Union breaking up after the failed August 1991 coup led by Gennady Yanayev.
The following list includes persons who held the top leadership position of the Soviet Union from its founding in 1922 until its 1991 dissolution. Note that † denotes leaders who died in office.
On four occasions (the 2–3 year period between Vladimir Lenin's incapacitation and Joseph Stalin's leadership; the three months following Stalin's death; the interval between Nikita Khrushchev's fall and Leonid Brezhnev's consolidation of power; and the ailing Konstantin Chernenko's tenure as General Secretary), a form of oligarchy known as a troika ("triumvirate") governed the Soviet Union, with no individual holding complete control over its policies.
The youngest leader upon taking office was Joseph Stalin, aged 45 in 1924. The oldest upon taking office was Konstantin Chernenko (72 years old), and the oldest at the time of losing power was Leonid Brezhnev (75 years old). The shortest-lived was Vladimir Lenin (53 years old), while Mikhail Gorbachev (89 years old, living) has lived the longest. Stalin ruled the longest (29 years); Georgy Malenkov spent the shortest time in power (183 days). | https://en.wikipedia.org/wiki?curid=26865 |
Seafood
Seafood is any form of sea life regarded as food by humans, prominently including fish and shellfish. Shellfish include various species of molluscs (e.g. bivalve molluscs such as clams, oysters, and mussels and cephalopods such as octopus and squid), crustaceans (e.g. shrimp, crabs, and lobster), and echinoderms (e.g. sea cucumbers and sea urchins). Historically, marine mammals such as cetaceans (whales and dolphins) as well as seals have been eaten as food, though that happens to a lesser extent in modern times. Edible sea plants such as some seaweeds and microalgae are widely eaten as seafood around the world, especially in Asia. In the United States, although not generally in the United Kingdom, the term "seafood" is extended to fresh water organisms eaten by humans, so all edible aquatic life may be referred to as "seafood".
The harvesting of wild seafood is usually known as fishing or hunting, while the cultivation and farming of seafood is known as aquaculture or fish farming (in the case of fish). Seafood is often colloquially distinguished from meat, although it is still animal in nature and is excluded from a vegetarian diet, as decided by groups like the Vegetarian Society after confusion surrounding pescetarianism. Seafood is an important source of (animal) protein in many diets around the world, especially in coastal areas.
Most of the seafood harvest is consumed by humans, but a significant proportion is used as fish food to farm other fish or rear farm animals. Some seafoods (e.g. kelp) are used as food for other plants (as a fertilizer). In these ways, seafoods are used to produce further food for human consumption. Also, products such as fish oil and spirulina tablets are extracted from seafoods. Some seafood is fed to aquarium fish, or used to feed domestic pets such as cats. A small proportion is used in medicine, or is used industrially for nonfood purposes (e.g. leather).
The harvesting, processing, and consuming of seafoods are ancient practices with archaeological evidence dating back well into the Paleolithic. Findings in a sea cave at Pinnacle Point in South Africa indicate "Homo sapiens" (modern humans) harvested marine life as early as 165,000 years ago, while the Neanderthals, an extinct human species contemporary with early "Homo sapiens", appear to have been eating seafood at sites along the Mediterranean coast beginning around the same time. Isotopic analysis of the skeletal remains of Tianyuan man, a 40,000-year-old anatomically modern human from eastern Asia, has shown that he regularly consumed freshwater fish. Archaeological features such as shell middens, discarded fish bones and cave paintings show that sea foods were important for survival and consumed in significant quantities. During this period, most people lived a hunter-gatherer lifestyle and were, of necessity, constantly on the move. However, early examples of permanent settlements (though not necessarily permanently occupied), such as those at Lepenski Vir, were almost always associated with fishing as a major source of food.
The ancient river Nile was full of fish; fresh and dried fish were a staple food for much of the population. The Egyptians had implements and methods for fishing and these are illustrated in tomb scenes, drawings, and papyrus documents. Some representations hint at fishing being pursued as a pastime.
Fishing scenes are rarely represented in ancient Greek culture, a reflection of the low social status of fishing. However, Oppian of Corycus, a Greek author, wrote a major treatise on sea fishing, the "Halieutica" or "Halieutika", composed between 177 and 180. This is the earliest such work to have survived to the modern day. The consumption of fish varied in accordance with the wealth and location of the household. In the Greek islands and on the coast, fresh fish and seafood (squid, octopus, and shellfish) were common. They were eaten locally but more often transported inland. Sardines and anchovies were regular fare for the citizens of Athens. They were sometimes sold fresh, but more frequently salted. A stele of the late 3rd century BCE from the small Boeotian city of Akraiphia, on Lake Copais, provides us with a list of fish prices. The cheapest was "skaren" (probably parrotfish) whereas Atlantic bluefin tuna was three times as expensive. Common salt water fish were yellowfin tuna, red mullet, ray, swordfish or sturgeon, a delicacy which was eaten salted. Lake Copais itself was famous in all Greece for its eels, celebrated by the hero of "The Acharnians". Other fresh water fish were pike-fish, carp and the less appreciated catfish.
Pictorial evidence of Roman fishing comes from mosaics. At a certain time the goatfish was considered the epitome of luxury, above all because its scales exhibit a bright red color when it dies out of water. For this reason these fish were occasionally allowed to die slowly at the table. There was even a recipe where this would take place "in garo", in the sauce. At the beginning of the Imperial era, however, this custom suddenly came to an end, which is why "mullus" in the feast of Trimalchio (see "the Satyricon") could be shown as a characteristic of the "parvenu", who bores his guests with an unfashionable display of dying fish.
In medieval times, seafood was less prestigious than other animal meats, and often seen as merely an alternative to meat on fast days. Still, seafood was the mainstay of many coastal populations. Kippers made from herring caught in the North Sea could be found in markets as far away as Constantinople. While large quantities of fish were eaten fresh, a large proportion was salted, dried, and, to a lesser extent, smoked. Stockfish, cod that was split down the middle, fixed to a pole and dried, was very common, though preparation could be time-consuming, and meant beating the dried fish with a mallet before soaking it in water. A wide range of mollusks including oysters, mussels and scallops were eaten by coastal and river-dwelling populations, and freshwater crayfish were seen as a desirable alternative to meat during fish days. Compared to meat, fish was much more expensive for inland populations, especially in Central Europe, and therefore not an option for most.
Modern knowledge of the reproductive cycles of aquatic species has led to the development of hatcheries and improved techniques of fish farming and aquaculture. Better understanding of the hazards of eating raw and undercooked fish and shellfish has led to improved preservation methods and processing.
The following table is based on the ISSCAAP classification (International Standard Statistical Classification of Aquatic Animals and Plants) used by the FAO for the purposes of collecting and compiling fishery statistics. The production figures have been extracted from the FAO FishStat database, and include both capture from wild fisheries and aquaculture production.
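As a rough sketch of how production figures of this kind are compiled, the following minimal Python snippet sums capture and aquaculture tonnages per ISSCAAP group. The group names and numbers below are invented placeholders for illustration, not FAO data.

    # Hypothetical sketch: total production per ISSCAAP group is the sum of
    # wild capture and aquaculture tonnages. All figures are placeholders.
    records = [
        # (ISSCAAP group, capture tonnes, aquaculture tonnes)
        ("Herrings, sardines, anchovies", 20_000_000, 0),
        ("Oysters", 150_000, 4_500_000),
    ]
    for group, capture, aquaculture in records:
        print(f"{group}: {capture + aquaculture:,} tonnes total")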
Fish is a highly perishable product: the "fishy" smell of dead fish is due to the breakdown of amino acids into biogenic amines and ammonia.
Live food fish are often transported in tanks at high expense for an international market that prefers its seafood killed immediately before it is cooked. Delivery of live fish without water is also being explored. While some seafood restaurants keep live fish in aquaria for display purposes or for cultural beliefs, the majority of live fish are kept for dining customers. The live food fish trade in Hong Kong, for example, is estimated to have driven imports of live food fish to more than 15,000 tonnes in 2000. Worldwide sales that year were estimated at US$400 million, according to the World Resources Institute.
If the cool chain has not been adhered to correctly, food products generally decay and become harmful before the validity date printed on the package. As the potential harm for a consumer when eating rotten fish is much larger than for example with dairy products, the U.S. Food and Drug Administration (FDA) has introduced regulation in the USA requiring the use of a time temperature indicator on certain fresh chilled seafood products.
Fresh fish is a highly perishable food product, so it must be eaten promptly or discarded; it can be kept for only a short time. In many countries, fresh fish are filleted and displayed for sale on a bed of crushed ice or refrigerated. Fresh fish is most commonly found near bodies of water, but the advent of refrigerated train and truck transportation has made fresh fish more widely available inland.
Long term preservation of fish is accomplished in a variety of ways. The oldest and still most widely used techniques are drying and salting. Desiccation (complete drying) is commonly used to preserve fish such as cod. Partial drying and salting is popular for the preservation of fish like herring and mackerel. Fish such as salmon, tuna, and herring are cooked and canned. Most fish are filleted prior to canning, but some small fish (e.g. sardines) are only decapitated and gutted prior to canning.
Seafood is consumed all over the world; it provides the world's prime source of high-quality protein: 14–16% of the animal protein consumed worldwide; over one billion people rely on seafood as their primary source of animal protein. Fish is among the most common food allergens.
Iceland, Japan, and Portugal are the greatest consumers of seafood per capita in the world.
The UK Food Standards Agency recommends that at least two portions of seafood should be consumed each week, one of which should be oil-rich. There are over 100 different types of seafood available around the coast of the UK.
Oil-rich fish such as mackerel or herring are rich in long-chain Omega-3 oils. These oils are found in every cell of the human body, and are required for biological functions such as brain function.
Whitefish such as haddock and cod are very low in fat and calories which, combined with oily fish rich in Omega-3 such as mackerel, sardines, fresh tuna, salmon and trout, can help to protect against coronary heart disease, as well as helping to develop strong bones and teeth.
Shellfish are particularly rich in zinc, which is essential for healthy skin and muscles as well as fertility. Casanova reputedly ate 50 oysters a day.
Over 33,000 species of fish and many more marine invertebrate species have been described. Bromophenols, which are produced by marine algae, give marine animals an odor and taste that is absent from freshwater fish and invertebrates. Also, a chemical substance called dimethylsulfoniopropionate (DMSP) that is found in red and green algae is transferred to animals in the marine food chain. When it breaks down, dimethyl sulfide (DMS) is produced, and is often released during food preparation when fresh fish and shellfish are heated. In small quantities it creates a specific smell one associates with the ocean, but in larger quantities it gives the impression of rotten seaweed and old fish. Another molecule, trimethylamine N-oxide (TMAO), occurs in fishes and gives them a distinct smell. It also exists in freshwater species, but becomes more concentrated in the cells of an animal the deeper it lives, so that fish from the deeper parts of the ocean have a stronger taste than species that live in shallow water. Eggs from seaweed contain sex pheromones called dictyopterenes, which are meant to attract the sperm. These pheromones are also found in edible seaweeds, which contributes to their aroma. However, only a small number of species are commonly eaten by humans.
There is broad scientific consensus that docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA) found in seafood are beneficial to neurodevelopment and cognition, especially at young ages. The United Nations Food and Agriculture Organization has described fish as "nature's super food." Seafood consumption is associated with improved neurologic development during gestation and early childhood and more tenuously linked to reduced mortality from coronary heart disease.
The parts of fish containing essential fats and micronutrients, often cited as primary health benefits for eating seafood, are frequently discarded in the developed world. Micronutrients including calcium, potassium, selenium, zinc, and iodine are found in their highest concentrations in the head, intestines, bones, and scales.
There is some debate over the particular health benefits of fish, especially regarding the relationship between seafood consumption and cardiovascular health. However, government recommendations promoting limited seafood consumption are relatively unified. The US Food and Drug Administration recommends moderate consumption of fish (4 oz for children and 8–12 oz for adults, weekly) as part of a healthy and balanced diet. The UK National Health Service gives similar advice, recommending at least 2 portions (about 10 oz) of fish weekly. The Chinese National Health Commission recommends slightly more, advising 10–20 oz of fish weekly.
There are numerous factors to consider when evaluating health hazards in seafood. These concerns include marine toxins, microbes, foodborne illness, radionuclide contamination, and man-made pollutants. Shellfish are among the more common food allergens. Most of these dangers can be mitigated or avoided with accurate knowledge of when and where seafood is caught. However, consumers have limited access to relevant and actionable information in this regard and the seafood industry's systemic problems with mislabelling make decisions about what is safe even more fraught.
Ciguatera fish poisoning (CFP) is an illness resulting from consuming toxins produced by dinoflagellates which bioaccumulate in the liver, roe, head, and intestines of reef fish. It is the most common disease associated with seafood consumption and poses the greatest risk to consumers. The population of plankton which produces these toxins varies significantly over time and location, as seen in red tides. Evaluating the risk of ciguatera in any given fish requires specific knowledge of its origin and life history, information which is often inaccurate or unavailable. While ciguatera is relatively widespread compared to other seafood-related health hazards (up to 50,000 people suffer from ciguatera every year), mortality is very low.
Fish and shellfish have a natural tendency to concentrate inorganic and organic toxins and pollutants in their bodies, including methylmercury, a highly toxic organic compound of mercury, polychlorinated biphenyls (PCBs), and microplastics. Species of fish that are high on the food chain, such as shark, swordfish, king mackerel, albacore tuna, and tilefish contain higher concentrations of these bioaccumulants. This is because bioaccumulants are stored in the muscle tissues of fish, and when a predatory fish eats another fish, it assumes the entire body burden of bioaccumulants in the consumed fish. Thus species that are high on the food chain amass body burdens of bioaccumulants that can be ten times higher than the species they consume. This process is called biomagnification.
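A toy calculation illustrates the compounding effect described above. The base concentration and the tenfold per-step factor in the sketch below are assumptions chosen only for illustration, not measured values.

    # Toy biomagnification model: each predator-prey step multiplies the
    # body burden by a fixed factor (assumed to be 10, per the text above).
    BASE_PPM = 0.01   # assumed methylmercury level in a low-trophic-level fish
    FACTOR = 10       # assumed magnification per trophic step

    def body_burden(steps: int) -> float:
        """Concentration after a given number of predator-prey steps."""
        return BASE_PPM * FACTOR ** steps

    for steps, fish in enumerate(["forage fish", "mackerel", "swordfish"]):
        print(f"{fish}: ~{body_burden(steps):.2f} ppm")
    # forage fish: ~0.01 ppm, mackerel: ~0.10 ppm, swordfish: ~1.00 ppm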
Man-made disasters can cause localized hazards in seafood which may spread widely via piscine food chains. The first occurrence of widespread mercury poisoning in humans occurred this way in the 1950s in Minamata, Japan. Wastewater from a nearby chemical factory released methylmercury that accumulated in fish, which were consumed by humans. Severe mercury poisoning is now known as Minamata disease. The 2011 Fukushima Daiichi Nuclear Power Plant disaster and the 1947–1991 Marshall Islands nuclear bomb testing led to dangerous radionuclide contamination of local sea life, which, in the latter case, persisted as of 2008.
A widely cited study in JAMA, which synthesized government reports, MEDLINE studies, and meta-analyses to evaluate the risks from methylmercury, dioxins, and polychlorinated biphenyls to cardiovascular health, as well as links between fish consumption and neurologic outcomes, concluded that: "The benefits of modest fish consumption (1-2 servings/wk) outweigh the risks among adults and, excepting a few selected fish species, among women of childbearing age. Avoidance of modest fish consumption due to confusion regarding risks and benefits could result in thousands of excess CHD [coronary heart disease] deaths annually and suboptimal neurodevelopment in children."
Due to the wide array of options in the seafood marketplace, seafood is far more susceptible to mislabeling than terrestrial food. There are more than 1,700 species of seafood in the United States' consumer marketplace, 80–90% of which are imported and less than 1% of which is tested for fraud. Estimates of mislabelled seafood in the United States range from 33% in general up to 86% for particular species.
Byzantine supply chains, frequent bycatch, brand naming, species substitution, and inaccurate ecolabels all contribute to confusion for the consumer. A 2013 study by Oceana found that one third of seafood sampled from the United States was incorrectly labelled. Snapper and tuna were particularly susceptible to mislabelling, and seafood substitution was the most common type of fraud. Another type of mislabelling is short-weighting, where practices such as overglazing or soaking can misleadingly increase the apparent weight of the fish. For supermarket shoppers, many seafood products are unrecognizable fillets. Without sophisticated DNA testing, there is no foolproof method to identify a fish species without their head, skin, and fins. This creates easy opportunities to substitute cheap products for expensive ones, a form of economic fraud.
Beyond financial concerns, significant health risks arise from hidden pollutants and marine toxins in an already fraught marketplace. Seafood fraud has led to widespread keriorrhea due to mislabeled escolar, mercury poisoning from products marketed as safe for pregnant women, and hospitalization and neurological damage due to mislabeled pufferfish. For example, a 2014 study published in PLOS One found that 15% of MSC certified Patagonian toothfish originated from uncertified and mercury polluted fisheries. These fishery-stock substitutions had 100% more mercury than their genuine counterparts, "vastly exceeding" limits in Canada, New Zealand, and Australia.
Research into population trends of various species of seafood points to a global collapse of seafood species by 2048. According to some researchers, such a collapse would occur due to pollution and overfishing, threatening oceanic ecosystems.
A major international scientific study released in November 2006 in the journal "Science" found that about one-third of all fishing stocks worldwide have collapsed (with a collapse being defined as a decline to less than 10% of their maximum observed abundance), and that if current trends continue all fish stocks worldwide will collapse within fifty years. In July 2009, Boris Worm of Dalhousie University, the author of the November 2006 study in "Science", co-authored an update on the state of the world's fisheries with one of the original study's critics, Ray Hilborn of the University of Washington at Seattle. The new study found that through good fisheries management techniques even depleted fish stocks can be revived and made commercially viable again.
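The collapse criterion used in the 2006 study reduces to a simple threshold test, sketched below in Python; the definition comes from the study, while the abundance series is hypothetical.

    # A stock counts as "collapsed" when its current abundance is below
    # 10% of the maximum ever observed for that stock.
    def is_collapsed(abundance: list[float]) -> bool:
        return abundance[-1] < 0.10 * max(abundance)

    stock_index = [100.0, 120.0, 90.0, 40.0, 11.0]  # hypothetical index values
    print(is_collapsed(stock_index))  # True, since 11 < 12 (10% of 120)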
The FAO State of World Fisheries and Aquaculture 2004 report estimates that in 2003, of the main fish stocks or groups of resources for which assessment information is available, "approximately one-quarter were overexploited, depleted or recovering from depletion (16%, 7% and 1% respectively) and needed rebuilding."
The National Fisheries Institute, a trade advocacy group representing the United States seafood industry, disagrees. It claims that currently observed declines in fish population are due to natural fluctuations and that enhanced technologies will eventually alleviate whatever impact humanity is having on oceanic life.
For the most part Islamic dietary laws allow the eating of seafood, though the Hanbali forbid eels, the Shafi forbid frogs and crocodiles, and the Hanafi forbid bottom feeders such as shellfish and carp. The Jewish laws of Kashrut forbid the eating of shellfish and eels. In the Old Testament, the Mosaic Covenant allowed the Israelites to eat finfish, but shellfish and eels were an abomination and not allowed. In ancient and medieval times, the Catholic Church forbade the practice of eating meat, eggs and dairy products during Lent. Thomas Aquinas argued that these "afford greater pleasure as food [than fish], and greater nourishment to the human body, so that from their consumption there results a greater surplus available for seminal matter, which when abundant becomes a great incentive to lust." In the United States, the Catholic practice of abstaining from meat on Fridays during Lent has popularized the Friday fish fry, and parishes often sponsor a fish fry during Lent. In predominantly Roman Catholic areas, restaurants may adjust their menus during Lent by adding seafood items to the menu. | https://en.wikipedia.org/wiki?curid=26866 |
SI base unit
The SI base units are the standard units of measurement defined by the International System of Units (SI) for the seven base quantities of what is now known as the International System of Quantities: they are notably a basic set from which all other SI units can be derived. The units and their physical quantities are the second for time, the metre for measurement of length, the kilogram for mass, the ampere for electric current, the kelvin for temperature, the mole for amount of substance, and the candela for luminous intensity. The SI base units are a fundamental part of modern metrology, and thus part of the foundation of modern science and technology.
The SI base units form a set of mutually independent dimensions as required by dimensional analysis commonly employed in science and technology.
The names and symbols of SI base units are written in lowercase, except the symbols of those named after a person, which are written with an initial capital letter. For example, the "metre" (US English: "meter") has the symbol m, but the "kelvin" has symbol K, because it is named after Lord Kelvin; likewise, the "ampere", with symbol A, is named after André-Marie Ampère.
A number of other units, such as the litre (US English: "liter"), astronomical unit and electronvolt, are not formally part of the SI, but are accepted for use with SI.
On 20 May 2019, as the final act of the 2019 redefinition of the SI base units, the BIPM officially introduced the following new definitions, replacing the preceding definitions of the SI base units.
New definitions of the base units were approved on 16 November 2018, and took effect 20 May 2019. The definitions of the base units have been modified several times since the Metre Convention in 1875, and new additions of base units have occurred. Since the redefinition of the metre in 1960, the kilogram had been the only base unit still defined directly in terms of a physical artefact, rather than a property of nature. This led to a number of the other SI base units being defined indirectly in terms of the mass of the same artefact; the mole, the ampere, and the candela were linked through their definitions to the mass of the International Prototype of the Kilogram, a roughly golfball-sized platinum–iridium cylinder stored in a vault near Paris.
It has long been an objective in metrology to define the kilogram in terms of a fundamental constant, in the same way that the metre is now defined in terms of the speed of light. The 21st General Conference on Weights and Measures (CGPM, 1999) placed these efforts on an official footing, and recommended "that national laboratories continue their efforts to refine experiments that link the unit of mass to fundamental or atomic constants with a view to a future redefinition of the kilogram". Two possibilities attracted particular attention: the Planck constant and the Avogadro constant.
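To illustrate the approach, here is a sketch (supplied for illustration, not quoted from the original text) of how fixing the numerical value of the Planck constant, together with the existing definitions of the second and the metre, determines the kilogram; the value shown is the exact one eventually adopted in 2019:

```latex
% With the second and metre already defined, fixing h fixes the kilogram,
% because the joule-second is a derived unit: 1 J s = 1 kg m^2 s^{-1}.
h = 6.626\,070\,15 \times 10^{-34}\ \mathrm{kg\,m^{2}\,s^{-1}}
\quad\Longrightarrow\quad
1\ \mathrm{kg} = \left(\frac{h}{6.626\,070\,15 \times 10^{-34}}\right)\mathrm{m^{-2}\,s}
```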
In 2005, the International Committee for Weights and Measures (CIPM) approved preparation of new definitions for the kilogram, the ampere, and the kelvin and it noted the possibility of a new definition of the mole based on the Avogadro constant. The 23rd CGPM (2007) decided to postpone any formal change until the next General Conference in 2011.
In a note to the CIPM in October 2009, Ian Mills, the President of the CIPM "Consultative Committee – Units" (CCU) catalogued the uncertainties of the fundamental constants of physics according to the current definitions and their values under the proposed new definition. He urged the CIPM to accept the proposed changes in the definition of the "kilogram", "ampere", "kelvin", and "mole" so that they are referenced to the values of the fundamental constants, namely the Planck constant ("h"), the electron charge ("e"), the Boltzmann constant ("k"), and the Avogadro constant ("N"A). This approach was approved in 2018, only after measurements of these constants were achieved with sufficient accuracy. | https://en.wikipedia.org/wiki?curid=26872 |
Second
The second (symbol: s, abbreviation: sec) is the base unit of time in the International System of Units (SI) (French: Système International d’unités), commonly understood and historically defined as 1⁄86,400 of a day – this factor derived from the division of the day first into 24 hours, then to 60 minutes and finally to 60 seconds each. Analog clocks and watches often have sixty tick marks on their faces, representing seconds (and minutes), and a "second hand" to mark the passage of time in seconds. Digital clocks and watches often have a two-digit seconds counter. The second is also part of several other units of measurement like meters per second for velocity, meters per second per second for acceleration, and cycles per second for frequency.
Although the historical definition of the unit was based on this division of the Earth's rotation cycle, the formal definition in the International System of Units (SI) is a much steadier timekeeper: it is defined by taking the fixed numerical value of the caesium frequency ∆"ν"Cs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9,192,631,770 when expressed in the unit Hz, which is equal to s⁻¹.
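Written out as an equation, the definition reads:

```latex
\Delta\nu_{\mathrm{Cs}} = 9\,192\,631\,770\ \mathrm{Hz} = 9\,192\,631\,770\ \mathrm{s^{-1}}
\quad\Longrightarrow\quad
1\ \mathrm{s} = \frac{9\,192\,631\,770}{\Delta\nu_{\mathrm{Cs}}}
```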
Because the Earth's rotation varies and is also slowing ever so slightly, a leap second is periodically added to clock time to keep clocks in sync with Earth's rotation.
Multiples of seconds are usually counted in hours and minutes. Fractions of a second are usually counted in tenths or hundredths. In scientific work, small fractions of a second are counted in milliseconds (thousandths), microseconds (millionths), nanoseconds (billionths), and sometimes smaller units of a second. An everyday experience with small fractions of a second is a 1-gigahertz microprocessor, which has a cycle time of 1 nanosecond. Camera shutter speeds are often expressed in fractions of a second.
Sexagesimal divisions of the day from a calendar based on astronomical observation have existed since the third millennium BC, though they were not seconds as we know them today. Small divisions of time could not be measured back then, so such divisions were mathematically derived. The first timekeepers that could count seconds accurately were pendulum clocks invented in the 17th century. Starting in the 1950s, atomic clocks became better timekeepers than earth's rotation, and they continue to set the standard today.
A mechanical clock, one which does not depend on measuring the relative rotational position of the earth, keeps uniform time called "mean time", within whatever accuracy is intrinsic to it. That means that every second, minute and every other division of time counted by the clock will be the same duration as any other identical division of time. But a sundial, which measures the relative position of the sun in the sky, called "apparent time", does not keep uniform time. The time kept by a sundial varies by time of year, meaning that seconds, minutes and every other division of time is a different duration at different times of the year. The time of day measured with mean time versus apparent time may differ by as much as 15 minutes, but a single day will differ from the next by only a small amount; 15 minutes is a cumulative difference over a part of the year. The effect is due chiefly to the obliquity of the Earth's axis with respect to its orbit around the sun.
The difference between apparent solar time and mean time was recognized by astronomers since antiquity, but prior to the invention of accurate mechanical clocks in the mid-17th century, sundials were the only reliable timepieces, and apparent solar time was the generally accepted standard.
Fractions of a second are usually denoted in decimal notation, for example 2.01 seconds, or two and one hundredth seconds. Multiples of seconds are usually expressed as minutes and seconds, or hours, minutes and seconds of clock time, separated by colons, such as 11:23:24, or 45:23 (the latter notation can give rise to ambiguity, because the same notation is used to denote hours and minutes). It rarely makes sense to express longer periods of time like hours or days in seconds, because they are awkwardly large numbers. For the metric unit of second, there are decimal prefixes representing 10⁻²⁴ to 10²⁴ seconds.
Some common units of time in seconds are: a minute is 60 seconds; an hour is 3,600 seconds; a day is 86,400 seconds; a week is 604,800 seconds; a year (other than leap years) is 31,536,000 seconds; and a (Gregorian) century averages 3,155,695,200 seconds; with all of the above excluding any possible leap seconds.
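These figures follow from simple arithmetic, as the following Python check (supplied for illustration, not part of the original text) confirms:

```python
# Verify the unit-of-time figures quoted above.
minute = 60
hour = 60 * minute             # 3,600 s
day = 24 * hour                # 86,400 s
week = 7 * day                 # 604,800 s
year = 365 * day               # 31,536,000 s (non-leap year)

# A Gregorian 400-year cycle contains 146,097 days, so an average
# Gregorian century is 146,097 / 4 = 36,524.25 days.
century = (146_097 / 4) * day  # 3,155,695,200 s

print(hour, day, week, year, int(century))
# 3600 86400 604800 31536000 3155695200
```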
Some common events in seconds are: a stone falls about 4.9 meters from rest in one second; a pendulum of length about one meter has a swing of one second, so pendulum clocks have pendulums about a meter long; the fastest human sprinters run 10 meters in a second; an ocean wave in deep water travels about 23 meters in one second; sound travels about 343 meters in one second in air; light takes 1.3 seconds to reach Earth from the surface of the Moon, a distance of 384,400 kilometers.
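A quick numerical check of several of these figures (a Python sketch with rounded constants, supplied for illustration):

```python
import math

g = 9.8  # gravitational acceleration in m/s^2 (rounded)

# Free fall from rest for one second: d = g * t**2 / 2
print(g * 1.0**2 / 2)                # 4.9 m

# A "seconds pendulum" swings (half-period) in one second, so T = 2 s.
# From T = 2*pi*sqrt(L/g), the length is L = g * (T / (2*pi))**2.
print(g * (2.0 / (2 * math.pi))**2)  # ~0.99 m, just under a metre

# Light from the Moon's surface: travel time = distance / speed of light
print(384_400e3 / 299_792_458)       # ~1.28 s
```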
A second is part of other units, such as frequency measured in hertz (inverse seconds or second−1), speed (meters per second) and acceleration (meters per second squared). The metric system unit becquerel, a measure of radioactive decay, is measured in inverse seconds. The meter is defined in terms of the speed of light and the second; definitions of the metric base units kilogram, ampere, kelvin, and candela also depend on the second. The only base unit whose definition does not depend on the second is the mole. Of the 22 named derived units of the SI, only two (radian and steradian), do not depend on the second. Many derivative units for everyday things are reported in terms of larger units of time, not seconds, such as clock time in hours and minutes, velocity of a car in kilometers per hour or miles per hour, kilowatt hours of electricity usage, and speed of a turntable in rotations per minute.
A set of atomic clocks throughout the world keeps time by consensus: the clocks "vote" on the correct time, and all voting clocks are steered to agree with the consensus, which is called International Atomic Time (TAI). TAI "ticks" atomic seconds.
Civil time is defined to agree with the rotation of the earth. The international standard for timekeeping is Coordinated Universal Time (UTC). This time scale "ticks" the same atomic seconds as TAI, but inserts or omits leap seconds as necessary to correct for variations in the rate of rotation of the earth.
A time scale in which the seconds are not exactly equal to atomic seconds is UT1, a form of universal time. UT1 is defined by the rotation of the earth with respect to the sun, and does not contain any leap seconds. UT1 always differs from UTC by less than a second.
While they are not yet part of any timekeeping standard, optical lattice clocks with frequencies in the visible light spectrum now exist and are the most accurate timekeepers of all. A strontium clock with frequency 430 THz, in the red range of visible light, now holds the accuracy record: it will gain or lose less than a second in 15 billion years, which is longer than the estimated age of the universe. Such a clock can measure a change in its elevation of as little as 2 cm by the change in its rate due to gravitational time dilation.
There have only ever been three definitions of the second: as a fraction of the day, as a fraction of an extrapolated year, and as the microwave frequency of a caesium atomic clock; each definition has realized the sexagesimal division of the day inherited from ancient astronomical calendars.
Civilizations in the classic period and earlier created divisions of the calendar as well as arcs using a sexagesimal system of counting, so at that time the second was a sexagesimal subdivision of the day (ancient second = 1⁄3,600 of a day), not of the hour like the modern second (= 1⁄3,600 of an hour). Sundials and water clocks were among the earliest timekeeping devices, and units of time were measured in degrees of arc. Conceptual units of time smaller than realizable on sundials were also used.
There are references to 'second' as part of a lunar month in the writings of natural philosophers of the Middle Ages; these were mathematical subdivisions that could not be measured mechanically.
The earliest mechanical clocks, which appeared starting in the 14th century, had displays that divided the hour into halves, thirds, quarters and sometimes even 12 parts, but never by 60. In fact, the hour was not commonly divided into 60 minutes, as it was not uniform in duration. It was not practical for timekeepers to consider minutes until the first mechanical clocks that displayed minutes appeared near the end of the 16th century. Mechanical clocks kept the mean time, as opposed to the apparent time displayed by sundials.
By that time, sexagesimal divisions of time were well established in Europe.
The earliest clocks to display seconds appeared during the last half of the 16th century. The second became accurately measurable with the development of mechanical clocks. The earliest spring-driven timepiece with a second hand which marked seconds is an unsigned clock depicting Orpheus in the Fremersdorf collection, dated between 1560 and 1570. During the 3rd quarter of the 16th century, Taqi al-Din built a clock with marks every 1/5 minute.
In 1579, Jost Bürgi built a clock for William of Hesse that marked seconds. In 1581, Tycho Brahe redesigned clocks that had displayed only minutes at his observatory so they also displayed seconds, even though those seconds were not accurate. In 1587, Tycho complained that his four clocks disagreed by plus or minus four seconds.
In 1656, Dutch scientist Christiaan Huygens invented the first pendulum clock. It had a pendulum length of just under a meter which gave it a swing of one second, and an escapement that ticked every second. It was the first clock that could accurately keep time in seconds. By the 1730s, 80 years later, John Harrison's maritime chronometers could keep time accurate to within one second in 100 days.
In 1832, Gauss proposed using the second as the base unit of time in his millimeter-milligram-second system of units. The British Association for the Advancement of Science (BAAS) in 1862 stated that "All men of science are agreed to use the second of mean solar time as the unit of time." BAAS formally proposed the CGS system in 1874, although this system was gradually replaced over the next 70 years by MKS units. Both the CGS and MKS systems used the same second as their base unit of time. MKS was adopted internationally during the 1940s, defining the second as 1⁄86,400 of a mean solar day.
Some time in the late 1940s, quartz crystal oscillator clocks with an operating frequency of ~100 kHz advanced to keep time with accuracy better than 1 part in 10⁸ over an operating period of a day. It became apparent that a consensus of such clocks kept better time than the rotation of the Earth. Metrologists also knew that Earth's orbit around the Sun (a year) was much more stable than Earth's rotation. This led to proposals as early as 1950 to define the second as a fraction of a year.
The Earth's motion was described in Newcomb's "Tables of the Sun" (1895), which provided a formula for estimating the motion of the Sun relative to the epoch 1900 based on astronomical observations made between 1750 and 1892. This resulted in adoption of an ephemeris time scale expressed in units of the sidereal year at that epoch by the IAU in 1952. This extrapolated timescale brings the observed positions of the celestial bodies into accord with Newtonian dynamical theories of their motion. In 1955, the tropical year, considered more fundamental than the sidereal year, was chosen by the IAU as the unit of time. The tropical year in the definition was not measured but calculated from a formula describing a mean tropical year that decreased linearly over time.
In 1956, the second was redefined in terms of a year relative to that epoch. The second was thus defined as "the fraction 1⁄31,556,925.9747 of the tropical year for 1900 January 0 at 12 hours ephemeris time". This definition was adopted as part of the International System of Units in 1960.
But even the best mechanical, electric motorized and quartz crystal-based clocks develop discrepancies, and virtually none are good enough to realize an ephemeris second. Far better for timekeeping is the natural and exact "vibration" in an energized atom. The frequency of vibration (i.e., radiation) is very specific depending on the type of atom and how it is excited. Since 1967, the second has been defined as exactly "the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom" (at a temperature of 0 K). This length of a second was selected to correspond exactly to the length of the ephemeris second previously defined. Atomic clocks use such a frequency to measure seconds by counting cycles per second at that frequency. Radiation of this kind is one of the most stable and reproducible phenomena of nature. The current generation of atomic clocks is accurate to within one second in a few hundred million years.
Atomic clocks now set the length of a second and the time standard for the world.
SI prefixes are commonly used for times shorter than one second, but rarely for multiples of a second. Instead, certain non-SI units are permitted for use in SI: minutes, hours, days, and in astronomy Julian years. | https://en.wikipedia.org/wiki?curid=26873 |
Metric prefix
A metric prefix is a unit prefix that precedes a basic unit of measure to indicate a multiple or fraction of the unit. While all metric prefixes in common use today are decadic, historically there have been a number of binary metric prefixes as well. Each prefix has a unique symbol that is prepended to the unit symbol. The prefix "kilo-", for example, may be added to "gram" to indicate "multiplication" by one thousand: one kilogram is equal to one thousand grams. The prefix "milli-", likewise, may be added to "metre" to indicate "division" by one thousand; one millimetre is equal to one thousandth of a metre.
Decimal multiplicative prefixes have been a feature of all forms of the metric system, with six of these dating back to the system's introduction in the 1790s. Metric prefixes have also been used with some non-metric units. The SI prefixes are metric prefixes that were standardized for use in the International System of Units (SI) by the International Bureau of Weights and Measures (BIPM) in resolutions dating from 1960 to 1991. Since 2009, they have formed part of the International System of Quantities. They are also used in the Unified Code for Units of Measure (UCUM).
The BIPM specifies twenty prefixes for the International System of Units (SI).
Each prefix name has a symbol that is used in combination with the symbols for units of measure. For example, the symbol for "kilo-" is k, and is used to produce km, kg, and kW, which are the SI symbols for kilometre, kilogram, and kilowatt, respectively. Except for the early prefixes of "kilo-", "hecto-", and "deca-", the symbols for the multiplicative prefixes are uppercase letters, and those for the fractional prefixes are lowercase letters. There is a Unicode symbol for micro "µ" for use if the Greek letter "μ" is unavailable. When both are unavailable, the visually similar lowercase Latin letter "u" is commonly used instead. SI unit symbols are never in italics.
Prefixes corresponding to an integer power of one thousand are generally preferred. Hence 100 m is preferred over 1 hm (hectometre) or 10 dam (decametres). The prefixes "hecto-", "deca-", "deci-", and "centi-" are commonly used for everyday purposes, and the centimetre (cm) is especially common. However, some modern building codes require that the millimetre be used in preference to the centimetre, because "use of centimetres leads to extensive usage of decimal points and confusion".
Prefixes may not be used in combination. This also applies to mass, for which the SI base unit (kilogram) already contains a prefix. For example, milligram (mg) is used instead of microkilogram (μkg).
In the arithmetic of measurements having units, the units are treated as multiplicative factors to values. If they have prefixes, all but one of the prefixes must be expanded to their numeric multiplier, except when combining values with identical units; a worked example is shown below.
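A worked example with illustrative values:

```latex
5\ \mathrm{km} \times 2\ \mathrm{mm}
  = 5 \times 10^{3}\ \mathrm{m} \times 2 \times 10^{-3}\ \mathrm{m}
  = 10\ \mathrm{m^2}
\qquad\text{but}\qquad
3\ \mathrm{km} + 2\ \mathrm{km} = 5\ \mathrm{km}
```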
When powers of units occur, for example, squared or cubed, the multiplicative prefix must be considered part of the unit, and thus included in the exponentiation, as in the example below.
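For instance, with illustrative values:

```latex
1\ \mathrm{km^2} = 1\ (\mathrm{km})^2 = (10^{3}\ \mathrm{m})^2 = 10^{6}\ \mathrm{m^2},
\qquad
1\ \mathrm{cm^3} = (10^{-2}\ \mathrm{m})^3 = 10^{-6}\ \mathrm{m^3}
```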
The use of prefixes can be traced back to the introduction of the metric system in the 1790s, long before the 1960 introduction of the SI. The prefixes, including those introduced after 1960, are used with any metric unit, whether officially included in the SI or not (e.g., millidynes and milligauss). Metric prefixes may also be used with non-metric units.
The choice of prefixes with a given unit is usually dictated by convenience of use. Unit prefixes for amounts that are much larger or smaller than those actually encountered are seldom used.
The units kilogram, gram, milligram, microgram, and smaller are commonly used for measurement of mass. However, megagram, gigagram, and larger are rarely used; tonnes (and kilotonnes, megatonnes, etc.) or scientific notation are used instead. Megagram and teragram are occasionally used to disambiguate the metric tonne from other units with the name "ton".
The kilogram is the only base unit of the International System of Units that includes a metric prefix.
The litre (equal to a cubic decimetre), millilitre (equal to a cubic centimetre), microlitre, and smaller are common. In Europe, the centilitre is often used for packaged products such as wine and the decilitre is used less frequently. Agricultural products, like grain, beer and wine, when in bulk are often measured in hectolitres (each 100 litres in size).
Larger volumes are usually denoted in kilolitres, megalitres or gigalitres, or else in cubic metres (1 cubic metre = 1 kilolitre) or cubic kilometres (1 cubic kilometre = 1 teralitre). For scientific purposes, the cubic metre is usually used.
The kilometre, metre, centimetre, millimetre, and smaller are common. (However, the decimetre is rarely used.) The micrometre is often referred to by the non-SI term "micron". In some fields, such as chemistry, the ångström (equal to 0.1 nm) historically competed with the nanometre. The femtometre, used mainly in particle physics, is sometimes called a fermi. For large scales, megametre, gigametre, and larger are rarely used. Instead, non-metric units are used, such as the solar radius, astronomical units, light years, and parsecs; the astronomical unit is mentioned in the SI standards as an accepted non-SI unit.
Prefixes for the SI standard unit second are most commonly encountered for quantities less than one second. For larger quantities, the system of minutes (60 seconds), hours (60 minutes) and days (24 hours) is accepted for use with the SI and more commonly used. When speaking of spans of time, the length of the day is usually standardized to 86,400 seconds so as not to create issues with the irregular leap second.
Larger multiples of the second such as kiloseconds and megaseconds are occasionally encountered in scientific contexts, but are seldom used in common parlance. For long-scale scientific work, particularly in astronomy, the Julian year or "annum" is a standardized variant of the year, equal to exactly 31,557,600 SI seconds (365 days, 6 hours). The unit is so named because it was the average length of a year in the Julian calendar. Long time periods are then expressed by using metric prefixes with the annum, such as megaannum or gigaannum.
The SI unit of angle is the radian, but degrees, minutes, and seconds see some scientific use.
Official policy also varies from common practice for the degree Celsius (°C). NIST states: "Prefix symbols may be used with the unit symbol °C and prefix names may be used with the unit name "degree Celsius". For example, 12 m°C (12 millidegrees Celsius) is acceptable." In practice, it is more common for prefixes to be used with the kelvin when it is desirable to denote extremely large or small absolute temperatures or temperature differences. Thus, temperatures of star interiors may be given in units of MK (megakelvins), and molecular cooling may be described in mK (millikelvins).
In use, the joule and kilojoule are common, with larger multiples seen in limited contexts. In addition, the kilowatt hour, a composite unit formed from the kilowatt and hour, is often used for electrical energy; other multiples can be formed by modifying the prefix of watt (e.g. terawatt hour).
There exist a number of definitions for the non-SI unit, the calorie. There are gram calories and kilogram calories. One kilogram calorie, which equals one thousand gram calories, often appears capitalized and without a prefix (i.e. "Cal") when referring to "dietary calories" in food. It is common to apply metric prefixes to the gram calorie, but not to the kilogram calorie: thus, 1 kcal = 1000 cal = 1 Cal.
Metric prefixes are widely used outside the system of metric units. Common examples include the megabyte and the decibel. Metric prefixes rarely appear with imperial or US units except in some special cases (e.g., microinch, kilofoot, kilopound). They are also used with other specialized units used in particular fields (e.g., megaelectronvolt, gigaparsec, millibarn). They are also occasionally used with currency units (e.g., gigadollar), mainly by people who are familiar with the prefixes from scientific usage. In astronomy, geology, and paleontology, the year, with symbol a (from the Latin "annus"), is commonly used with metric prefixes: ka, Ma, and Ga.
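The megabyte is a well-known source of ambiguity, since it has both a decimal reading (10⁶ bytes) and an informal binary reading (2²⁰ bytes, discussed near the end of this article). A short Python sketch of the divergence (illustrative values, not from the original text):

```python
# Decimal (SI) vs. binary interpretations of "kilo", "mega", "giga" for bytes.
decimal = {"kB": 10**3, "MB": 10**6, "GB": 10**9}
binary = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30}  # IEC binary prefixes

size = 1_500_000  # a file size in bytes
print(size / decimal["MB"], "MB")   # 1.5 MB    (decimal megabyte)
print(size / binary["MiB"], "MiB")  # ~1.43 MiB (binary mebibyte)

# The relative discrepancy grows with magnitude:
for (d_sym, d), (b_sym, b) in zip(decimal.items(), binary.items()):
    print(f"{b_sym}/{d_sym} = {b / d:.4f}")  # 1.0240, 1.0486, 1.0737
```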
Official policies about the use of SI prefixes with non-SI units vary slightly between the International Bureau of Weights and Measures (BIPM) and the American National Institute of Standards and Technology (NIST). For instance, the NIST advises that 'to avoid confusion, prefix symbols (and prefix names) are not used with the time-related unit symbols (names) min (minute), h (hour), d (day); nor with the angle-related symbols (names) ° (degree), ′ (minute), and ″ (second)', whereas the BIPM adds information about the use of prefixes with the symbol "as" for arcsecond when they state: 'However astronomers use milliarcsecond, which they denote mas, and microarcsecond, μas, which they use as units for measuring very small angles.'
When a metric prefix is affixed to a root word, the prefix carries the stress, while the root drops its stress but retains a full vowel in the syllable that is stressed when the root word stands alone. For example, "kilobyte" is stressed on the first syllable. However, units in common use outside the scientific community may be stressed idiosyncratically. In English-speaking countries, "kilometre" is the most conspicuous example: it is often stressed on the second syllable, with reduced vowels on both syllables of "metre". This stress is not applied to other multiples or sub-multiples of metre, or to other units prefixed with "kilo-".
The prefix "giga" is usually pronounced in English as , with hard ⟨g⟩ as in "get", but sometimes , with soft ⟨g⟩ as in "gin".
The LaTeX typesetting system features the "siunitx" package, in which units of measurement are spelled out as commands; for example, \SI{3}{\tera\hertz} formats as "3 THz".
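A minimal usage sketch (assuming a standard siunitx installation; the specific values are illustrative):

```latex
% A minimal siunitx example; values chosen for illustration.
\documentclass{article}
\usepackage{siunitx}
\begin{document}
\SI{3}{\tera\hertz}    % renders as "3 THz"
\SI{25}{\micro\watt}   % renders as "25 µW"
\si{\kilo\gram}        % renders the unit symbol "kg" alone
\end{document}
```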
Some of the prefixes formerly used in the metric system have fallen into disuse and were not adopted into the SI. The decimal prefix "myria-" (sometimes also written as "myrio-") (ten thousand) as well as the binary prefixes "double-" and "demi-", denoting a factor of 2 and 1⁄2 (one half), respectively, were parts of the original metric system adopted by France in 1795. These were not retained when the SI prefixes were internationally adopted by the 11th CGPM conference in 1960.
Other metric prefixes used historically include hebdo- (10⁷) and micri- (10⁻¹⁴).
Double prefixes have been used in the past, such as "micromillimetres" or 'millimicrons' (now nanometres), "micromicrofarads" (now picofarads), "kilomegatons" (now gigatons), "hectokilometres" (now 100 kilometres) and the derived adjective "hectokilometric" (typically used for qualifying the fuel consumption measures). These are not compatible with the SI.
Other obsolete double prefixes included "decimilli-" (10⁻⁴), which was contracted to "dimi-" and standardized in France up to 1961.
Although the "yotta-" prefix is large, the field of computer data is likely to approach values reaching or exceeding yottabytes in the future. Thus, various proposals for a prefix beyond "yotta-" have been put forth.
In 2010, UC Davis student Austin Sendek started a petition to designate "hella" as the SI prefix for one octillion (short scale; long scale: quadrilliard; 10²⁷). The petition gathered tens of thousands of supporters by circulating through Facebook and receiving media coverage. It was also adopted by Google and Wolfram Alpha.
A proposal made to the BIPM is ronna (R) for 10²⁷, quecca (Q) for 10³⁰, ronto (r) for 10⁻²⁷, and quecto (q) for 10⁻³⁰.
In written English, the symbol "K" is often used informally to indicate a multiple of thousand in many contexts. For example, one may talk of a "40K salary" (i.e. 40,000), or call the Year 2000 problem the "Y2K problem". In these cases, an uppercase K is often used with an implied unit (although it could then be confused with the symbol for the kelvin temperature unit if the context is unclear). This informal postfix is read or spoken as "thousand" or "grand", or just "k".
The financial and general news media mostly use m/M, b/B and t/T as abbreviations for million, billion (10⁹) and trillion (10¹²), respectively, for large quantities, typically currency and population.
The medical and automotive fields in the United States use the abbreviations "cc" or "ccm" for cubic centimetres. 1 cubic centimetre is equivalent to 1 millilitre.
For nearly a century, engineers used the abbreviation "MCM" to designate a "thousand circular mils" in specifying the cross-sectional area of large electrical cables. Since the mid-1990s, "kcmil" has been adopted as the official designation of a thousand circular mils, but the designation "MCM" still remains in wide use. A similar system is used in natural gas sales in the United States: "m" (or "M") for thousands and "mm" (or "MM") for millions of British thermal units or therms, and in the oil industry, where "MMbbl" is the symbol for "millions of barrels". This usage of the capital letter "M" for "thousand" is from Roman numerals, in which "M" means 1000.
In some fields of information technology, it has been common to designate non-decimal multiples based on powers of 1024, rather than 1000, for some SI prefixes ("kilo-", "mega-", "giga-"), contrary to the definitions in the International System of Units (SI). This practice was once sanctioned by some industry associations, including JEDEC. The International Electrotechnical Commission (IEC) standardized the system of binary prefixes ("kibi-", "mebi-", "gibi-", etc.) for this purpose. | https://en.wikipedia.org/wiki?curid=26874 |
Split (poker)
In poker it is sometimes necessary to split, or divide the pot among two or more players rather than awarding it all to a single player. This can happen because of ties, and also by playing intentional split-pot poker variants (the most typical of these is high-low split poker, where the high hand and low hand split the pot).
To split a pot, one player uses both hands to take the chips from the pot and make stacks, placing them side by side to compare height (and therefore value). Equal stacks are placed aside. If there is more than one denomination of chip in the pot, the largest value chip is done first, and then progressively smaller value chips. If there is an odd number of larger chips, smaller chips from the pot can be used to equalize stacks or make change as necessary. Pots are always split down to the lowest denomination of chip used in the game. Three-way ties or further splits can also be done this way.
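The procedure lends itself to a simple computation by value; the following Python sketch (function and example values are illustrative, not from any standard poker library) computes each player's share and the leftover odd chips:

```python
# A sketch of the pot-splitting procedure described above.
def split_pot(pot, ways=2):
    """Split a pot given as {denomination: chip_count}.

    Computes each player's share by value, rounded down to the lowest
    denomination in play (change-making with smaller chips is implicit),
    and returns (share_value, odd_chip_count), where the odd chips are
    counted in the lowest denomination."""
    total = sum(denom * count for denom, count in pot.items())
    low = min(pot)                             # lowest denomination in the pot
    share = (total // (ways * low)) * low      # equal share per player, by value
    odd_chips = (total - share * ways) // low  # leftover lowest-denomination chips
    return share, odd_chips

# Example: four 25-value chips and three 5-value chips (115 total), two ways:
print(split_pot({25: 4, 5: 3}))  # (55, 1) -> 55 each, one odd 5-chip remains
```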
After fully dividing a pot, there may be a single odd lowest-denomination chip remaining (or two odd chips if splitting three ways, etc.). Odd chips can be awarded in several ways, agreed upon before the beginning of the game; common rules include awarding the odd chip to the first player in clockwise order from the dealer or, in high-low split games, to the high hand.
Sometimes it is necessary to further split a half pot into quarters, or even smaller portions. This is especially common in community card high-low split games such as Omaha hold'em, where one player has the high hand and two or more players have tied low hands. Unfortunate players receiving such a fractional pot call it being "quartered". When this happens, an exception to the odd chip rules above can be made: if the high hand wins its half of the pot alone, and the low half is going to be quartered, the odd chip (if any) from the first split should be placed in the low half, rather than being awarded to the high hand. | https://en.wikipedia.org/wiki?curid=26882 |
Superconductivity
Superconductivity is the set of physical properties observed in certain materials where electrical resistance vanishes and magnetic flux fields are expelled from the material. Any material exhibiting these properties is a superconductor. Unlike an ordinary metallic conductor, whose resistance decreases gradually as its temperature is lowered even down to near absolute zero, a superconductor has a characteristic critical temperature below which the resistance drops abruptly to zero. An electric current through a loop of superconducting wire can persist indefinitely with no power source.
The superconductivity phenomenon was discovered in 1911 by Dutch physicist Heike Kamerlingh Onnes. Like ferromagnetism and atomic spectral lines, superconductivity is a quantum mechanical phenomenon. It is characterized by the Meissner effect, the complete ejection of magnetic field lines from the interior of the superconductor during its transitions into the superconducting state. The occurrence of the Meissner effect indicates that superconductivity cannot be understood simply as the idealization of "perfect conductivity" in classical physics.
In 1986, it was discovered that some cuprate-perovskite ceramic materials have a critical temperature above 90 K (−183 °C). Such a high transition temperature is theoretically impossible for a conventional superconductor, leading the materials to be termed high-temperature superconductors. The cheaply available coolant liquid nitrogen boils at 77 K, and thus the existence of superconductivity at higher temperatures than this facilitates many experiments and applications that are less practical at lower temperatures.
There are many criteria by which superconductors are classified. The most common are:
A superconductor can be "Type I", meaning it has a single critical field, above which all superconductivity is lost and below which the magnetic field is completely expelled from the superconductor; or "Type II", meaning it has two critical fields, between which it allows partial penetration of the magnetic field through isolated points. These points are called vortices. Furthermore, in multicomponent superconductors it is possible to have a combination of the two behaviours. In that case the superconductor is of Type-1.5.
It is "conventional" if it can be explained by the BCS theory or its derivatives, or "unconventional", otherwise.
A superconductor is generally considered "high-temperature" if it reaches a superconducting state above a temperature of 30 K (−243.15 °C), as in the initial discovery by Georg Bednorz and K. Alex Müller. It may also refer to materials that transition to superconductivity when cooled using liquid nitrogen – that is, at "Tc" > 77 K, although this is generally used only to emphasize that liquid nitrogen coolant is sufficient. Low-temperature superconductors refer to materials with a critical temperature below 30 K. One exception to this rule is the iron pnictide group of superconductors, which display behaviour and properties typical of high-temperature superconductors, yet some of the group have critical temperatures below 30 K.
Superconductor material classes include chemical elements (e.g. mercury or lead), alloys (such as niobium–titanium, germanium–niobium, and niobium nitride), ceramics (YBCO and magnesium diboride), superconducting pnictides (like fluorine-doped LaOFeAs) or organic superconductors (fullerenes and carbon nanotubes; though perhaps these examples should be included among the chemical elements, as they are composed entirely of carbon).
Most of the physical properties of superconductors vary from material to material, such as the heat capacity and the critical temperature, critical field, and critical current density at which superconductivity is destroyed.
On the other hand, there is a class of properties that are independent of the underlying material. For instance, all superconductors have "exactly" zero resistivity to low applied currents when there is no magnetic field present or if the applied field does not exceed a critical value. The existence of these "universal" properties implies that superconductivity is a thermodynamic phase, and thus possesses certain distinguishing properties which are largely independent of microscopic details.
The simplest method to measure the electrical resistance of a sample of some material is to place it in an electrical circuit in series with a current source "I" and measure the resulting voltage "V" across the sample. The resistance of the sample is given by Ohm's law as "R = V / I". If the voltage is zero, this means that the resistance is zero.
Superconductors are also able to maintain a current with no applied voltage whatsoever, a property exploited in superconducting electromagnets such as those found in MRI machines. Experiments have demonstrated that currents in superconducting coils can persist for years without any measurable degradation. Experimental evidence points to a current lifetime of at least 100,000 years. Theoretical estimates for the lifetime of a persistent current can exceed the estimated lifetime of the universe, depending on the wire geometry and the temperature. In practice, currents injected in superconducting coils have persisted for more than 23 years (as of August 2018) in superconducting gravimeters. In such instruments, the measurement principle is based on the monitoring of the levitation of a superconducting niobium sphere with a mass of 4 grams.
In a normal conductor, an electric current may be visualized as a fluid of electrons moving across a heavy ionic lattice. The electrons are constantly colliding with the ions in the lattice, and during each collision some of the energy carried by the current is absorbed by the lattice and converted into heat, which is essentially the vibrational kinetic energy of the lattice ions. As a result, the energy carried by the current is constantly being dissipated. This is the phenomenon of electrical resistance and Joule heating.
The situation is different in a superconductor. In a conventional superconductor, the electronic fluid cannot be resolved into individual electrons. Instead, it consists of bound "pairs" of electrons known as Cooper pairs. This pairing is caused by an attractive force between electrons from the exchange of phonons. Due to quantum mechanics, the energy spectrum of this Cooper pair fluid possesses an "energy gap", meaning there is a minimum amount of energy Δ"E" that must be supplied in order to excite the fluid. Therefore, if Δ"E" is larger than the thermal energy of the lattice, given by "kT", where "k" is Boltzmann's constant and "T" is the temperature, the fluid will not be scattered by the lattice. The Cooper pair fluid is thus a superfluid, meaning it can flow without energy dissipation.
In a class of superconductors known as type II superconductors, including all known high-temperature superconductors, an extremely low but nonzero resistivity appears at temperatures not too far below the nominal superconducting transition when an electric current is applied in conjunction with a strong magnetic field, which may be caused by the electric current. This is due to the motion of magnetic vortices in the electronic superfluid, which dissipates some of the energy carried by the current. If the current is sufficiently small, the vortices are stationary, and the resistivity vanishes. The resistance due to this effect is tiny compared with that of non-superconducting materials, but must be taken into account in sensitive experiments. However, as the temperature decreases far enough below the nominal superconducting transition, these vortices can become frozen into a disordered but stationary phase known as a "vortex glass". Below this vortex glass transition temperature, the resistance of the material becomes truly zero.
In superconducting materials, the characteristics of superconductivity appear when the temperature "T" is lowered below a critical temperature "T"c. The value of this critical temperature varies from material to material. Conventional superconductors usually have critical temperatures ranging from around 20 K to less than 1 K. Solid mercury, for example, has a critical temperature of 4.2 K. As of 2015, the highest critical temperature found for a conventional superconductor is 203 K for H2S, although high pressures of approximately 90 gigapascals were required. Cuprate superconductors can have much higher critical temperatures: YBa2Cu3O7, one of the first cuprate superconductors to be discovered, has a critical temperature above 90 K, and mercury-based cuprates have been found with critical temperatures in excess of 130 K. The basic physical mechanism responsible for the high critical temperature is not yet clear. However, it is clear that a two-electron pairing is involved, although the nature of the pairing (s-wave vs. d-wave) remains controversial.
Similarly, at a fixed temperature below the critical temperature, superconducting materials cease to superconduct when an external magnetic field is applied which is greater than the "critical magnetic field". This is because the Gibbs free energy of the superconducting phase increases quadratically with the magnetic field while the free energy of the normal phase is roughly independent of the magnetic field. If the material superconducts in the absence of a field, then the superconducting phase free energy is lower than that of the normal phase and so for some finite value of the magnetic field (proportional to the square root of the difference of the free energies at zero magnetic field) the two free energies will be equal and a phase transition to the normal phase will occur. More generally, a higher temperature and a stronger magnetic field lead to a smaller fraction of electrons that are superconducting and consequently to a longer London penetration depth of external magnetic fields and currents. The penetration depth becomes infinite at the phase transition.
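This relation between the free energies and the critical field is conventionally expressed as follows (a standard thermodynamic sketch in SI units, not quoted from the original text):

```latex
f_n(T) - f_s(T) = \frac{\mu_0 H_c^2(T)}{2}
\qquad\Longrightarrow\qquad
H_c(T) = \sqrt{\frac{2\,[f_n(T) - f_s(T)]}{\mu_0}}
```

Here f_n and f_s are the free energy densities of the normal and superconducting phases and μ0 is the vacuum permeability; as stated above, the critical field is proportional to the square root of the free-energy difference.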
The onset of superconductivity is accompanied by abrupt changes in various physical properties, which is the hallmark of a phase transition. For example, the electronic heat capacity is proportional to the temperature in the normal (non-superconducting) regime. At the superconducting transition, it suffers a discontinuous jump and thereafter ceases to be linear. At low temperatures, it varies instead as "e"−α/"T" for some constant, α. This exponential behavior is one of the pieces of evidence for the existence of the energy gap.
The order of the superconducting phase transition was long a matter of debate. Experiments indicate that the transition is second-order, meaning there is no latent heat. However, in the presence of an external magnetic field there is latent heat, because the superconducting phase has a lower entropy below the critical temperature than the normal phase. It has been experimentally demonstrated that, as a consequence, when the magnetic field is increased beyond the critical field, the resulting phase transition leads to a decrease in the temperature of the superconducting material.
Calculations in the 1970s suggested that it may actually be weakly first-order due to the effect of long-range fluctuations in the electromagnetic field. In the 1980s it was shown theoretically with the help of a disorder field theory, in which the vortex lines of the superconductor play a major role, that the transition is of second order within the type II regime and of first order (i.e., latent heat) within the type I regime, and that the two regions are separated by a tricritical point. The results were strongly supported by Monte Carlo computer simulations.
When a superconductor is placed in a weak external magnetic field H, and cooled below its transition temperature, the magnetic field is ejected. The Meissner effect does not cause the field to be completely ejected; instead, the field penetrates the superconductor, but only to a very small distance, characterized by a parameter "λ", called the London penetration depth, decaying exponentially to zero within the bulk of the material. The Meissner effect is a defining characteristic of superconductivity. For most superconductors, the London penetration depth is on the order of 100 nm.
The Meissner effect is sometimes confused with the kind of diamagnetism one would expect in a perfect electrical conductor: according to Lenz's law, when a "changing" magnetic field is applied to a conductor, it will induce an electric current in the conductor that creates an opposing magnetic field. In a perfect conductor, an arbitrarily large current can be induced, and the resulting magnetic field exactly cancels the applied field.
The Meissner effect is distinct from this—it is the spontaneous expulsion which occurs during transition to superconductivity. Suppose we have a material in its normal state, containing a constant internal magnetic field. When the material is cooled below the critical temperature, we would observe the abrupt expulsion of the internal magnetic field, which we would not expect based on Lenz's law.
The Meissner effect was given a phenomenological explanation by the brothers Fritz and Heinz London, who showed that the electromagnetic free energy in a superconductor is minimized provided
∇²H = H⁄λ²,
where H is the magnetic field and λ is the London penetration depth.
This equation, which is known as the London equation, predicts that the magnetic field in a superconductor decays exponentially from whatever value it possesses at the surface.
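For a superconductor occupying the half-space x > 0, with the field applied parallel to its surface, the predicted decay takes the form:

```latex
H(x) = H(0)\, e^{-x/\lambda}
```

so the field falls to 1/e of its surface value within one penetration depth λ.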
A superconductor with little or no magnetic field within it is said to be in the Meissner state. The Meissner state breaks down when the applied magnetic field is too large. Superconductors can be divided into two classes according to how this breakdown occurs. In Type I superconductors, superconductivity is abruptly destroyed when the strength of the applied field rises above a critical value "Hc". Depending on the geometry of the sample, one may obtain an intermediate state consisting of a baroque pattern of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field. In Type II superconductors, raising the applied field past a critical value "H""c"1 leads to a mixed state (also known as the vortex state) in which an increasing amount of magnetic flux penetrates the material, but there remains no resistance to the flow of electric current as long as the current is not too large. At a second critical field strength "H""c"2, superconductivity is destroyed. The mixed state is actually caused by vortices in the electronic superfluid, sometimes called fluxons because the flux carried by these vortices is quantized. Most pure elemental superconductors, except niobium and carbon nanotubes, are Type I, while almost all impure and compound superconductors are Type II.
Conversely, a spinning superconductor generates a magnetic field, precisely aligned with the spin axis. The effect, the London moment, was put to good use in Gravity Probe B. This experiment measured the magnetic fields of four superconducting gyroscopes to determine their spin axes. This was critical to the experiment since it is one of the few ways to accurately determine the spin axis of an otherwise featureless sphere.
Superconductivity was discovered on April 8, 1911 by Heike Kamerlingh Onnes, who was studying the resistance of solid mercury at cryogenic temperatures using the recently produced liquid helium as a refrigerant. At the temperature of 4.2 K, he observed that the resistance abruptly disappeared. In the same experiment, he also observed the superfluid transition of helium at 2.2 K, without recognizing its significance. The precise date and circumstances of the discovery were only reconstructed a century later, when Onnes's notebook was found. In subsequent decades, superconductivity was observed in several other materials. In 1913, lead was found to superconduct at 7 K, and in 1941 niobium nitride was found to superconduct at 16 K.
Great efforts have been devoted to finding out how and why superconductivity works; the important step occurred in 1933, when Meissner and Ochsenfeld discovered that superconductors expelled applied magnetic fields, a phenomenon which has come to be known as the Meissner effect. In 1935, Fritz and Heinz London showed that the Meissner effect was a consequence of the minimization of the electromagnetic free energy carried by superconducting current.
The first theoretical model conceived for superconductivity was completely classical: it is summarized by the London constitutive equations.
It was put forward by the brothers Fritz and Heinz London in 1935, shortly after the discovery that magnetic fields are expelled from superconductors. A major triumph of the equations of this theory is their ability to explain the Meissner effect, wherein a material exponentially expels all internal magnetic fields as it crosses the superconducting threshold. By using the London equation, one can obtain the dependence of the magnetic field inside the superconductor on the distance to the surface.
The two constitutive equations for a superconductor by London are:
∂j_s/∂t = (n_s e²/m) E,   ∇ × j_s = −(n_s e²/m) B,
where j_s is the superconducting current density, E and B are the electric and magnetic fields within the superconductor, e is the charge of an electron, m is the electron mass, and n_s is the density of superconducting electrons.
The first equation follows from Newton's second law for superconducting electrons.
During the 1950s, theoretical condensed matter physicists arrived at an understanding of "conventional" superconductivity, through a pair of remarkable and important theories: the phenomenological Ginzburg–Landau theory (1950) and the microscopic BCS theory (1957).
In 1950, the phenomenological Ginzburg–Landau theory of superconductivity was devised by Landau and Ginzburg. This theory, which combined Landau's theory of second-order phase transitions with a Schrödinger-like wave equation, had great success in explaining the macroscopic properties of superconductors. In particular, Abrikosov showed that Ginzburg–Landau theory predicts the division of superconductors into the two categories now referred to as Type I and Type II. Abrikosov and Ginzburg were awarded the 2003 Nobel Prize for their work (Landau had received the 1962 Nobel Prize for other work, and died in 1968). The four-dimensional extension of the Ginzburg–Landau theory, the Coleman-Weinberg model, is important in quantum field theory and cosmology.
Also in 1950, Maxwell and Reynolds "et al." found that the critical temperature of a superconductor depends on the isotopic mass of the constituent element. This important discovery pointed to the electron-phonon interaction as the microscopic mechanism responsible for superconductivity.
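The dependence is conventionally summarized by the isotope-effect relation (standard form, supplied here for illustration):

```latex
T_c \propto M^{-\alpha}, \qquad \alpha \approx \tfrac{1}{2}
```

where M is the isotopic mass; the exponent α ≈ 1/2 is what a phonon-mediated pairing mechanism predicts, which is why this measurement pointed to the electron–phonon interaction.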
The complete microscopic theory of superconductivity was finally proposed in 1957 by Bardeen, Cooper and Schrieffer. This BCS theory explained the superconducting current as a superfluid of Cooper pairs, pairs of electrons interacting through the exchange of phonons. For this work, the authors were awarded the Nobel Prize in 1972.
The BCS theory was set on a firmer footing in 1958, when N. N. Bogolyubov showed that the BCS wavefunction, which had originally been derived from a variational argument, could be obtained using a canonical transformation of the electronic Hamiltonian. In 1959, Lev Gor'kov showed that the BCS theory reduced to the Ginzburg–Landau theory close to the critical temperature.
Generalizations of BCS theory for conventional superconductors form the basis for understanding of the phenomenon of superfluidity, because they fall into the lambda transition universality class. The extent to which such generalizations can be applied to unconventional superconductors is still controversial.
The first practical application of superconductivity was developed in 1954 with Dudley Allen Buck's invention of the cryotron. Two superconductors with greatly different values of critical magnetic field are combined to produce a fast, simple switch for computer elements.
Soon after discovering superconductivity in 1911, Kamerlingh Onnes attempted to make an electromagnet with superconducting windings but found that relatively low magnetic fields destroyed superconductivity in the materials he investigated. Much later, in 1955, G. B. Yntema succeeded in constructing a small 0.7-tesla iron-core electromagnet with superconducting niobium wire windings. Then, in 1961, J. E. Kunzler, E. Buehler, F. S. L. Hsu, and J. H. Wernick made the startling discovery that, at 4.2 kelvin, niobium–tin, a compound consisting of three parts niobium and one part tin, was capable of supporting a current density of more than 100,000 amperes per square centimeter in a magnetic field of 8.8 tesla. Despite being brittle and difficult to fabricate, niobium–tin has since proved extremely useful in supermagnets generating magnetic fields as high as 20 tesla. In 1962, T. G. Berlincourt and R. R. Hake discovered that more ductile alloys of niobium and titanium are suitable for applications up to 10 tesla.
Promptly thereafter, commercial production of niobium–titanium supermagnet wire commenced at Westinghouse Electric Corporation and at Wah Chang Corporation. Although niobium–titanium boasts less-impressive superconducting properties than those of niobium–tin, niobium–titanium has, nevertheless, become the most widely used "workhorse" supermagnet material, in large measure a consequence of its very high ductility and ease of fabrication. However, both niobium–tin and niobium–titanium find wide application in MRI medical imagers, bending and focusing magnets for enormous high-energy-particle accelerators, and a host of other applications. Conectus, a European superconductivity consortium, estimated that in 2014, global economic activity for which superconductivity was indispensable amounted to about five billion euros, with MRI systems accounting for about 80% of that total.
In 1962, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum "Φ"0 = "h"/(2"e"), where "h" is the Planck constant. Coupled with the quantum Hall resistivity, this leads to a precise measurement of the Planck constant. Josephson was awarded the Nobel Prize for this work in 1973.
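Because both constants in Φ0 = h/(2e) are now fixed exactly in the SI, the value of the flux quantum can be computed directly; a minimal Python sketch:

```python
# Computing the magnetic flux quantum phi_0 = h / (2e) from the exactly
# fixed SI values of h and e (a sketch for illustration).
h = 6.62607015e-34   # Planck constant, J s (exact)
e = 1.602176634e-19  # elementary charge, C (exact)

phi_0 = h / (2 * e)  # Cooper pairs carry charge 2e
print(phi_0)         # ~2.0678e-15 Wb
```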
In 2008, it was proposed that the same mechanism that produces superconductivity could produce a superinsulator state in some materials, with almost infinite electrical resistance.
Until 1986, physicists had believed that BCS theory forbade superconductivity at temperatures above about 30 K. In that year, Bednorz and Müller discovered superconductivity in lanthanum barium copper oxide (LBCO), a lanthanum-based cuprate perovskite material, which had a transition temperature of 35 K (Nobel Prize in Physics, 1987). It was soon found that replacing the lanthanum with yttrium (i.e., making YBCO) raised the critical temperature above 90 K.
This temperature jump is particularly significant, since it allows the use of liquid nitrogen as a refrigerant in place of liquid helium.
This can be important commercially because liquid nitrogen can be produced relatively cheaply, even on-site. Also, the higher temperatures help avoid some of the problems that arise at liquid helium temperatures, such as the formation of plugs of frozen air that can block cryogenic lines and cause unanticipated and potentially hazardous pressure buildup.
Many other cuprate superconductors have since been discovered, and the theory of superconductivity in these materials is one of the major outstanding challenges of theoretical condensed matter physics.
There are currently two main hypotheses – resonating-valence-bond theory and spin fluctuation, the latter having the most support in the research community. The spin-fluctuation hypothesis proposes that electron pairing in high-temperature superconductors is mediated by short-range spin waves known as paramagnons.
In 2008, holographic superconductivity, which uses holographic duality or AdS/CFT correspondence theory, was proposed by Gubser, Hartnoll, Herzog, and Horowitz, as a possible explanation of high-temperature superconductivity in certain materials.
Since about 1993, the highest-temperature superconductor has been a ceramic material consisting of mercury, barium, calcium, copper and oxygen (HgBa2Ca2Cu3O8+δ) with Tc = 133–138 K. The higher value (138 K) has yet to be confirmed experimentally, however.
In February 2008, an iron-based family of high-temperature superconductors was discovered. Hideo Hosono, of the Tokyo Institute of Technology, and colleagues found lanthanum oxygen fluorine iron arsenide (LaO1−xFxFeAs), an oxypnictide that superconducts below 26 K. Replacing the lanthanum in LaO1−xFxFeAs with samarium leads to superconductors that work at 55 K.
In 2014 and 2015, hydrogen sulfide (H2S) at extremely high pressures (around 150 gigapascals) was first predicted and then confirmed to be a high-temperature superconductor with a transition temperature of 80 K. Additionally, in 2019 it was discovered that lanthanum hydride (LaH10) becomes a superconductor at 250 K under a pressure of 170 gigapascals. This is currently the highest temperature at which any material has shown superconductivity.
In 2018, a research team from the Department of Physics at the Massachusetts Institute of Technology discovered superconductivity in bilayer graphene with one layer twisted at an angle of approximately 1.1 degrees, after cooling the material and applying a small electric charge. Even though the experiments were not carried out at high temperatures, the results are correlated less with classical superconductors than with high-temperature superconductors, given that no foreign atoms need to be introduced.
Superconducting magnets are some of the most powerful electromagnets known. They are used in MRI/NMR machines, mass spectrometers, the beam-steering magnets used in particle accelerators and plasma-confining magnets in some tokamaks. They can also be used for magnetic separation, where weakly magnetic particles are extracted from a background of less magnetic or non-magnetic particles, as in the pigment industries. They can also be used in large wind turbines to overcome the restrictions imposed by high electrical currents, with an industrial-grade 3.6 megawatt superconducting wind turbine generator having been tested successfully in Denmark.
In the 1950s and 1960s, superconductors were used to build experimental digital computers using cryotron switches. More recently, superconductors have been used to make digital circuits based on rapid single flux quantum technology and RF and microwave filters for mobile phone base stations.
Superconductors are used to build Josephson junctions, which are the building blocks of SQUIDs (superconducting quantum interference devices), the most sensitive magnetometers known. SQUIDs are used in scanning SQUID microscopes and magnetoencephalography. Series arrays of Josephson devices are used to realize the SI volt. Depending on the particular mode of operation, a superconductor–insulator–superconductor Josephson junction can be used as a photon detector or as a mixer. The large resistance change at the transition from the normal to the superconducting state is used to build thermometers in cryogenic micro-calorimeter photon detectors. The same effect is used in ultrasensitive bolometers made from superconducting materials.
Other early markets are arising where the relative efficiency, size and weight advantages of devices based on high-temperature superconductivity outweigh the additional costs involved. For example, in wind turbines the lower weight and volume of superconducting generators could lead to savings in construction and tower costs, offsetting the higher costs for the generator and lowering the total levelized cost of electricity (LCOE).
Promising future applications include high-performance smart grid, electric power transmission, transformers, power storage devices, electric motors (e.g. for vehicle propulsion, as in vactrains or maglev trains), magnetic levitation devices, fault current limiters, enhancing spintronic devices with superconducting materials, and superconducting magnetic refrigeration. However, superconductivity is sensitive to moving magnetic fields, so applications that use alternating current (e.g. transformers) will be more difficult to develop than those that rely upon direct current. Compared to traditional power lines, superconducting transmission lines are more efficient and require only a fraction of the space, which would not only lead to a better environmental performance but could also improve public acceptance for expansion of the electric grid. | https://en.wikipedia.org/wiki?curid=26884 |
Geography of Sweden
Sweden is a country in Northern Europe on the Scandinavian Peninsula. It borders Norway to the west; Finland to the northeast; and the Baltic Sea and Gulf of Bothnia to the south and east. At 450,295 km2, Sweden is the 55th largest country in the world. It is the fifth largest in Europe and the largest in Northern Europe.
Sweden has a long coastline on the eastern side and the Scandinavian mountain chain (Scanderna) on the western border, which separates Sweden from Norway. It has maritime borders with Denmark, Germany, Poland, Russia, Lithuania, Latvia and Estonia, and it is also linked to Denmark (southwest) by the Öresund bridge. It also has an Exclusive Economic Zone.
Much of Sweden is heavily forested, with 69% of the country being forest and woodland, while farmland constitutes only 8% of land use. Sweden has 39,960 km2 of water area, comprising around 95,700 lakes. The lakes and large northern rivers are sometimes used for hydropower plants.
Most of northern and western central Sweden consists of vast tracts of hilly and mountainous land called the Norrland terrain. From the south the transition to the Norrland terrain is not only seen in the relief but also in the wide and contiguous boreal forests that extend north of it with till and peat being the overwhelmingly most common soil types.
South of the Norrland terrain lies the Central Swedish lowland, which forms a broad east–west trending belt from Gothenburg to Stockholm. This is the traditional heartland of Sweden, due to its large population and agricultural resources. The region forms a belt of fertile soils suitable for agriculture that interrupts the forested and till-coated lands to the north and south. Before the expansion of agriculture, these fertile soils were covered by a broad-leaved tree forest where maples, oaks, ashes, small-leaved lime and common hazel grew. However, the Central Swedish lowland also contains soils of poor quality, particularly on hills where Scots pine and Norway spruce grow on top of thin till soils. Agriculture aside, the region also benefits from the proximity of hydropower, forests and Bergslagen's mineral resources. Sweden's four largest lakes, Vänern, Vättern, Mälaren and Hjälmaren, lie within the lowlands.
To the south of the Central Swedish lowland lies the South Swedish highlands, which, except for a lack of deep valleys, is similar to the Norrland terrain found further north in Sweden. The highest point of the highlands lies at 377 m. Poor soil conditions have posed significant difficulties for agriculture in the highlands, meaning that over time small industries became relatively important in local economies.
Southernmost Sweden contains a varied landscape with both plains and hilly terrain. A characteristic chain of elongated hills runs across Scania from northwest to southeast. These hills are horsts located along the Tornquist Zone. Some of these horsts are Hallandsåsen, Romeleåsen and Söderåsen. The plains of Scania and Halland make up 10% of Sweden's cultivated lands and are the country's main agricultural landscape. Productivity is high relative to the rest of Sweden and more akin to that of more southern European countries. The natural vegetation is made up of broadleaf forest, although conifer plantations are common. Southern Sweden has Sweden's greatest animal and plant diversity.
The two largest islands are Gotland and Öland in the southeast. They differ from the rest of Sweden by being made up of limestone and marl, with an alvar vegetation adapted to the islands' calcareous soils. Gotland and Öland have landforms that are rare or absent in mainland Sweden. These include active cliffs seen in segments of their western coasts, sea stacks called "rauks" and large cave systems.
Sweden has 25 provinces or "landskap" ("landscapes"), based on culture, geography and history: Bohuslän, Blekinge, Dalarna, Dalsland, Gotland, Gästrikland, Halland, Hälsingland, Härjedalen, Jämtland, Lapland, Medelpad, Norrbotten, Närke, Skåne, Småland, Södermanland, Uppland, Värmland, Västmanland, Västerbotten, Västergötland, Ångermanland, Öland and Östergötland.
While these provinces serve no political or administrative purpose, they play an important role for people's self-identification. The provinces are usually grouped together in three large lands ("landsdelar"): the northern Norrland, the central Svealand and southern Götaland. The sparsely populated Norrland encompasses almost 60% of the country.
Administratively, Sweden is divided into 21 counties, or "län". In each county there is a County Administrative Board, or "länsstyrelse", which is appointed by the national government.
In each county there is also a separate County Council, or "landsting", which is the municipal representation appointed by the county electorate.
Until 1973, each county was denoted by a letter code shown on vehicle registration plates.
Each county is further divided into municipalities or "kommuner", ranging from only one (in Gotland County) to forty-nine (in Västra Götaland County). The total number of municipalities is 290.
The northern municipalities are often large in size, but have small populations – the largest municipality is Kiruna, with an area as large as the three southern provinces of Sweden (Scania, Blekinge and Halland) combined, but with a population of only 25,000 and a population density of about 1 person per km2.
Sweden has a population of 10 million as of January 2017. The mountainous north is considerably less populated than the southern and central regions, partly because the summer period lasts longer in the south, and this is where the more successful agricultural industries were originally established. Another historical reason is said to be the desired proximity to key trade routes and partners in continental Europe, e.g. Germany. As a result, all seven urban areas in Sweden with a population of 100,000 or more are located in the southern half of the country.
Cities and towns in Sweden are neither political nor administrative entities; rather they are localities or urban areas, independent of municipal subdivisions.
The largest city, in terms of population, is the capital Stockholm, in the east, the dominant city for culture and media, with a population of 1,250,000. The second largest city is Gothenburg, with 510,500, in the west. The third largest is Malmö in the south, with 258,000. The largest city in the north is Umeå with 76,000 inhabitants.
Sweden's natural resources include copper, gold, hydropower, iron ore, lead, silver, timber, uranium, and zinc.
Acid rain has become an issue because it is damaging soils and lakes and polluting the North Sea and the Baltic Sea. The HBV hydrology transport model has been used to analyze nutrient discharge to the Baltic from tributary watersheds.
The extreme points of Sweden include the coordinates that are farthest north, south, east and west in Sweden, and the ones that are at the highest and the lowest elevations in the country. Unlike Norway and Denmark, Sweden has no external territories that can be considered either inside or outside the country depending on definition, meaning that the extreme points of Sweden are unambiguous.
The latitude and longitude are expressed in decimal degrees, in which a positive latitude value refers to the Northern Hemisphere, and a negative value refers to the Southern Hemisphere. Additionally, a negative elevation value refers to land below sea level. The coordinates used in this article are sourced from Google Earth, which makes use of the World Geodetic System (WGS) 84, a geodetic reference system.
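To make the sign conventions described above concrete, here is a small illustrative Python sketch; the numeric values are rough approximations chosen purely for illustration and are not the article's own figures:

```python
# Illustrative encoding of the sign conventions described above:
# positive latitude = Northern Hemisphere, negative elevation = below sea level.
# The values are rough approximations, not figures taken from this article.
points = {
    "Treriksröset (northernmost point)": {"lat": 69.06, "lon": 20.55, "elev_m": 470.0},
    "Kristianstad (lowest point)": {"lat": 56.03, "lon": 14.16, "elev_m": -2.4},
}

for name, p in points.items():
    hemisphere = "N" if p["lat"] >= 0 else "S"
    relation = "below" if p["elev_m"] < 0 else "above"
    print(f"{name}: {abs(p['lat']):.2f}°{hemisphere}, lon {p['lon']:.2f}°, "
          f"{abs(p['elev_m']):.1f} m {relation} sea level")
```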
Sweden's northernmost point is Treriksröset, in the Lapland province, where the borders of Sweden, Norway, and Finland meet. The closest Swedish city to the area is Kiruna, which is Sweden's northernmost city. Sweden's southernmost point is in the harbour of the fishing village Smygehuk, near the city of Trelleborg, which borders the Baltic Sea. At the pier of the harbour, a signpost displays the exact position of the point, as well as the distance to Treriksröset, Stockholm, Berlin, Paris, and Moscow.
Sweden's westernmost point is on Stora Drammen, an islet in the Skagerrak off the coast of Bohuslän. Seabirds and harbour seals have colonies on the islet, but it is uninhabited by humans. Sweden's easternmost point is on Kataja, an islet south of Haparanda in the Bothnian Bay. The islet is divided between Sweden and Finland. The border was established in 1809, after the Finnish War, between what were previously two islets: a Swedish one called Kataja and a smaller Finnish one called Inakari. Since 1809, post-glacial rebound has caused the sea level in the region to drop relative to the land, joining the two islets. Counting the mainland only, Stensvik in Strömstad is Sweden's westernmost point, and Sundholmen in Haparanda is the easternmost point.
The highest point in Sweden is Kebnekaise, which stands at (August 2018). It is in the Scandinavian Mountains chain, in the province of Lapland. The mountain has two peaks, of which the glaciated southern one is the highest at . The northern peak, which stands at , is free of ice. Although the south top is traditionally said to be high, new measurements have shown that the glacier has shrunk fairly fast, so the summit is no longer as high as it used to be; it was in 2008. Other points of comparable height in the vicinity of Kebnekaise include Sarektjåkka at , and Kaskasatjåkka at . If the summers of 2016 and 2017 were to be as warm as those of preceding years, the northern peak would become the highest.
Sweden's lowest point, which is below sea level, is in the Kristianstads Vattenrike Biosphere Reserve in the city of Kristianstad. The point is at the bottom of what was once Nosabyviken, a bay on the lake of Hammarsjön. The bay was drained in the 1860s by John Nun Milner, an engineer, to get more arable land for Kristianstad.
| https://en.wikipedia.org/wiki?curid=26889 |
Demographics of Sweden
The total resident population of Sweden was 10,343,403 in March 2020. The population exceeded 10 million for the first time on Friday 20 January 2017. The three largest cities are Stockholm, Gothenburg and Malmö. Sweden's population has become much more ethnically, religiously and linguistically diverse over the past 70 years as a result of global immigration. Every fourth resident (24.9%) has an immigrant background and every third (32.3%) has at least one parent born abroad.
Demographic statistics according to the World Population Review.
Demographic statistics according to the CIA World Factbook, unless otherwise indicated.
The demography of Sweden is monitored by Statistics Sweden (SCB).
The 2005 Swedish census showed an increase of 475,322 compared to the 1990 census, an average increase of 31,680 annually. During the 1990s, more than 100,000 children were born each year, while death rates fell and immigration surged. In the early 2000s, the birth rate declined as immigration increased further, against the backdrop of unrest in the Middle East, sustaining steady population growth.
In 1950, Sweden had fewer people aged 10–20 than aged 0–10 or 20–30. In 2017, the ratio of males to females remained steady at about 50–50, and as a whole the age pyramid had broadened, with people appearing to live longer. By 2050 it is predicted that all age groups will increase from below 300,000 males and females to above 300,000, with about 50,000 people living to the ages of 90–100. By 2100 the graph is shaped like a rectangle, with the numbers of people of all ages and genders remaining steady; it narrows slightly at the top, with about 250,000–300,000 males and females living to be 90–100 years old.
Statistics Sweden projects the following population development in Sweden:
Eurostat projects a population in Sweden reaching 11,994,364 people in 2040 and 14,388,478 in 2080.
The population density is 22.5 people per km² (58.2 per square mile) and it is substantially higher in the south than in the north. About 85% of the population live in urban areas. The capital city Stockholm has a municipal population of about 950,000 (with 1.5 million in the urban area and 2.3 million in the metropolitan area). The second- and third-largest cities are Gothenburg and Malmö. Greater Gothenburg counts just over a million inhabitants and the same goes for the western part of Scania, along the Öresund. The Öresund Region, the Danish-Swedish cross-border region around the Öresund that Malmö is part of, has a population of 4 million. Outside of major cities, areas with notably higher population density include the agricultural part of Östergötland, the western coast, the area around Lake Mälaren and the agricultural area around Uppsala.
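As a small aside on units, the two density figures quoted above can be cross-checked with a one-line conversion; the Python snippet below is a sketch of that arithmetic only:

```python
# Cross-check the quoted population density: convert people per square
# kilometre to people per square mile (1 sq mi = 2.589988 km^2).
KM2_PER_SQ_MI = 2.589988

density_per_km2 = 22.5  # people per km^2, as quoted above
density_per_sq_mi = density_per_km2 * KM2_PER_SQ_MI
print(f"{density_per_sq_mi:.1f} people per sq mi")  # ~58.3, matching the quoted 58.2 to rounding
```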
Norrland, which covers approximately 60% of the Swedish territory, has a very low population density (below 5 people per square kilometer). The mountains and most of the remote coastal areas are almost unpopulated. Low population density also exists in large parts of western Svealand, as well as southern and central Småland. An area known as "Finnveden", which is located in the south-west of Småland, mainly below the 57th parallel, can also be considered almost empty of people.
The majority of the population are ethnic Swedes, or people who can trace their ethnicity to Swedish stock going back at least 12 generations. The Sweden Finns are a large ethnic minority, comprising approximately 50,000 people along the Swedish–Finnish border and 450,000 first- and second-generation immigrated ethnic Finns, mainly living in the Mälaren Valley region. Meänkieli Finnish has official status in parts of northern Sweden near the Finnish border. In addition, Sweden's indigenous population groups include the Sami people, who have a history of practicing hunting and gathering and gradually adopted a largely semi-nomadic reindeer-herding lifestyle. They have been present in Fenno-Scandinavia from at earliest 5,000 years ago to at latest around 2,650 years ago. Today, the Sami language holds the status of official minority language in four municipalities in Norrbotten county.
In addition to the Sami, Tornedalers, and Sweden Finns, Jewish and Roma people have national minority status in Sweden.
There are no official statistics on ethnicity, but according to Statistics Sweden, around 3,311,312 (32.3%) inhabitants of Sweden were of a foreign background in 2018, defined as being born abroad or born in Sweden with at least one parent born abroad. The most common countries of origin were Syria (1.82%), Finland (1.45%), Iraq (1.41%), Poland (0.91%), Iran (0.76%) and Somalia (0.67%). Sweden also has one of the oldest populations in the world, with an average age of 41.1 years.
The total fertility rate is the number of children born per woman. It is based on fairly good data for the entire period. Sources: Our World In Data and Gapminder Foundation.
Data according to Statistics Sweden, which collects the official statistics for Sweden.
Sources: Our World In Data and the United Nations.
Source: "UN World Population Prospects"
Prior to World War II, emigrants generally outnumbered immigrants. Since then, net migration has been positive with many immigrants coming to Sweden from the 1970s through today.
Between 1820 and 1930, approximately 1.3 million Swedes, a third of the country's population at the time, emigrated to North America, and most of them to the United States. There are more than 4.4 million Swedish Americans according to a 2006 US Census Bureau estimate. In Canada, the community of Swedish ancestry is 330,000 strong.
The demographic profile of Sweden has altered considerably due to immigration patterns since the 1970s. As of 2017, Statistics Sweden reported that around 2,439,007 or 24.1% of the inhabitants of Sweden were from a foreign background: that is, each such person either had been born abroad or had been born in Sweden to two parents who themselves had both been born abroad. Also taking into account people with only one parent born abroad, this number increases to almost a third in 2017.
Additionally, the birth rate among immigrant women in the years just after arriving in Sweden is somewhat higher than among ethnic Swedes. However, since immigrant women have on average fewer children than Swedish women of comparable age, the difference in total birth rate is only 0.1 children more if the woman is foreign born – with the disclaimer that some women may have children who did not immigrate to Sweden and are not reported there, and who are thus not included in the statistics.
Immigration increased markedly with World War II. Historically, the most numerous foreign-born nationalities have been ethnic Germans from Germany and other Scandinavians from Denmark and Norway. In short order, 70,000 war children were evacuated from Finland, of whom 15,000 remained in Sweden. Also, many of Denmark's nearly 7,000 Jews who were evacuated to Sweden decided to remain there.
A sizable community from the Baltic States (Estonia, Latvia and Lithuania) arrived during the Second World War.
During the 1950s and 1960s, the recruitment of immigrant labour was an important factor of immigration. The Nordic countries signed a trade agreement in 1952, establishing a common labour market and free movement across borders. This migration within the Nordic countries, especially from Finland to Scandinavia, was essential to create the tax-base required for the expansion of the strong public sector now characteristic of Scandinavia.
This continued until 1967, when the labour market became saturated, and Sweden introduced new immigration controls.
On a smaller scale, Sweden took in political refugees from Hungary and the former Czechoslovakia after their countries were invaded by the Soviet Union in 1956 and 1968, respectively.
Since the early 1970s, immigration to Sweden has been mostly due to refugee migration and family reunification from countries in the Middle East and Latin America.
According to Eurostat, in 2010, there were 1.33 million foreign-born residents in Sweden, corresponding to 14.3% of the total population. Of these, 859,000 (64.3%) were born outside the EU and 477,000 (35.7%) were born in another EU Member State. By comparison, the Swedish civil registry reports, for 2018, that nearly 1.96 million residents are foreign-born, a 47% increase from 2010. There are 8.27 million Swedish-born residents, giving a total population of 10.23 million, and a 19.1% foreign-born population.
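The registry figures just quoted are internally consistent, which is easy to verify with a few lines of arithmetic; the following Python sketch only re-derives the percentages from the rounded numbers in the text:

```python
# Consistency check of the civil-registry figures quoted above.
# All values are in millions and rounded, so results match only approximately.
foreign_born_2010 = 1.33
foreign_born_2018 = 1.96
swedish_born_2018 = 8.27

total_2018 = foreign_born_2018 + swedish_born_2018            # ~10.23 million
foreign_share = foreign_born_2018 / total_2018 * 100          # ~19.2%, vs the quoted 19.1%
growth_since_2010 = (foreign_born_2018 / foreign_born_2010 - 1) * 100  # ~47%

print(f"total {total_2018:.2f}M, foreign-born {foreign_share:.1f}%, "
      f"increase since 2010 {growth_since_2010:.0f}%")
```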
The first group of Assyrians/Syriacs moved to Sweden from Lebanon in 1967. Many of them live in Södertälje (Stockholm). There are also around 40,000 Roma in Sweden. Some Roma people have long historical roots in Sweden, while others are more recent migrants from elsewhere in Europe.
Immigrants from Western Asia have been a rapidly growing share of Sweden's population. According to the government agency Statistics Sweden, the number of immigrants born in all of Asia (including the Middle East) rose from just 1,000 in 1950 to 295,000 in 2003. Most of those immigrants came from Iraq, Iran, Lebanon and Syria, according to Statistics Sweden.
Immigration of Iraqis increased dramatically during the Iraq War, beginning in 2003. A total of 8,951 Iraqis came to Sweden in 2006, accounting for 45% of the entire Iraqi migration to Europe. By 2007, the community of Iraqis in Sweden numbered above 70,000. In 2008, Sweden introduced tighter rules on asylum seekers.
A significant number of Syrian Christians have also settled in Sweden. There have also been immigrants from South-Central Asia such as Afghanistan and Pakistan. Since the European migrant crisis, Syrians became the second-largest group of foreign-born persons in the Swedish civil registry in 2017 with 158,443 people (after former Yugoslavia).
Official statistics record the citizenship a person held when arriving in Sweden; there are therefore no registered Eritreans, Russians or Bosnians from 1990, as they were recorded as Ethiopians, Soviets and Yugoslavs. The nationality of Yugoslavs accordingly covers people who came to Sweden from the Socialist Federal Republic of Yugoslavia before 1991 and people who came from today's Montenegro and Serbia before 2003, then called the Federal Republic of Yugoslavia. Counting all people who came from Slovenia, Croatia, Bosnia and Herzegovina, Serbia, Montenegro, Kosovo, Macedonia, Serbia and Montenegro, the Federal Republic of Yugoslavia and the Socialist Federal Republic of Yugoslavia, there were 176,033 such people in 2018.
The Swedish language is by far the dominant language in Sweden and is used by the government administration. English is also widely spoken and is taught in public schools.
Since 1999, Sweden has had five officially recognised minority languages: Sami, Meänkieli, Standard Finnish, Romani chib and Yiddish.
The Sami language, spoken by about 7,000 people in Sweden, may be used in government agencies, courts, preschools and nursing homes in the municipalities of Arjeplog, Gällivare, Jokkmokk and Kiruna and their immediate surroundings.
Similarly, Finnish and Meänkieli can be used in the municipalities of Gällivare, Haparanda, Kiruna, Pajala and Övertorneå and its immediate neighbourhood.
Finnish is also an official language, along with Swedish, in the city of Eskilstuna.
During the mid-to-late 20th century, immigrant communities brought other languages, among them Persian, Serbo-Croatian, Arabic and Neo-Aramaic.
The majority (57.7%) of the population belongs to the Church of Sweden, the Lutheran church that was disestablished in 2000. Membership remains high in part because, until 1996, those who had family members in the church automatically became members at birth. Other Christian denominations in Sweden include the Roman Catholic Church (see Catholic Church of Sweden), several Orthodox churches in diaspora, Baptist, Pentecostal, Neo-pietistic ("nyevangeliska") and other evangelical Christian churches ("frikyrkor" = 'free churches'). Shamanism persisted among the Sami people up until the 18th century, but no longer exists in its traditional form, as most Sami today belong to the Lutheran church.
Jews were permitted to practice their religion in five Swedish cities in 1782, and have enjoyed full rights as citizens since 1870. A new Freedom of Religion Bill was passed in 1951, and former obstacles against non-Lutherans working in schools and hospitals were removed. Further, that bill made it legal to leave a religious denomination without entering another. There are also many Muslims, as well as a number of Buddhists and Bahá'í, in Sweden, mainly as a result of 20th and 21st century immigration. There is also a small Zoroastrian community in Sweden. | https://en.wikipedia.org/wiki?curid=26890 |
Telecommunications in Sweden
This article covers telecommunications in Sweden.
Sweden began liberalizing its telecommunications industry in the 1980s, and the market was formally liberalized in 1993. This was three years ahead of the United States and five years before the European common policy introduced in January 1998 allowed for an open and competitive telecommunications market. The Swedes, most of whom are computer literate, enjoy continuous growth in the Internet market and the availability of technologies such as Metro Ethernet, fiber, satellite, WAN access technologies and even 3G services. Statistically, 6.447 million telephone main lines were in use in 2004, 8.0436 million mobile cellular telephones were in use in 2005, and 6.7 million Swedes are regular internet users.
This abundance of telecommunications technology is the result of promoting a competitive industry, made possible by deregulation. Since Sweden was the first to take on this arduous task, the government had to come up with “a regulatory framework of its own”. The process that led to the liberalization of the telecommunications industry can be structured into three phases: “Phase 1 of monopoly to Phase 2 with a mix of monopoly and competition to a “mature” Phase 3 with extensive competition”.
During the period 1993–2000, competition rose, with the legislation governing the regulatory body being changed several times. In the case of POTS, Telia in 2000 still held a monopoly in the fixed-line access market, whereas mobile phone and Internet penetration in the household market ended up being among the highest in the world, with more than 50 percent of revenue coming from these two industries. There were three major organizations providing GSM services and 120 internet service providers. One of the major causes that let competition thrive in areas without a history of monopoly was the light-handed approach initially taken by the regulatory body towards the interconnection issue. Telia charged very high interconnection fees, making it very difficult for new entrants to enter; but this pushed the new entrants into other markets. Tele2 did just that, taking out a massive marketing campaign to attract a huge number of customers to its internet access service. The campaign was successful enough to bring Telia back to the negotiation table over the interconnection issue. This process eventually led to the abolition of the light-handed regulatory approach towards interconnection and put more power in the hands of the regulatory body. The intensity of regulation kept increasing around 1999 in areas other than POTS, especially the mobile market.
In 2009, the Riksdag passed new legislation regulating the National Defence Radio Establishment (FRA), enabling them to collect information from both wireless and cable bound signals passing the Swedish border. Since most communications in Sweden pass through its borders at one point or another, this monitoring in practice affects most traffic within Sweden as well. | https://en.wikipedia.org/wiki?curid=26893 |
Transport in Sweden
Transportation in Sweden is carried out by car, bus, train, tram, boat or aeroplane.
Rail transport is operated by SJ, DSBFirst, Green Cargo, Vy Tåg and others. Most counties have companies that handle ticketing, marketing and financing of local passenger rail, but the actual operations are run by the above-mentioned companies.
Stockholm Metro (Stockholms Tunnelbana) is the only metro system in Sweden.
Stockholm previously had a large tram network, but this was discontinued in favour of bus and metro; a revival of the tram network was seen in the construction of Tvärbanan in the late 1990s and early 2000s.
Sweden has right-hand traffic today like all its neighbours.
Sweden had left-hand traffic ("Vänstertrafik" in Swedish) from approximately 1736 until 1967. Despite this, virtually all cars in Sweden were actually left-hand drive, and the neighbouring Nordic countries already drove on the right, leading to mistakes by visitors. Swedish voters rejected a change to driving on the right in a referendum held in 1955.
Nevertheless, in 1963 the Riksdag passed legislation ordering the switch to right-hand traffic. The changeover took place on a Sunday morning at 5am on September 3, 1967, which was known in Swedish as "Dagen H" (H-Day), the 'H' standing for "Högertrafik" or right-hand traffic.
Since Swedish cars were left-hand drive, experts had suggested that changing to driving on the right would reduce accidents, because drivers would have a better view of the road ahead. Indeed, fatal car-to-car and car-to-pedestrian accidents dropped sharply as a result. This was likely because drivers were initially more careful and because of the initially very low speed limits, since accident rates soon returned to nearly their earlier levels.
Total roadways: 572,900 km, as of 2009.
Motorways run through Sweden and Denmark, linked over the Öresund Bridge, and serve Stockholm, Gothenburg, Uppsala and Uddevalla. The system of motorways is still being extended. The longest continuous motorways are Värnamo–Gävle (E4; 585 km) and Rabbalshede–Vellinge (E6; 412 km; to be extended by 2013 so that the motorway between Trelleborg and Oslo in Norway would be completed). | https://en.wikipedia.org/wiki?curid=26894 |
Swedish Armed Forces
The Swedish Armed Forces (Swedish: "Försvarsmakten", literally “the Defense Force”) is the government agency that forms the military forces of Sweden, and which is tasked with the defense of the country, as well as promoting Sweden's wider interests, supporting international peacekeeping efforts, and providing humanitarian aid.
It consists of the Swedish Army, the Swedish Air Force and the Swedish Navy, as well as a military reserve force, the Home Guard. Since 1994, all Swedish military branches are organized within a single unified government agency, headed by the Supreme Commander, even though the individual services maintain their distinct identities. King Carl XVI Gustaf of Sweden is traditionally considered Honorary General and Admiral "à la suite".
The Swedish Armed Forces consist of a mix of volunteers and conscripts. About 4,000 men and women are called up for service every year.
Units from the Swedish Armed Forces are currently on deployment in several international operations, either actively or as military observers, including Afghanistan as part of the Resolute Support Mission and Kosovo (as part of Kosovo Force). Moreover, the Swedish Armed Forces contribute as the lead nation for an EU Battle Group approximately once every three years through the Nordic Battlegroup. Sweden has close relations with NATO and NATO members, and participates in training exercises like the Admiral Pitka Recon Challenge and Exercise Trident Juncture 2018. Sweden also cooperates strongly with its closest allies among the Nordic countries as part of the Nordic Defence Cooperation (NORDEFCO) and in joint exercises such as Exercise Northern Wind 2019. In total, about 10,000 people participated in Northern Wind, of whom approximately 7,000 came from prioritized cooperation states: Finland, Norway, the US and the UK.
Sweden has not participated in an officially declared war since the 1814 Swedish–Norwegian War, although e.g. Swedish aircraft took part in the NATO-led 2011 military intervention in Libya. Swedish foreign policy has managed to keep Sweden out of war through a policy of neutrality.
Sweden also provides information to its citizens in case of emergency, as part of the concept of total defense, with pamphlets sent to all households. The publication contains information about how to act in a situation of national crisis and, most notably, nuclear war. The pamphlets (titled "If the war comes") were distributed to all households from 1943 to 1961; after 1961, some of the information from the pamphlet was printed in every phone book until 1991, the end of the Cold War. In 2018 the pamphlet was renewed and distributed under the title "If the crisis or the war comes" (Swedish: Om krisen eller kriget kommer). The new pamphlet includes the well-known quote from the older ones (in case of enemy invasion): "Every statement that the resistance has ceased is false. Resistance shall be made all the time and in every situation. It depends on You - Your efforts, Your determination, Your will to survive."
After a period of enhanced readiness during World War I, the Swedish Armed Forces were subject to severe downsizing during the interwar years. When World War II started, a large rearmament program was launched to once again guard Swedish neutrality, relying on mass conscription to fill the ranks.
After World War II, Sweden considered building nuclear weapons to deter a Soviet invasion. From 1945 to 1972 the Swedish government ran a clandestine nuclear weapons program under the guise of civilian defense research at the Swedish National Defence Research Institute. By the late 1950s the work had reached the point where underground testing was feasible. However, at this time the Riksdag prohibited research and development of nuclear weapons, pledging that research should be done only for the purpose of defense against nuclear attack. They reserved the right to continue development of nuclear weapons in the future. The option to continue development of weapons was abandoned in 1966, and Sweden's subsequent signing of the Non-Proliferation Treaty in 1968 began the wind-down of the program, which finally concluded in 1972.
During the Cold War, the wartime mass conscription system was kept in place to act as a deterrent to the Soviet Union, seen as the greatest military threat to Sweden. The end of the Cold War and the collapse of the Soviet Union meant that the perceived threat lessened and the armed forces were downsized, with conscription taking in fewer and fewer recruits until it was deactivated in 2010 (and then reactivated in 2017).
After twenty years of cooperation with NATO, starting with the Partnership for Peace in 1994, Sweden was one of five partners granted enhanced opportunities for dialogue and cooperation at the Wales Summit in 2014. The status of Enhanced Opportunities Partner provided a platform for developing a more flexible and individualized relationship, in addition to other partner formats. It coincided with Russia's illegal annexation of Crimea and military intervention in Eastern Ukraine, and also with the NATO defense bill for 2016–2020. Both the need to review NATO's own defense policy and the dramatic signal that a European country was prepared to violate the existing security order using military might gave momentum to the new partner platform. Conscription was reintroduced in 2017 to supplement the insufficient number of volunteers signing up for service.
Sweden aims to have the option of remaining neutral in case of proximate war. However, Sweden cooperates militarily with a number of foreign countries. As a member of the European Union, Sweden is acting as the lead nation for EU Battlegroups and also has a close cooperation, including joint exercises, with NATO through its membership in Partnership for Peace and Euro-Atlantic Partnership Council. In 2008 a partnership was initiated between the Nordic countries to, among other things, increase the capability of joint action, and this led to the creation of NORDEFCO. As a response to the expanded military cooperation the defence proposition of 2009 stated that Sweden will not remain passive if a Nordic country or a member of the European Union were attacked.
Recent political decisions have strongly emphasized the capability to participate in international operations, to the point where this has become the main short-term goal of training and equipment acquisition. However, after the 2008 South Ossetia war, territorial defense was once again emphasized. Until then, most units could not be mobilized within one year. In 2009 the Minister for Defence stated that in the future all of the armed forces must be capable of fully mobilizing within one week.
In 2013, after Russian air exercises in close proximity to the Swedish border were widely reported, only six percent of Swedes expressed confidence in the ability of the nation to defend itself.
The Supreme Commander () is a four-star general or flag officer that is the agency head of the Swedish Armed Forces, and is the highest ranking professional officer on active duty. The Supreme Commander in turn reports, normally through the Minister for Defence, directly to the Government of Sweden, which in turn answers to the Riksdag.
The King of Sweden was, before the enactment of the 1974 Instrument of Government, the de jure commander in chief (), but currently only has a strictly ceremonial and representative role with respect to the Armed Forces.
The Swedish Armed Forces consist of three service branches: the Army, the Air Force and the Navy, with the addition of the military reserve force, the Home Guard. Since 1994, the first three service branches have been organized within a single unified government agency, headed by the Supreme Commander, while the Home Guard reports directly to the Supreme Commander. However, the services maintain their separate identities through the use of different uniforms, ranks, and other service-specific traditions.
The Armed Forces Headquarters is the highest level of command in the Swedish Armed Forces. It is led by the Supreme Commander with a civilian Director General as his deputy, with functional directorates having different responsibilities (e.g. the Military Intelligence and Security Service). Overall, the Armed Forces Headquarters have about 1000 employees, including civilian personnel.
The Nordic Battle Group is a cooperative formation of the Swedish Armed Forces alongside mainly the other Nordic countries, but also some of the Baltic countries as well as Ireland, tasked as one of the EU Battle Groups. The headquarters garrison for this group is currently situated in Enköping, Sweden.
Currently, Sweden has military forces deployed in Afghanistan with the NATO-led Resolute Support Mission. Swedish forces were part of the previous International Security Assistance Force (2002–2014) in Afghanistan. Sweden is also part of the multinational Kosovo Force and has a naval force deployed to the Gulf of Aden as part of Operation Atalanta. Military observers from Sweden have been sent to a large number of countries, including Georgia, Lebanon, Israel and Sri Lanka, and Sweden also contributes staff officers to missions in Sudan and Chad. Sweden has been one of the peacekeeping nations of the Neutral Nations Supervisory Commission, which has been tasked with overseeing the truce in the Korean Demilitarized Zone since the Korean War ended in 1953.
A battalion and other units were deployed with the NATO-led peacekeeping SFOR in Bosnia and Herzegovina (1996–2000), following the Bosnian War.
Swedish air and ground forces saw combat during the Congo Crisis, as part of the United Nations Operation in the Congo force. Nine army battalions were sent in all, and their mission lasted from 1960 to 1964.
In mid-1995, with the national service system based on universal military training, the Swedish Army consisted of 15 maneuver brigades and, in addition, 100 battalions of various sorts (artillery, engineers, rangers, air defense, amphibious, security, surveillance etc.) with a mobilization time of between one and two days. When national service was replaced by a selective service system, fewer and fewer young men were drafted due to the reduction in size of the armed forces. By 2010 the Swedish Army had two battalions that could be mobilized within 90 days. With the volunteer system fully implemented by 2019, the army would consist of 7 maneuver battalions and 14 battalions of various sorts with a readiness of one week, and the Home Guard would be reduced in size to 22,000 soldiers. In 2019 the Swedish Armed Forces, now with a restarted national service system combined with volunteer forces, aimed to reach three brigades as maneuver units by 2025.
After having ended the universal male conscription system in 2010, as well as deactivating conscription in peacetime, the conscription system was re-activated in 2017. Since 2018 both women and men are conscripted on equal terms. The motivation behind reactivating conscription was the need for personnel, as volunteer numbers proved to be insufficient to maintain the armed forces.
Military personnel of the Swedish Armed Forces are divided into continuously serving personnel (K), part-time serving personnel (T) and conscripts (P), the latter drafted under the Swedish law of comprehensive defense duty.
Annual recruitment of GSS is assumed to be about 4,000 persons.
In 2008, professor Mats Alvesson of the University of Lund and Karl Ydén of the University of Göteborg claimed in an op-ed, based on Ydén's doctoral dissertation, that a large part of the officer corps of the Swedish Armed Forces was preoccupied with administrative tasks instead of training soldiers or partaking in international operations. They claimed that Swedish officers were mainly focused on climbing the ranks and thereby increasing their wages and that the main way of doing this is to take more training courses, which decreases the number of officers that are specialized in their field. Therefore, the authors claimed, the Swedish Armed Forces was poorly prepared for its mission.
Major changes have been made to the officer system since then.
The transformation of the old invasion defence-oriented armed forces into the new smaller and more mobile force has also been criticized. According to the Supreme Commander of the Swedish Armed Forces, the present defence budget will not be enough to implement the new defence structure by 2019, and even when the transformation is finished, the armed forces will only be able to fight for a week at most.
During 2013, several Russian Air Force exercises over the Baltic Sea aimed at Swedish military targets made the future of the Swedish Armed Forces a hot topic, and several political parties now want to increase defense funding. In August 2019, the government announced a bank tax to fund military spending.
When an army based on national service (conscription) was introduced in 1901, all commissioned officers had ranks senior to the warrant officers ("underofficerare") and non-commissioned officers ("underbefäl"). In a 1926 reform, the relative rank of the then senior warrant officer rank, "fanjunkare", was increased to be equal with the junior officer rank "underlöjtnant" and above the most junior officer rank "fänrik". In 1960 the relative ranks of the warrant officers were elevated further.
In 1972 the personnel structure changed, reflecting increased responsibilities of warrant and non-commissioned officers, renaming the "underofficerare" as "kompaniofficerare" and giving them the same ranks as company grade officers ("fänrik", "löjtnant", "kapten"). "Underbefäl" was renamed "plutonsofficerare" and given the rank titles of "sergeant" and "fanjunkare", although their relative ranks were now placed below "fänrik". The commissioned officers were renamed "regementsofficerare", beginning with "löjtnant". The three-track career system was maintained, as well as three separate messes.
A major change in the personnel structure in 1983 (NBO 1983), merged the three professional corps of platoon officers, company officers, and regimental officers into a one-track career system within a single corps called professional officers ("yrkesofficerare"). The three messes were also merged to one.
In 2008 the Riksdag decided to create a two-track career system with a category called "specialistofficerare". When implementing the parliamentary resolution the Supreme Commander decided that some ranks in this category should, like the old "underofficerare" ranks in 1960–1972, have a relative rank higher than the most junior officers.
Manpower numbers are taken from the CIA World Factbook. | https://en.wikipedia.org/wiki?curid=26895 |
Foreign relations of Sweden
The foreign policy of Sweden is based on the premise that national security is best served by staying free of alliances in peacetime in order to remain a neutral country in the event of war. In 2002, Sweden revised its security doctrine. The security doctrine still states that "Sweden pursues a policy of non-participation in military alliances," but permits cooperation in response to threats against peace and security. The government also seeks to maintain Sweden's high standard of living. These two objectives require heavy expenditures for social welfare, defense spending at rates considered low by Western European standards (currently around 1.2% of GNP), and close attention to foreign trade opportunities and world economic cooperation.
Sweden has been a member of the United Nations since November 19, 1946, and participates actively in the activities of the organization, including as an elected member of the Security Council (1957–1958, 1975–1976, 1997–1998 and 2017–2018) and by providing Dag Hammarskjöld as the second elected Secretary-General of the UN. The strong interest of the Swedish government and people in international cooperation and peacemaking has been supplemented in the early 1980s by renewed attention to Nordic and European security questions.
Sweden decided not to sign the Treaty on the Prohibition of Nuclear Weapons.
After the then Prime Minister Ingvar Carlsson submitted Sweden's application in July 1991, negotiations began in February 1993. Finally, on January 1, 1995, Sweden became a member of the European Union. Some argued that membership went against Sweden's historic policy of neutrality – Sweden had not joined during the Cold War because membership was seen as incompatible with neutrality – while others viewed the move as a natural extension of the economic cooperation with the EU that had been going on since 1972. Sweden addressed this controversy by reserving the right not to participate in any future EU defense alliance. In the membership negotiations of 1993–1994, Sweden also reserved the right to make the final decision on whether to join the third stage of the EMU "in light of continued developments." In a nationwide referendum in November 1994, 52.3 percent of participants voted in favour of EU membership, with a high voter turnout of 83.3 percent of eligible voters. The main Swedish concerns included winning popular support for EU cooperation, EU enlargement, and strengthening the EU in areas such as economic growth, job promotion, and environmental issues.
In polls taken a few years after the referendum, many Swedes indicated that they were unhappy with Sweden's membership in the EU. However, after Sweden successfully hosted its first presidency of the EU in the first half of 2001, most Swedes today have a more positive attitude towards the EU. The government, with the support of the Center Party, decided in spring 1997 to remain outside of the EMU, at least until 2002. A referendum was held on September 14, 2003. The results were 55.9% for "no", 42.0% "yes" and 2.1% giving no answer ("blank vote").
Swedish foreign policy has been the result of a wide consensus. Sweden cooperates closely with its Nordic neighbors, formally in economic and social matters through the Nordic Council of Ministers and informally in political matters through direct consultation.
Swedish neutrality and nonalignment policy in peacetime may partly explain how the country has stayed out of wars since 1814. Swedish governments have not defined nonalignment as precluding outspoken positions in international affairs. Government leaders have favored national liberation movements that enjoy broad support among developing-world countries, with notable attention to Africa. During the Cold War, Sweden was suspicious of the superpowers, which it saw as making decisions affecting small countries without always consulting those countries. With the end of the Cold War, that suspicion has lessened somewhat, although Sweden still chooses to remain nonaligned. Sweden has devoted particular attention to issues of disarmament, arms control, and nuclear nonproliferation and has contributed importantly to UN and other international peacekeeping efforts, including the NATO-led peacekeeping forces in the Balkans. It sat as an observer in the Western European Union from 1995 to 2011, and it is an active member of NATO's Partnership for Peace and the Euro-Atlantic Partnership Council.
Sweden's engagement with NATO was especially strengthened during the term of Anders Fogh Rasmussen.
Sweden's nonalignment policy has led it to serve as the protecting power for a number of nations that do not have formal diplomatic relations with each other for various reasons. It currently represents the United States, Canada, and several Western European nations in North Korea for consular matters. On several occasions when the United Kingdom broke off relations with Iran (including the 1979 Iranian Revolution, the Salman Rushdie affair, and the 2012 storming of the British embassy in Tehran), Sweden served as the protecting power for the UK.
Sweden has employed its military on numerous occasions since the end of the Cold War, from Bosnia and Congo to Afghanistan and Libya. According to one study, "this military activism is driven both by the Swedish internationalist tradition of "doing good" in the world, but also for instrumental purposes. These include a desire for political influence in international institutions, an interest in collective milieu shaping, and a concern to improve the interoperability and effectiveness of the Swedish military." | https://en.wikipedia.org/wiki?curid=26896 |
Spice
A spice is a seed, fruit, root, bark, or other plant substance primarily used for flavoring, coloring or preserving food. Spices are distinguished from herbs, which are the leaves, flowers, or stems of plants used for flavoring or as a garnish. Many spices have antimicrobial properties, which may explain why spices are more prominent in cuisines originating in warmer climates, where food spoilage is more likely, and why the use of spices is more common with meat, which is particularly susceptible to spoiling. Spices are sometimes used in medicine, religious rituals, cosmetics or perfume production.
The spice trade developed throughout the Indian subcontinent and Middle East by at earliest 2000 BCE with cinnamon and black pepper, and in East Asia with herbs and pepper. The Egyptians used herbs for mummification and their demand for exotic spices and herbs helped stimulate world trade. The word "spice" comes from the Old French word "espice", which became "epice", and which came from the Latin root "spec", the noun referring to "appearance, sort, kind": "species" has the same root. By 1000 BCE, medical systems based upon herbs could be found in China, Korea, and India. Early uses were connected with magic, medicine, religion, tradition, and preservation.
Cloves were used in Mesopotamia by 1700 BCE. The ancient Indian epic Ramayana mentions cloves. The Romans had cloves in the 1st century CE, as Pliny the Elder wrote about them.
The earliest written records of spices come from ancient Egyptian, Chinese, and Indian cultures. The Ebers Papyrus from ancient Egypt, dating from 1550 BCE, describes some eight hundred different medicinal remedies and numerous medicinal procedures.
Historians believe that nutmeg, which originates from the Banda Islands in Southeast Asia, was introduced to Europe in the 6th century BCE.
Indonesian merchants traveled around China, India, the Middle East, and the east coast of Africa, and Arab merchants facilitated the routes through the Middle East and India. This resulted in the Egyptian port city of Alexandria becoming the main trading center for spices. The most important discovery prior to the European spice trade was that of the monsoon winds (40 CE). Sailing directly from Eastern spice growers to Western consumers gradually replaced the overland spice routes once facilitated by Arab caravans through the Middle East.
In the story of Genesis, Joseph was sold into slavery by his brothers to spice merchants. In the biblical poem Song of Solomon, the male speaker compares his beloved to many forms of spices.
Spices were among the most demanded and expensive products available in Europe in the Middle Ages,[5] the most common being black pepper, cinnamon (and the cheaper alternative cassia), cumin, nutmeg, ginger and cloves. Given medieval medicine's main theory of humorism, spices and herbs were indispensable to balance the "humors" in food,[6] a basis for good health at a time of recurrent pandemics. Beyond their role in medieval medicine, spices were also craved by the European elite. An example of the aristocracy's demand for spice comes from the King of Aragon, who invested substantial resources into bringing back spices to Spain in the 12th century. He was specifically looking for spices to put in wine, and was not alone among European monarchs at the time in that desire.
Spices were all imported from plantations in Asia and Africa, which made them expensive. From the 8th until the 15th century, the Republic of Venice had the monopoly on spice trade with the Middle East, and along with it the neighboring Italian maritime republics and city-states. The trade made the region rich. It has been estimated that around 1,000 tons of pepper and 1,000 tons of the other common spices were imported into Western Europe each year during the Late Middle Ages. The value of these goods was the equivalent of a yearly supply of grain for 1.5 million people. The most exclusive was saffron, used as much for its vivid yellow-red color as for its flavor. Spices that have now fallen into obscurity in European cuisine include grains of paradise, a relative of cardamom which mostly replaced pepper in late medieval north French cooking, long pepper, mace, spikenard, galangal and cubeb.
Spain and Portugal were interested in seeking new routes to trade in spices and other valuable products from Asia. The control of trade routes and of the spice-producing regions was the main reason that Portuguese navigator Vasco da Gama sailed to India, arriving in 1498.[8] When da Gama discovered the pepper market in India, he was able to secure peppers for a much cheaper price than the ones demanded by Venice. At around the same time, Christopher Columbus returned from the New World and described to investors the new spices available there.
Another source of competition in the spice trade during the 15th and 16th century was the Ragusans from the maritime republic of Dubrovnik in southern Croatia.
The military prowess of Afonso de Albuquerque (1453–1515) allowed the Portuguese to take control of the sea routes to India. In 1506, he took the island of Socotra in the mouth of the Red Sea and, in 1507, Ormuz in the Persian Gulf. After becoming viceroy of the Indies, he took Goa in India in 1510 and Malacca on the Malay Peninsula in 1511. The Portuguese could now trade directly with Siam, China, and the Maluku Islands.
With the discovery of the New World came new spices, including allspice, chili peppers, vanilla, and chocolate. This development kept the spice trade, with America as a latecomer with its new seasonings, profitable well into the 19th century.
A spice may be available in several forms: fresh, whole dried, or pre-ground dried. Generally, spices are dried. Spices may be ground into a powder for convenience. A whole dried spice has the longest shelf life, so it can be purchased and stored in larger amounts, making it cheaper on a per-serving basis. A fresh spice, such as ginger, is usually more flavorful than its dried form, but fresh spices are more expensive and have a much shorter shelf life. Some spices are not always available either fresh or whole, for example turmeric, and often must be purchased in ground form. Small seeds, such as fennel and mustard seeds, are often used both whole and in powder form.
To grind a whole spice, the classic tool is a mortar and pestle. Less labor-intensive tools are more common now: a microplane or fine grater can be used to grind small amounts, while a coffee grinder is useful for larger amounts. A frequently used spice such as black pepper may merit storage in its own hand grinder or mill.
The flavor of a spice is derived in part from compounds (volatile oils) that oxidize or evaporate when exposed to air. Grinding a spice greatly increases its surface area and so increases the rates of oxidation and evaporation. Thus, flavor is maximized by storing a spice whole and grinding when needed. The shelf life of a whole dry spice is roughly two years; of a ground spice roughly six months. The "flavor life" of a ground spice can be much shorter. Ground spices are better stored away from light.
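To see why grinding has such a large effect, note that for a fixed volume of spice the total exposed surface area scales inversely with particle radius. The following is a minimal sketch, assuming idealized spherical particles and hypothetical sizes:

```python
def surface_area_gain(r_whole_mm, r_ground_mm):
    """Splitting a sphere of radius R into equal-volume grains of radius r yields
    N = (R/r)**3 grains with total area N * 4*pi*r**2 = 4*pi*R**3 / r,
    so the exposed area grows by a factor of R/r."""
    return r_whole_mm / r_ground_mm

# Hypothetical sizes: a ~2 mm peppercorn ground to ~0.05 mm powder grains
print(surface_area_gain(2.0, 0.05))   # 40.0 -> ~40x more surface exposed to air
```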
Some flavor elements in spices are soluble in water; many are soluble in oil or fat. As a general rule, the flavors from a spice take time to infuse into the food so spices are added early in preparation. This contrasts to herbs which are usually added late in preparation.
A study by the Food and Drug Administration of shipments of spices to the United States during fiscal years 2007–2009 showed that about 7% of the shipments were contaminated by Salmonella bacteria, some of it antibiotic-resistant. As most spices are cooked before being served, Salmonella contamination often has no effect, but some spices, particularly pepper, are often eaten raw and kept at the table for convenient use. Shipments from Mexico and India, a major producer, were the most frequently contaminated. However, with newly developed radiation sterilization methods, the risk of Salmonella contamination is now lower.
Because they tend to have strong flavors and are used in small quantities, spices tend to add few calories to food, even though many spices, especially those made from seeds, contain high proportions of fat, protein, and carbohydrate by weight. When used in larger quantities, however, spices can contribute a substantial amount of minerals and other micronutrients, including iron, magnesium, and calcium, to the diet. For example, a teaspoon of paprika contains about 1133 IU of vitamin A, which is over 20% of the recommended daily allowance specified by the US FDA.
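As a worked check of the paprika figure, assuming the FDA's former 5,000 IU daily value for vitamin A (an assumption; the source does not state the reference value):

```python
paprika_iu = 1133        # vitamin A in one teaspoon of paprika (figure from the text)
daily_value_iu = 5000    # assumption: the FDA's former 5,000 IU daily value for vitamin A

print(f"{paprika_iu / daily_value_iu:.0%}")   # -> 23%, i.e. "over 20%" as stated
```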
Most herbs and spices have substantial antioxidant activity, owing primarily to phenolic compounds, especially flavonoids, which influence nutrition through many pathways, including affecting the absorption of other nutrients. One study found cumin and fresh ginger to be highest in antioxidant activity. These antioxidants can also act as natural preservatives, preventing or slowing the spoilage of food, leading to a higher nutritional content in stored food.
India contributes 75% of global spice production.
The International Organization for Standardization addresses spices and condiments, along with related food additives, as part of the International Classification for Standards 67.220 series.
The Indian Institute of Spices Research in Kozhikode, Kerala, is devoted exclusively to conducting research for ten spice crops: black pepper, cardamom, cinnamon, clove, garcinia, ginger, nutmeg, paprika, turmeric, and vanilla.
| https://en.wikipedia.org/wiki?curid=26897 |
Spearmint
Spearmint, also known as garden mint, common mint, lamb mint and mackerel mint, is a species of mint, Mentha spicata, native to Europe and southern temperate Asia, extending from Ireland in the west to southern China in the east. It is naturalized in many other temperate parts of the world, including northern and southern Africa, North America and South America. It is used as a flavouring in food and herbal teas. The aromatic oil, called oil of spearmint, is also used as a flavouring and sometimes as a scent.
The species and its subspecies have many synonyms, including "Mentha crispa", "Mentha crispata" and "Mentha viridis".
Spearmint is a perennial herbaceous plant with variably hairless to hairy stems and foliage, and a wide-spreading fleshy underground rhizome from which it grows. The leaves are long and broad, with a serrated margin. The stem is square-shaped, a defining characteristic of the mint family of herbs. Spearmint produces flowers in slender spikes, each flower pink or white in colour. Spearmint flowers in the summer (from July to September in the northern hemisphere) and has relatively large seeds. The name 'spear' mint derives from the pointed leaf tips.
"Mentha spicata" varies considerably in leaf blade dimensions, the prominence of leaf veins, and pubescence.
"Mentha spicata" was first described scientifically by Carl Linnaeus in 1753. The epithet "spicata" means 'bearing a spike'. The species has two accepted subspecies, each of which has acquired a large number of synonyms:
The plant is a tetraploid species (2"n" = 48), which could be a result of hybridization and chromosome doubling. "Mentha longifolia" and "Mentha suaveolens" (2"n" = 24) are likely to be the contributing diploid species.
"Mentha spicata" hybridizes with other "Mentha" species, forming hybrids such as:
Mention of spearmint dates back to at least the 1st century AD, with references from the naturalist Pliny and mentions in the Bible. Further records show descriptions of mint in ancient mythology. Findings of early versions of toothpaste using mint in the 14th century suggest widespread domestication by this point. It was introduced into England by the Romans by the 5th century, and William Turner, the "Father of British Botany", mentioned mint as being good for the stomach. John Gerard's "Herbal" (1597) states that "It is good against watering eyes and all manner of break outs on the head and sores. It is applied with salt to the biting of mad dogs," that "They lay it on the stinging of wasps and bees with good success," and that "the smell rejoice the heart of man," for which cause it was strewn in chambers and places of recreation, pleasure and repose, where feasts and banquets are made.
Spearmint is documented as being an important cash crop in Connecticut during the period of the American Revolution, at which time mint teas were noted as being a popular drink due to them not being taxed.
Spearmint can readily adapt to grow in various types of soil. Spearmint tends to thrive with plenty of organic material in full sun to part shade. The plant is also known to be found in moist habitats such as swamps or creeks, where the soil is sand or clay.
Spearmint ideally thrives in soils that are deep and well drained, moist, rich in nutrients and organic matter, and of a crumbly texture, with a pH between 6.0 and 7.5.
Fungal diseases are common in spearmint. The two main diseases are rust and leaf spot. "Puccinia menthae" is the fungus that causes rust, which affects the leaves of spearmint by producing pustules that induce the leaves to fall off. Leaf spot is a fungal disease that occurs when "Alternaria alternata" is present on the spearmint leaves; the infection looks like circular dark spots on the top side of the leaf. Other fungi that cause disease in spearmint are "Rhizoctonia solani", "Verticillium dahliae", "Phoma strasseri", and "Erysiphe cichoracearum".
Nematode diseases in spearmint include root knot, caused by various "Meloidogyne" species, and root lesions, caused by "Pratylenchus" species.
Spearmint can be infected by tobacco ringspot virus, which can lead to stunted growth and deformation of the leaves. In China, spearmint has been seen with mosaic symptoms and deformed leaves, an indication that the plant can also be infected by cucumber mosaic virus and tomato aspermy virus.
Spearmint grows well in nearly all temperate climates. Gardeners often grow it in pots or planters due to its invasive, spreading rhizomes.
Spearmint leaves can be used fresh, dried, or frozen. They can also be preserved in salt, sugar, sugar syrup, alcohol, or oil. The leaves lose their aromatic appeal after the plant flowers. For drying, the plant is cut just before, or right as, the flowers open, about one-half to three-quarters of the way down the stalk (leaving smaller shoots room to grow). Some dispute exists as to what drying method works best; some prefer different materials (such as plastic or cloth) and different lighting conditions (such as darkness or sunlight).
Spearmint is used for its aromatic oil, called oil of spearmint. The most abundant compound in spearmint oil is "R"-(–)-carvone, which gives spearmint its distinctive smell. Spearmint oil also contains significant amounts of limonene, dihydrocarvone, and 1,8-cineol. Unlike oil of peppermint, oil of spearmint contains minimal amounts of menthol and menthone. It is used as a flavouring for toothpaste and confectionery, and is sometimes added to shampoos and soaps.
Spearmint has been used traditionally as a medicine for minor ailments such as fevers and digestive disorders. There is research on spearmint extracts in the treatment of gout and as an antiemetic.
Spearmint essential oil has had success as a larvicide against mosquitoes. Using spearmint as a larvicide would be a greener alternative to synthetic insecticides, given their toxicity and negative effects on the environment.
Used as a fumigant, spearmint essential oil is an effective insecticide against adult moths.
The main chemical component of spearmint is the terpenoid carvone, which has been shown to aid in the inhibition of tumors. Perillyl alcohol, an additional terpenoid found in lower concentrations in spearmint, positively affects the regulation of various cell substances involved in cell growth and differentiation.
Studies on spearmint have shown varying results on the antioxidant effects of the plant and its extracts: some have found considerable free-radical scavenging activity in spearmint essential oil, while others found no antioxidant activity in the essential oil but strong activity in spearmint methanolic extract. Antioxidant activity has been shown to be significantly higher in spearmint dried at lower rather than higher temperatures, suggested to be due to the degradation of phenolics at high temperatures. In experiments demonstrating antioxidant properties in spearmint oil, the major component, carvone, alone showed lower antioxidant activity.
Spearmint has been historically used for its antimicrobial activity, which is likely due to the high concentration of carvone. Its in vitro antibacterial activity has been compared to, and is even said to surpass, that of amoxicillin, penicillin, and streptomycin. Spearmint oil is found to have higher activity against Gram-positive bacteria compared to Gram-negative bacteria, which may be due to differing sensitivities to oils. The degree of antimicrobial activity varies with the type of microorganism tested.
Studies have found significant antiandrogen effects in spearmint, specifically following routine spearmint herbal tea ingestion. Antispasmodic effects have been displayed in spearmint oil and carvone, the main chemical component of spearmint.
Spearmint leaves are infused in water to make spearmint tea. Spearmint is an ingredient of Maghrebi mint tea. Grown in the mountainous regions of Morocco, this variety of mint possesses a clear, pungent, but mild aroma. Spearmint is an ingredient in several mixed drinks, such as the mojito and mint julep. Sweet tea, iced and flavoured with spearmint, is a summer tradition in the Southern United States. | https://en.wikipedia.org/wiki?curid=26899 |
Solar System
The Solar System is the gravitationally bound system of the Sun and the objects that orbit it, either directly or indirectly. Of the objects that orbit the Sun directly, the largest are the eight planets, with the remainder being smaller objects, the dwarf planets and small Solar System bodies. Of the objects that orbit the Sun indirectly—the moons—two are larger than the smallest planet, Mercury.
The Solar System formed 4.6 billion years ago from the gravitational collapse of a giant interstellar molecular cloud. The vast majority of the system's mass is in the Sun, with the majority of the remaining mass contained in Jupiter. The four smaller inner planets, Mercury, Venus, Earth and Mars, are terrestrial planets, being primarily composed of rock and metal. The four outer planets are giant planets, being substantially more massive than the terrestrials. The two largest, Jupiter and Saturn, are gas giants, being composed mainly of hydrogen and helium; the two outermost planets, Uranus and Neptune, are ice giants, being composed mostly of substances with relatively high melting points compared with hydrogen and helium, called volatiles, such as water, ammonia and methane. All eight planets have almost circular orbits that lie within a nearly flat disc called the ecliptic.
The Solar System also contains smaller objects. The asteroid belt, which lies between the orbits of Mars and Jupiter, mostly contains objects composed, like the terrestrial planets, of rock and metal. Beyond Neptune's orbit lie the Kuiper belt and scattered disc, which are populations of trans-Neptunian objects composed mostly of ices, and beyond them a newly discovered population of sednoids.
Within these populations, some objects are large enough to have rounded under their own gravity, though there is considerable debate as to how many there will prove to be.
Such objects are categorized as dwarf planets. Identified or accepted dwarf planets include the asteroid Ceres and the trans-Neptunian objects Pluto and Eris. In addition to these regions, various other small-body populations, including comets, centaurs and interplanetary dust clouds, freely travel between regions. Six of the planets, the six largest possible dwarf planets, and many of the smaller bodies are orbited by natural satellites, usually termed "moons" after the Moon. Each of the outer planets is encircled by planetary rings of dust and other small objects.
The solar wind, a stream of charged particles flowing outwards from the Sun, creates a bubble-like region in the interstellar medium known as the heliosphere. The heliopause is the point at which pressure from the solar wind is equal to the opposing pressure of the interstellar medium; it extends out to the edge of the scattered disc. The Oort cloud, which is thought to be the source for long-period comets, may also exist at a distance roughly a thousand times further than the heliosphere. The Solar System is located in the Orion Arm, 26,000 light-years from the center of the Milky Way galaxy.
For most of history, humanity did not recognize or understand the concept of the Solar System. Most people up to the Late Middle Ages–Renaissance believed Earth to be stationary at the centre of the universe and categorically different from the divine or ethereal objects that moved through the sky. Although the Greek philosopher Aristarchus of Samos had speculated on a heliocentric reordering of the cosmos, Nicolaus Copernicus was the first to develop a mathematically predictive heliocentric system.
In the 17th century, Galileo discovered that the Sun was marked with sunspots, and that Jupiter had four satellites in orbit around it. Christiaan Huygens followed on from Galileo's discoveries by discovering Saturn's moon Titan and the shape of the rings of Saturn. Edmond Halley realised in 1705 that repeated sightings of a comet were recording the same object, returning regularly once every 75–76 years. This was the first evidence that anything other than the planets orbited the Sun. Around this time (1704), the term "Solar System" first appeared in English. In 1838, Friedrich Bessel successfully measured a stellar parallax, an apparent shift in the position of a star created by Earth's motion around the Sun, providing the first direct, experimental proof of heliocentrism. Improvements in observational astronomy and the use of unmanned spacecraft have since enabled the detailed investigation of other bodies orbiting the Sun.
The principal component of the Solar System is the Sun, a G2 main-sequence star that contains 99.86% of the system's known mass and dominates it gravitationally. The Sun's four largest orbiting bodies, the giant planets, account for 99% of the remaining mass, with Jupiter and Saturn together comprising more than 90%. The remaining objects of the Solar System (including the four terrestrial planets, the dwarf planets, moons, asteroids, and comets) together comprise less than 0.002% of the Solar System's total mass.
Most large objects in orbit around the Sun lie near the plane of Earth's orbit, known as the ecliptic. The planets are very close to the ecliptic, whereas comets and Kuiper belt objects are frequently at significantly greater angles to it. As a result of the formation of the Solar System, planets and most other objects orbit the Sun in the same direction that the Sun is rotating (counter-clockwise, as viewed from above Earth's north pole). There are exceptions, such as Halley's Comet. Most of the larger moons orbit their planets in this "prograde" direction (with Triton being the largest "retrograde" exception) and most larger objects rotate themselves in the same direction (with Venus being a notable "retrograde" exception).
The overall structure of the charted regions of the Solar System consists of the Sun, four relatively small inner planets surrounded by a belt of mostly rocky asteroids, and four giant planets surrounded by the Kuiper belt of mostly icy objects. Astronomers sometimes informally divide this structure into separate regions. The inner Solar System includes the four terrestrial planets and the asteroid belt. The outer Solar System is beyond the asteroids, including the four giant planets. Since the discovery of the Kuiper belt, the outermost parts of the Solar System are considered a distinct region consisting of the objects beyond Neptune.
Most of the planets in the Solar System have secondary systems of their own, being orbited by planetary objects called natural satellites, or moons (two of which, Titan and Ganymede, are larger than the planet Mercury), and, in the case of the four giant planets, by planetary rings, thin bands of tiny particles that orbit them in unison. Most of the largest natural satellites are in synchronous rotation, with one face permanently turned toward their parent.
Kepler's laws of planetary motion describe the orbits of objects about the Sun. Following Kepler's laws, each object travels along an ellipse with the Sun at one focus. Objects closer to the Sun (with smaller semi-major axes) travel more quickly because they are more affected by the Sun's gravity. On an elliptical orbit, a body's distance from the Sun varies over the course of its year. A body's closest approach to the Sun is called its "perihelion", whereas its most distant point from the Sun is called its "aphelion". The orbits of the planets are nearly circular, but many comets, asteroids, and Kuiper belt objects follow highly elliptical orbits. The positions of the bodies in the Solar System can be predicted using numerical models.
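As an illustration of how such positions are computed, the sketch below (not from the source; standard two-body formulas, with Halley's Comet's published elements as test values) solves Kepler's equation numerically to obtain a body's distance from the Sun at a given time:

```python
import math

def heliocentric_distance_au(a_au, e, t_years, period_years):
    """Distance from the Sun at time t after perihelion, via Kepler's equation."""
    M = 2 * math.pi * (t_years % period_years) / period_years   # mean anomaly
    E = M                                                       # eccentric anomaly, initial guess
    for _ in range(30):                                         # Newton's method on M = E - e*sin(E)
        E -= (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
    return a_au * (1 - e * math.cos(E))                         # r = a(1 - e*cos E)

# Halley's Comet: a ~ 17.8 AU, e ~ 0.967, period ~ 75.3 years
print(heliocentric_distance_au(17.8, 0.967, 0.0, 75.3))    # ~0.6 AU at perihelion
print(heliocentric_distance_au(17.8, 0.967, 37.65, 75.3))  # ~35 AU near aphelion
```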
Although the Sun dominates the system by mass, it accounts for only about 2% of the angular momentum. The planets, dominated by Jupiter, account for most of the rest of the angular momentum due to the combination of their mass, orbit, and distance from the Sun, with a possibly significant contribution from comets.
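A rough consistency check of this angular-momentum budget can be made with rounded textbook values; the solar moment-of-inertia factor (~0.07) and 25-day rotation period below are assumptions of this sketch, and the exact share depends on the solar model:

```python
import math

G, M_SUN = 6.674e-11, 1.989e30   # SI units

def orbital_L(m_kg, a_m):
    """Orbital angular momentum for a near-circular orbit: L = m*v*r, v = sqrt(GM/a)."""
    return m_kg * math.sqrt(G * M_SUN / a_m) * a_m

giants = {"Jupiter": (1.90e27, 7.78e11), "Saturn": (5.68e26, 1.43e12),
          "Uranus": (8.68e25, 2.87e12), "Neptune": (1.02e26, 4.50e12)}
L_planets = sum(orbital_L(m, a) for m, a in giants.values())

# Sun's spin: L = I * omega, with I ~ 0.07 * M * R^2 (centrally condensed star, assumed)
R_SUN, P_ROT = 6.96e8, 25.4 * 86400
L_sun = 0.07 * M_SUN * R_SUN**2 * (2 * math.pi / P_ROT)

print(f"Sun's share: {L_sun / (L_sun + L_planets):.1%}")
# Prints ~0.6% with these rough inputs; detailed models put the figure near 2%.
```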
The Sun, which comprises nearly all the matter in the Solar System, is composed of roughly 98% hydrogen and helium. Jupiter and Saturn, which comprise nearly all the remaining matter, are also primarily composed of hydrogen and helium. A composition gradient exists in the Solar System, created by heat and light pressure from the Sun; those objects closer to the Sun, which are more affected by heat and light pressure, are composed of elements with high melting points. Objects farther from the Sun are composed largely of materials with lower melting points. The boundary in the Solar System beyond which those volatile substances could condense is known as the frost line, and it lies at roughly 5 AU from the Sun.
The objects of the inner Solar System are composed mostly of rock, the collective name for compounds with high melting points, such as silicates, iron or nickel, that remained solid under almost all conditions in the protoplanetary nebula. Jupiter and Saturn are composed mainly of gases, the astronomical term for materials with extremely low melting points and high vapour pressure, such as hydrogen, helium, and neon, which were always in the gaseous phase in the nebula. Ices, like water, methane, ammonia, hydrogen sulfide, and carbon dioxide, have melting points up to a few hundred kelvins. They can be found as ices, liquids, or gases in various places in the Solar System, whereas in the nebula they were either in the solid or gaseous phase. Icy substances comprise the majority of the satellites of the giant planets, as well as most of Uranus and Neptune (the so-called "ice giants") and the numerous small objects that lie beyond Neptune's orbit. Together, gases and ices are referred to as "volatiles".
The distance from Earth to the Sun is about 150 million kilometres (1 astronomical unit, AU). For comparison, the radius of the Sun is about 0.7 million kilometres. Thus, the Sun occupies about 0.00001% (10⁻⁵ %) of the volume of a sphere with a radius the size of Earth's orbit, whereas Earth's volume is roughly one millionth (10⁻⁶) that of the Sun. Jupiter, the largest planet, is 5.2 AU from the Sun and has a radius of about 71,000 km, whereas the most distant planet, Neptune, is 30 AU from the Sun.
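These volume figures follow directly from cubing the radius ratios; a quick check using the values above:

```python
R_SUN_KM, R_EARTH_KM, AU_KM = 6.96e5, 6.371e3, 1.496e8

print(f"{(R_SUN_KM / AU_KM) ** 3:.1e}")      # ~1.0e-07, i.e. the 0.00001% figure
print(f"{(R_EARTH_KM / R_SUN_KM) ** 3:.1e}") # ~7.7e-07, roughly one millionth
```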
With a few exceptions, the farther a planet or belt is from the Sun, the larger the distance between its orbit and the orbit of the next nearer object to the Sun. For example, Venus is approximately 0.33 AU farther out from the Sun than Mercury, whereas Saturn is 4.3 AU out from Jupiter, and Neptune lies 10.5 AU out from Uranus. Attempts have been made to determine a relationship between these orbital distances (for example, the Titius–Bode law), but no such theory has been accepted. The images at the beginning of this section show the orbits of the various constituents of the Solar System on different scales.
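The Titius–Bode relation mentioned above is commonly written as a = 0.4 + 0.3 × 2ⁿ AU. A short sketch (not from the source) showing both its early successes and its failure at Neptune, one reason the rule is no longer accepted:

```python
def titius_bode_au(n):
    """Titius-Bode relation: a = 0.4 + 0.3 * 2**n AU (Mercury takes the bare 0.4)."""
    return 0.4 if n is None else 0.4 + 0.3 * 2 ** n

bodies = [("Mercury", None, 0.39), ("Venus", 0, 0.72), ("Earth", 1, 1.00),
          ("Mars", 2, 1.52), ("Ceres", 3, 2.77), ("Jupiter", 4, 5.20),
          ("Saturn", 5, 9.54), ("Uranus", 6, 19.2), ("Neptune", 7, 30.1)]

for name, n, actual_au in bodies:
    print(f"{name:8s} predicted {titius_bode_au(n):6.2f} AU, actual {actual_au:6.2f} AU")
# Fits well out to Uranus, but predicts 38.8 AU where Neptune actually orbits at 30.1 AU.
```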
Some Solar System models attempt to convey the relative scales involved in the Solar System on human terms. Some are small in scale (and may be mechanical—called orreries)—whereas others extend across cities or regional areas. The largest such scale model, the Sweden Solar System, uses the 110-metre (361 ft) Ericsson Globe in Stockholm as its substitute Sun, and, following the scale, Jupiter is a 7.5-metre (25-foot) sphere at Stockholm Arlanda Airport, 40 km (25 mi) away, whereas the farthest current object, Sedna, is a 10 cm (4 in) sphere in Luleå, 912 km (567 mi) away.
If the Sun–Neptune distance is scaled to 100 metres, then the Sun would be about 3 cm in diameter (roughly two-thirds the diameter of a golf ball), the giant planets would be all smaller than about 3 mm, and Earth's diameter along with that of the other terrestrial planets would be smaller than a flea (0.3 mm) at this scale.
The Solar System formed 4.568 billion years ago from the gravitational collapse of a region within a large molecular cloud. This initial cloud was likely several light-years across and probably birthed several stars. As is typical of molecular clouds, this one consisted mostly of hydrogen, with some helium, and small amounts of heavier elements fused by previous generations of stars. As the region that would become the Solar System, known as the pre-solar nebula, collapsed, conservation of angular momentum caused it to rotate faster. The centre, where most of the mass collected, became increasingly hotter than the surrounding disc. As the contracting nebula rotated faster, it began to flatten into a protoplanetary disc with a diameter of roughly 200 AU and a hot, dense protostar at the centre. The planets formed by accretion from this disc, in which dust and gas gravitationally attracted each other, coalescing to form ever larger bodies. Hundreds of protoplanets may have existed in the early Solar System, but they either merged or were destroyed, leaving the planets, dwarf planets, and leftover minor bodies.
Due to their higher boiling points, only metals and silicates could exist in solid form in the warm inner Solar System close to the Sun, and these would eventually form the rocky planets of Mercury, Venus, Earth, and Mars. Because metallic elements only comprised a very small fraction of the solar nebula, the terrestrial planets could not grow very large. The giant planets (Jupiter, Saturn, Uranus, and Neptune) formed further out, beyond the frost line, the point between the orbits of Mars and Jupiter where material is cool enough for volatile icy compounds to remain solid. The ices that formed these planets were more plentiful than the metals and silicates that formed the terrestrial inner planets, allowing them to grow massive enough to capture large atmospheres of hydrogen and helium, the lightest and most abundant elements. Leftover debris that never became planets congregated in regions such as the asteroid belt, Kuiper belt, and Oort cloud. The Nice model is an explanation for the creation of these regions and how the outer planets could have formed in different positions and migrated to their current orbits through various gravitational interactions.
Within 50 million years, the pressure and density of hydrogen in the centre of the protostar became great enough for it to begin thermonuclear fusion. The temperature, reaction rate, pressure, and density increased until hydrostatic equilibrium was achieved: the thermal pressure equalled the force of gravity. At this point, the Sun became a main-sequence star. The main-sequence phase, from beginning to end, will last about 10 billion years for the Sun compared to around two billion years for all other phases of the Sun's pre-remnant life combined. Solar wind from the Sun created the heliosphere and swept away the remaining gas and dust from the protoplanetary disc into interstellar space, ending the planetary formation process. The Sun is growing brighter; early in its main-sequence life its brightness was 70% that of what it is today.
The Solar System will remain roughly as we know it today until the hydrogen in the core of the Sun has been entirely converted to helium, which will occur roughly 5 billion years from now. This will mark the end of the Sun's main-sequence life. At this time, the core of the Sun will contract with hydrogen fusion occurring along a shell surrounding the inert helium, and the energy output will be much greater than at present. The outer layers of the Sun will expand to roughly 260 times its current diameter, and the Sun will become a red giant. Because of its vastly increased surface area, the surface of the Sun will be considerably cooler (2,600 K at its coolest) than it is on the main sequence. The expanding Sun is expected to vaporize Mercury and render Earth uninhabitable. Eventually, the core will be hot enough for helium fusion; the Sun will burn helium for a fraction of the time it burned hydrogen in the core. The Sun is not massive enough to commence the fusion of heavier elements, and nuclear reactions in the core will dwindle. Its outer layers will move away into space, leaving a white dwarf, an extraordinarily dense object, half the original mass of the Sun but only the size of Earth. The ejected outer layers will form what is known as a planetary nebula, returning some of the material that formed the Sun—but now enriched with heavier elements like carbon—to the interstellar medium.
The Sun is the Solar System's star and by far its most massive component. Its large mass (332,900 Earth masses), which comprises 99.86% of all the mass in the Solar System, produces temperatures and densities in its core high enough to sustain nuclear fusion of hydrogen into helium, making it a main-sequence star. This releases an enormous amount of energy, mostly radiated into space as electromagnetic radiation peaking in visible light.
The Sun is a G2-type main-sequence star. Hotter main-sequence stars are more luminous. The Sun's temperature is intermediate between that of the hottest stars and that of the coolest stars. Stars brighter and hotter than the Sun are rare, whereas substantially dimmer and cooler stars, known as red dwarfs, make up 85% of the stars in the Milky Way.
The Sun is a population I star; it has a higher abundance of elements heavier than hydrogen and helium ("metals" in astronomical parlance) than the older population II stars. Elements heavier than hydrogen and helium were formed in the cores of ancient and exploding stars, so the first generation of stars had to die before the Universe could be enriched with these atoms. The oldest stars contain few metals, whereas stars born later have more. This high metallicity is thought to have been crucial to the Sun's development of a planetary system because the planets form from the accretion of "metals".
The vast majority of the Solar System consists of a near-vacuum known as the interplanetary medium. Along with light, the Sun radiates a continuous stream of charged particles (a plasma) known as the solar wind. This stream of particles spreads outwards at roughly 1.5 million kilometres per hour, creating a tenuous atmosphere that permeates the interplanetary medium out to at least 100 AU. Activity on the Sun's surface, such as solar flares and coronal mass ejections, disturbs the heliosphere, creating space weather and causing geomagnetic storms. The largest structure within the heliosphere is the heliospheric current sheet, a spiral form created by the actions of the Sun's rotating magnetic field on the interplanetary medium.
Earth's magnetic field stops its atmosphere from being stripped away by the solar wind. Venus and Mars do not have magnetic fields, and as a result the solar wind is causing their atmospheres to gradually bleed away into space. Coronal mass ejections and similar events blow a magnetic field and huge quantities of material from the surface of the Sun. The interaction of this magnetic field and material with Earth's magnetic field funnels charged particles into Earth's upper atmosphere, where its interactions create aurorae seen near the magnetic poles.
The heliosphere and planetary magnetic fields (for those planets that have them) partially shield the Solar System from high-energy interstellar particles called cosmic rays. The density of cosmic rays in the interstellar medium and the strength of the Sun's magnetic field change on very long timescales, so the level of cosmic-ray penetration in the Solar System varies, though by how much is unknown.
The interplanetary medium is home to at least two disc-like regions of cosmic dust. The first, the zodiacal dust cloud, lies in the inner Solar System and causes the zodiacal light. It was likely formed by collisions within the asteroid belt brought on by gravitational interactions with the planets. The second dust cloud extends from about 10 AU to about 40 AU, and was probably created by similar collisions within the Kuiper belt.
The inner Solar System is the region comprising the terrestrial planets and the asteroid belt. Composed mainly of silicates and metals, the objects of the inner Solar System are relatively close to the Sun; the radius of this entire region is less than the distance between the orbits of Jupiter and Saturn. This region is also within the frost line, which is a little less than 5 AU (about 700 million km) from the Sun.
The four terrestrial or inner planets have dense, rocky compositions, few or no moons, and no ring systems. They are composed largely of refractory minerals, such as the silicates, which form their crusts and mantles, and metals, such as iron and nickel, which form their cores. Three of the four inner planets (Venus, Earth and Mars) have atmospheres substantial enough to generate weather; all have impact craters and tectonic surface features, such as rift valleys and volcanoes. The term "inner planet" should not be confused with "inferior planet", which designates those planets that are closer to the Sun than Earth is (i.e. Mercury and Venus).
Mercury (0.4 AU from the Sun) is the closest planet to the Sun and, on average, the closest to all seven other planets. The smallest planet in the Solar System (radius about 2,440 km), Mercury has no natural satellites. Besides impact craters, its only known geological features are lobed ridges or rupes that were probably produced by a period of contraction early in its history. Mercury's very tenuous atmosphere consists of atoms blasted off its surface by the solar wind. Its relatively large iron core and thin mantle have not yet been adequately explained. Hypotheses include that its outer layers were stripped off by a giant impact, or that it was prevented from fully accreting by the young Sun's energy.
Venus (0.7 AU from the Sun) is close in size to Earth (radius about 6,050 km) and, like Earth, has a thick silicate mantle around an iron core, a substantial atmosphere, and evidence of internal geological activity. It is much drier than Earth, and its atmosphere is ninety times as dense. Venus has no natural satellites. It is the hottest planet, with surface temperatures over 400 °C, most likely due to the amount of greenhouse gases in the atmosphere. No definitive evidence of current geological activity has been detected on Venus, but it has no magnetic field that would prevent depletion of its substantial atmosphere, which suggests that its atmosphere is being replenished by volcanic eruptions.
Earth (1 AU from the Sun) is the largest and densest of the inner planets, the only one known to have current geological activity, and the only place where life is known to exist. Its liquid hydrosphere is unique among the terrestrial planets, and it is the only planet where plate tectonics has been observed. Earth's atmosphere is radically different from those of the other planets, having been altered by the presence of life to contain 21% free oxygen. It has one natural satellite, the Moon, the only large satellite of a terrestrial planet in the Solar System.
Mars (1.5 AU from the Sun) is smaller than Earth and Venus (radius about 3,390 km). It has an atmosphere of mostly carbon dioxide with a surface pressure of 6.1 millibars (roughly 0.6% of that of Earth). Its surface, peppered with vast volcanoes, such as Olympus Mons, and rift valleys, such as Valles Marineris, shows geological activity that may have persisted until as recently as 2 million years ago. Its red colour comes from iron oxide (rust) in its soil. Mars has two tiny natural satellites (Deimos and Phobos) thought to be either captured asteroids or ejected debris from a massive impact early in Mars's history.
Asteroids, except for the largest, Ceres, are classified as small Solar System bodies and are composed mainly of refractory rocky and metallic minerals, with some ice. They range from a few metres to hundreds of kilometres in size. Asteroids smaller than one metre are usually called meteoroids and micrometeoroids (grain-sized), depending on different, somewhat arbitrary definitions.
The asteroid belt occupies the orbit between Mars and Jupiter, between 2.3 and 3.3 AU from the Sun. It is thought to be remnants from the Solar System's formation that failed to coalesce because of the gravitational interference of Jupiter. The asteroid belt contains tens of thousands, possibly millions, of objects over one kilometre in diameter. Despite this, the total mass of the asteroid belt is unlikely to be more than a thousandth of that of Earth. The asteroid belt is very sparsely populated; spacecraft routinely pass through without incident.
Ceres (2.77 AU) is the largest asteroid, a protoplanet, and a dwarf planet. It has a diameter of slightly under 1,000 km and a mass large enough for its own gravity to pull it into a spherical shape. Ceres was considered a planet when it was discovered in 1801 and was reclassified as an asteroid in the 1850s as further observations revealed additional asteroids. It was classified as a dwarf planet in 2006 when the definition of a planet was created.
Asteroids in the asteroid belt are divided into asteroid groups and families based on their orbital characteristics. Asteroid moons are asteroids that orbit larger asteroids. They are not as clearly distinguished as planetary moons, sometimes being almost as large as their partners. The asteroid belt also contains main-belt comets, which may have been the source of Earth's water.
Jupiter trojans are located in either of Jupiter's L4 or L5 points (gravitationally stable regions leading and trailing a planet in its orbit); the term is also used for small bodies in any other planetary or satellite Lagrange point. Hilda asteroids are in a 2:3 resonance with Jupiter; that is, they go around the Sun three times for every two Jupiter orbits.
The inner Solar System also contains near-Earth asteroids, many of which cross the orbits of the inner planets. Some of them are potentially hazardous objects.
The outer region of the Solar System is home to the giant planets and their large moons. The centaurs and many short-period comets also orbit in this region. Due to their greater distance from the Sun, the solid objects in the outer Solar System contain a higher proportion of volatiles, such as water, ammonia, and methane, than those of the inner Solar System because the lower temperatures allow these compounds to remain solid.
The four outer planets, or giant planets (sometimes called Jovian planets), collectively make up 99% of the mass known to orbit the Sun. Jupiter and Saturn are together more than 400 times the mass of Earth and consist overwhelmingly of hydrogen and helium. Uranus and Neptune are far less massive, less than 20 Earth masses each, and are composed primarily of ices. For these reasons, some astronomers suggest they belong in their own category, ice giants. All four giant planets have rings, although only Saturn's ring system is easily observed from Earth. The term "superior planet" designates planets outside Earth's orbit and thus includes both the outer planets and Mars.
Jupiter (5.2 AU), at 318 Earth masses, is 2.5 times the mass of all the other planets put together. It is composed largely of hydrogen and helium. Jupiter's strong internal heat creates semi-permanent features in its atmosphere, such as cloud bands and the Great Red Spot. Jupiter has 79 known satellites. The four largest, Ganymede, Callisto, Io, and Europa, show similarities to the terrestrial planets, such as volcanism and internal heating. Ganymede, the largest satellite in the Solar System, is larger than Mercury.
Saturn (9.5 AU), distinguished by its extensive ring system, has several similarities to Jupiter, such as its atmospheric composition and magnetosphere. Although Saturn has 60% of Jupiter's volume, it is less than a third as massive, at 95 Earth masses. Saturn is the only planet of the Solar System that is less dense than water. The rings of Saturn are made up of small ice and rock particles. Saturn has 82 confirmed satellites composed largely of ice. Two of these, Titan and Enceladus, show signs of geological activity. Titan, the second-largest moon in the Solar System, is larger than Mercury and the only satellite in the Solar System with a substantial atmosphere.
Uranus (19.2 AU), at 14 Earth masses, is the lightest of the outer planets. Uniquely among the planets, it orbits the Sun on its side; its axial tilt is over ninety degrees to the ecliptic. It has a much colder core than the other giant planets and radiates very little heat into space. Uranus has 27 known satellites, the largest ones being Titania, Oberon, Umbriel, Ariel, and Miranda.
Neptune (30.1 AU), though slightly smaller than Uranus, is more massive (17 Earth masses) and hence more dense. It radiates more internal heat, but not as much as Jupiter or Saturn. Neptune has 14 known satellites. The largest, Triton, is geologically active, with geysers of liquid nitrogen. Triton is the only large satellite with a retrograde orbit. Neptune is accompanied in its orbit by several minor planets, termed Neptune trojans, that are in 1:1 resonance with it.
The centaurs are icy comet-like bodies whose orbits have semi-major axes greater than Jupiter's (5.5 AU) and less than Neptune's (30 AU). The largest known centaur, 10199 Chariklo, has a diameter of about 250 km. The first centaur discovered, 2060 Chiron, has also been classified as comet (95P) because it develops a coma just as comets do when they approach the Sun.
Comets are small Solar System bodies, typically only a few kilometres across, composed largely of volatile ices. They have highly eccentric orbits, generally a perihelion within the orbits of the inner planets and an aphelion far beyond Pluto. When a comet enters the inner Solar System, its proximity to the Sun causes its icy surface to sublimate and ionise, creating a coma: a long tail of gas and dust often visible to the naked eye.
Short-period comets have orbits lasting less than two hundred years. Long-period comets have orbits lasting thousands of years. Short-period comets are thought to originate in the Kuiper belt, whereas long-period comets, such as Hale–Bopp, are thought to originate in the Oort cloud. Many comet groups, such as the Kreutz Sungrazers, formed from the breakup of a single parent. Some comets with hyperbolic orbits may originate outside the Solar System, but determining their precise orbits is difficult. Old comets that have had most of their volatiles driven out by solar warming are often categorised as asteroids.
Beyond the orbit of Neptune lies the area of the "trans-Neptunian region", with the doughnut-shaped Kuiper belt, home of Pluto and several other dwarf planets, and an overlapping disc of scattered objects, which is tilted toward the plane of the Solar System and reaches much further out than the Kuiper belt. The entire region is still largely unexplored. It appears to consist overwhelmingly of many thousands of small worlds—the largest having a diameter only a fifth that of Earth and a mass far smaller than that of the Moon—composed mainly of rock and ice. This region is sometimes described as the "third zone of the Solar System", enclosing the inner and the outer Solar System.
The Kuiper belt is a great ring of debris similar to the asteroid belt, but consisting mainly of objects composed primarily of ice. It extends between 30 and 50 AU from the Sun. Though it is estimated to contain anything from dozens to thousands of dwarf planets, it is composed mainly of small Solar System bodies. Many of the larger Kuiper belt objects, such as Quaoar, Varuna, and Orcus, may prove to be dwarf planets with further data. There are estimated to be over 100,000 Kuiper belt objects with a diameter greater than 50 km, but the total mass of the Kuiper belt is thought to be only a tenth or even a hundredth the mass of Earth. Many Kuiper belt objects have multiple satellites, and most have orbits that take them outside the plane of the ecliptic.
The Kuiper belt can be roughly divided into the "classical" belt and the resonances. Resonances are orbits linked to that of Neptune (e.g. twice for every three Neptune orbits, or once for every two). The first resonance begins within the orbit of Neptune itself. The classical belt consists of objects having no resonance with Neptune, and extends from roughly 39.4 AU to 47.7 AU. Members of the classical Kuiper belt are classified as cubewanos, after the first of their kind to be discovered, 15760 Albion (which previously had the provisional designation 1992 QB1), and are still in near primordial, low-eccentricity orbits.
The dwarf planet Pluto (39 AU average) is the largest known object in the Kuiper belt. When discovered in 1930, it was considered to be the ninth planet; this changed in 2006 with the adoption of a formal definition of planet. Pluto has a relatively eccentric orbit inclined 17 degrees to the ecliptic plane and ranging from 29.7 AU from the Sun at perihelion (within the orbit of Neptune) to 49.5 AU at aphelion. Pluto has a 3:2 resonance with Neptune, meaning that Pluto orbits twice round the Sun for every three Neptunian orbits. Kuiper belt objects whose orbits share this resonance are called plutinos.
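The quoted numbers are mutually consistent: combining the 3:2 resonance with Kepler's third law (P² = a³, with P in years and a in AU) recovers Pluto's period and average distance. A minimal check, assuming Neptune's ~164.8-year period:

```python
P_NEPTUNE = 164.8                 # Neptune's orbital period in years

P_pluto = P_NEPTUNE * 3 / 2       # 3:2 resonance: two Pluto orbits per three of Neptune's
a_pluto = P_pluto ** (2 / 3)      # Kepler's third law, P in years and a in AU
print(f"{P_pluto:.0f} yr, {a_pluto:.1f} AU")   # ~247 yr and ~39.4 AU, matching the 39 AU average
```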
Charon, the largest of Pluto's moons, is sometimes described as part of a binary system with Pluto, as the two bodies orbit a barycentre above their surfaces (i.e. they appear to "orbit each other"). Beyond Charon, four much smaller moons, Styx, Nix, Kerberos, and Hydra, orbit within the system.
Makemake (45.79 AU average), although smaller than Pluto, is the largest known object in the "classical" Kuiper belt (that is, a Kuiper belt object not in a confirmed resonance with Neptune). Makemake is the brightest object in the Kuiper belt after Pluto. In 2008 it was named under the expectation that it would prove to be a dwarf planet. Its orbit is far more inclined than Pluto's, at 29°.
Haumea (43.13 AU average) is in an orbit similar to Makemake's, except that it is in a temporary 7:12 orbital resonance with Neptune.
It was named under the same expectation that it would prove to be a dwarf planet, though subsequent observations have indicated that it may not be a dwarf planet after all.
The scattered disc, which overlaps the Kuiper belt but extends out to about 200 AU, is thought to be the source of short-period comets. Scattered-disc objects are thought to have been ejected into erratic orbits by the gravitational influence of Neptune's early outward migration. Most scattered disc objects (SDOs) have perihelia within the Kuiper belt but aphelia far beyond it (some more than 150 AU from the Sun). SDOs' orbits are also highly inclined to the ecliptic plane and are often almost perpendicular to it. Some astronomers consider the scattered disc to be merely another region of the Kuiper belt and describe scattered disc objects as "scattered Kuiper belt objects". Some astronomers also classify centaurs as inward-scattered Kuiper belt objects along with the outward-scattered residents of the scattered disc.
Eris (68 AU average) is the largest known scattered disc object, and caused a debate about what constitutes a planet, because it is 25% more massive than Pluto and about the same diameter. It is the most massive of the known dwarf planets. It has one known moon, Dysnomia. Like Pluto, its orbit is highly eccentric, with a perihelion of 38.2 AU (roughly Pluto's distance from the Sun) and an aphelion of 97.6 AU, and steeply inclined to the ecliptic plane.
The point at which the Solar System ends and interstellar space begins is not precisely defined because its outer boundaries are shaped by two separate forces: the solar wind and the Sun's gravity. The limit of the solar wind's influence is roughly four times Pluto's distance from the Sun; this "heliopause", the outer boundary of the heliosphere, is considered the beginning of the interstellar medium. The Sun's Hill sphere, the effective range of its gravitational dominance, is thought to extend up to a thousand times farther and encompasses the theorized Oort cloud.
The heliosphere is a stellar-wind bubble, a region of space dominated by the Sun, whose solar wind, a stream of charged particles radiating outward at roughly 400 km/s, travels until it collides with the wind of the interstellar medium.
The collision occurs at the "termination shock", which is roughly 80–100 AU from the Sun upwind of the interstellar medium and roughly 200 AU from the Sun downwind. Here the wind slows dramatically, condenses and becomes more turbulent, forming a great oval structure known as the "heliosheath". This structure is thought to look and behave very much like a comet's tail, extending outward for a further 40 AU on the upwind side but tailing many times that distance downwind; evidence from "Cassini" and Interstellar Boundary Explorer spacecraft has suggested that it is forced into a bubble shape by the constraining action of the interstellar magnetic field.
The outer boundary of the heliosphere, the "heliopause", is the point at which the solar wind finally terminates and is the beginning of interstellar space. "Voyager 1" and "Voyager 2" are reported to have passed the termination shock and entered the heliosheath, at 94 and 84 AU from the Sun, respectively. "Voyager 1" is reported to have crossed the heliopause in August 2012.
The shape and form of the outer edge of the heliosphere is likely affected by the fluid dynamics of interactions with the interstellar medium as well as solar magnetic fields prevailing to the south, e.g. it is bluntly shaped with the northern hemisphere extending 9 AU farther than the southern hemisphere. Beyond the heliopause, at around 230 AU, lies the bow shock, a plasma "wake" left by the Sun as it travels through the Milky Way.
Due to a lack of data, conditions in local interstellar space are not known for certain. It is expected that NASA's Voyager spacecraft, as they pass the heliopause, will transmit valuable data on radiation levels and solar wind to Earth. How well the heliosphere shields the Solar System from cosmic rays is poorly understood. A NASA-funded team has developed a concept of a "Vision Mission" dedicated to sending a probe to the heliosphere.
90377 Sedna (520 AU average) is a large, reddish object with a gigantic, highly elliptical orbit that takes it from about 76 AU at perihelion to 940 AU at aphelion and takes 11,400 years to complete. Mike Brown, who discovered the object in 2003, asserts that it cannot be part of the scattered disc or the Kuiper belt because its perihelion is too distant to have been affected by Neptune's migration. He and other astronomers consider it to be the first in an entirely new population, sometimes termed "distant detached objects" (DDOs), which also may include an object with a perihelion of 45 AU, an aphelion of 415 AU, and an orbital period of 3,420 years. Brown terms this population the "inner Oort cloud" because it may have formed through a similar process, although it is far closer to the Sun. Sedna is very likely a dwarf planet, though its shape has yet to be determined. The second unequivocally detached object, with a perihelion farther than Sedna's at roughly 81 AU, is 2012 VP113, discovered in 2012. Its aphelion is only about half that of Sedna's, at 400–500 AU.
The Oort cloud is a hypothetical spherical cloud of up to a trillion icy objects that is thought to be the source for all long-period comets and to surround the Solar System at roughly 50,000 AU (around 1 light-year (ly)), and possibly to as far as 100,000 AU (1.87 ly). It is thought to be composed of comets that were ejected from the inner Solar System by gravitational interactions with the outer planets. Oort cloud objects move very slowly, and can be perturbed by infrequent events, such as collisions, the gravitational effects of a passing star, or the galactic tide, the tidal force exerted by the Milky Way.
Much of the Solar System is still unknown. The Sun's gravitational field is estimated to dominate the gravitational forces of surrounding stars out to about two light years (125,000 AU). Lower estimates for the radius of the Oort cloud, by contrast, do not place it farther than 50,000 AU. Despite discoveries such as Sedna, the region between the Kuiper belt and the Oort cloud, an area tens of thousands of AU in radius, is still virtually unmapped. There are also ongoing studies of the region between Mercury and the Sun. Objects may yet be discovered in the Solar System's uncharted regions.
Currently, the furthest known objects, such as Comet West, have aphelia around 70,000 AU from the Sun, but as the Oort cloud becomes better known, this may change.
The Solar System is located in the Milky Way, a barred spiral galaxy with a diameter of about 100,000 light-years containing more than 100 billion stars. The Sun resides in one of the Milky Way's outer spiral arms, known as the Orion–Cygnus Arm or Local Spur. The Sun lies between 25,000 and 28,000 light-years from the Galactic Centre, and its speed within the Milky Way is about 220 km/s, so that it completes one revolution every 225–250 million years. This revolution is known as the Solar System's galactic year. The solar apex, the direction of the Sun's path through interstellar space, is near the constellation Hercules in the direction of the current location of the bright star Vega. The plane of the ecliptic lies at an angle of about 60° to the galactic plane.
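The galactic year quoted above follows from the given distance and speed, assuming a roughly circular orbit; a back-of-the-envelope check:

```python
import math

LY_KM = 9.461e12                     # kilometres per light-year
r_km = 26_000 * LY_KM                # Sun's distance from the Galactic Centre
v_km_s = 220                         # Sun's orbital speed

period_s = 2 * math.pi * r_km / v_km_s
period_myr = period_s / (3.156e7 * 1e6)     # ~3.156e7 seconds per year
print(f"{period_myr:.0f} million years")    # ~223 Myr, consistent with the 225-250 Myr range
```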
The Solar System's location in the Milky Way is a factor in the evolutionary history of life on Earth. Its orbit is close to circular, and orbits near the Sun are at roughly the same speed as that of the spiral arms. Therefore, the Sun passes through arms only rarely. Because spiral arms are home to a far larger concentration of supernovae, gravitational instabilities, and radiation that could disrupt the Solar System, this has given Earth long periods of stability for life to evolve. The Solar System also lies well outside the star-crowded environs of the galactic centre. Near the centre, gravitational tugs from nearby stars could perturb bodies in the Oort cloud and send many comets into the inner Solar System, producing collisions with potentially catastrophic implications for life on Earth. The intense radiation of the galactic centre could also interfere with the development of complex life. Even at the Solar System's current location, some scientists have speculated that recent supernovae may have adversely affected life in the last 35,000 years, by flinging pieces of expelled stellar core towards the Sun, as radioactive dust grains and larger, comet-like bodies.
The Solar System is in the Local Interstellar Cloud or Local Fluff. It is thought to be near the neighbouring G-Cloud, but it is not known whether the Solar System is embedded in the Local Interstellar Cloud or lies in the region where the Local Interstellar Cloud and the G-Cloud are interacting. The Local Interstellar Cloud is an area of denser cloud in an otherwise sparse region known as the Local Bubble, an hourglass-shaped cavity in the interstellar medium roughly 300 light-years (ly) across. The bubble is suffused with high-temperature plasma, which suggests that it is the product of several recent supernovae.
There are relatively few stars within ten light-years of the Sun. The closest is the triple star system Alpha Centauri, which is about 4.4 light-years away. Alpha Centauri A and B are a closely tied pair of Sun-like stars, whereas the small red dwarf, Proxima Centauri, orbits the pair at a distance of 0.2 light-year. In 2016, a potentially habitable exoplanet was confirmed to be orbiting Proxima Centauri, called Proxima Centauri b, the closest confirmed exoplanet to the Sun. The stars next closest to the Sun are the red dwarfs Barnard's Star (at 5.9 ly), Wolf 359 (7.8 ly), and Lalande 21185 (8.3 ly).
The largest nearby star is Sirius, a bright main-sequence star roughly 8.6 light-years away with roughly twice the Sun's mass, orbited by a white dwarf, Sirius B. The nearest brown dwarfs are the binary Luhman 16 system at 6.6 light-years. Other systems within ten light-years are the binary red-dwarf system Luyten 726-8 (8.7 ly) and the solitary red dwarf Ross 154 (9.7 ly). The closest solitary Sun-like star to the Solar System is Tau Ceti at 11.9 light-years. It has roughly 80% of the Sun's mass but only 60% of its luminosity. The closest known free-floating planetary-mass object to the Sun is WISE 0855−0714, an object with a mass less than 10 Jupiter masses roughly 7 light-years away.
Compared to many other planetary systems, the Solar System stands out in lacking planets interior to the orbit of Mercury. The known Solar System also lacks super-Earths (Planet Nine could be a super-Earth beyond the known Solar System). Uncommonly, it has only small rocky planets and large gas giants; in other systems, planets of intermediate size—both rocky and gaseous—are typical, so there is no "gap" like the one seen between the sizes of Earth and Neptune (which has a radius 3.8 times that of Earth). Also, such super-Earths commonly have closer orbits than Mercury's. This led to the hypothesis that all planetary systems start with many close-in planets and that a sequence of collisions typically consolidates their mass into a few larger planets, but that in the case of the Solar System the collisions led to the destruction and ejection of those early close-in planets.
The orbits of the Solar System's planets are nearly circular; compared to planets in many other systems, they have smaller orbital eccentricities. Although there are attempts to explain this partly as a bias of the radial-velocity detection method and partly as the result of long interactions among a fairly large number of planets, the exact causes remain undetermined.
This section is a sampling of Solar System bodies, selected for size and quality of imagery, and sorted by volume. Some omitted objects are larger than the ones included here, notably Eris, because these have not been imaged in high quality. | https://en.wikipedia.org/wiki?curid=26903 |
Silurian
The Silurian is a geologic period and system spanning 24.6 million years from the end of the Ordovician Period, at 443.8 million years ago (Mya), to the beginning of the Devonian Period, 419.2 Mya. The Silurian is the shortest period of the Paleozoic Era. As with other geologic periods, the rock beds that define the period's start and end are well identified, but the exact dates are uncertain by a few million years. The base of the Silurian is set at a series of major Ordovician–Silurian extinction events, when up to 60% of marine genera were wiped out.
A significant evolutionary milestone during the Silurian was the diversification of jawed fish and bony fish. Multi-cellular life also began to appear on land in the form of small, bryophyte-like and vascular plants that grew beside lakes, streams, and coastlines, and terrestrial arthropods are also first found on land during the Silurian. However, terrestrial life would not greatly diversify and affect the landscape until the Devonian.
The Silurian system was first identified by British geologist Roderick Murchison, who was examining fossil-bearing sedimentary rock strata in south Wales in the early 1830s. He named the sequences for a Celtic tribe of Wales, the Silures, inspired by his friend Adam Sedgwick, who had named the period of his study the Cambrian, from the Latin name for Wales. | https://en.wikipedia.org/wiki?curid=26904 |
Siege
A siege is a military blockade of a city or fortress with the intent of conquering by attrition or by a well-prepared assault. The term derives from the Latin "sedere", meaning "to sit". Siege warfare is a form of constant, low-intensity conflict characterized by one party holding a strong, static, defensive position. Consequently, an opportunity for negotiation between combatants is not uncommon, as proximity and fluctuating advantage can encourage diplomacy. The art of conducting and resisting sieges is called siege warfare, siegecraft, or poliorcetics.
A siege occurs when an attacker encounters a city or fortress that cannot be easily taken by a quick assault, and which refuses to surrender. Sieges involve surrounding the target to block the provision of supplies and the reinforcement or escape of troops (a tactic known as "investment"). This is typically coupled with attempts to reduce the fortifications by means of siege engines, artillery bombardment, mining (also known as sapping), or the use of deception or treachery to bypass defenses.
Failing a military outcome, sieges can often be decided by starvation, thirst, or disease, which can afflict either the attacker or defender. This form of siege, though, can take many months or even years, depending upon the size of the stores of food the fortified position holds.
The attacking force can circumvallate the besieged place, that is, build a line of earthworks consisting of a rampart and trench surrounding it. During the process of circumvallation, the attacking force can be set upon by another force, an ally of the besieged place, because of the lengthy time required to force a capitulation. A second, outward-facing ring of fortifications outside the line of circumvallation, called the contravallation, is also sometimes built to defend the attackers from outside.
Ancient cities in the Middle East show archaeological evidence of having had fortified city walls. During the Warring States era of ancient China, there is both textual and archaeological evidence of prolonged sieges and siege machinery used against the defenders of city walls. Siege machinery was also a tradition of the ancient Greco-Roman world. During the Renaissance and the early modern period, siege warfare dominated the conduct of war in Europe. Leonardo da Vinci gained as much of his renown from the design of fortifications as from his artwork.
Medieval campaigns were generally designed around a succession of sieges. In the Napoleonic era, increasing use of ever more powerful cannons reduced the value of fortifications. In the 20th century, the significance of the classical siege declined. With the advent of mobile warfare, a single fortified stronghold is no longer as decisive as it once was. While traditional sieges do still occur, they are not as common as they once were due to changes in modes of battle, principally the ease by which huge volumes of destructive power can be directed onto a static target. Modern sieges are more commonly the result of smaller hostage, militant, or extreme resisting arrest situations.
The Assyrians deployed large labour forces to build new palaces, temples, and defensive walls. Some settlements in the Indus Valley Civilization were also fortified. By about 3500 BC, hundreds of small farming villages dotted the Indus River floodplain. Many of these settlements had fortifications and planned streets.
The stone and mud brick houses of Kot Diji were clustered behind massive stone flood dikes and defensive walls, for neighbouring communities quarrelled constantly about the control of prime agricultural land. Mundigak (c. 2500 BC) in present-day south-east Afghanistan has defensive walls and square bastions of sun-dried bricks.
City walls and fortifications were essential for the defence of the first cities in the ancient Near East. The walls were built of mudbricks, stone, wood, or a combination of these materials, depending on local availability. They may also have served the dual purpose of showing presumptive enemies the might of the kingdom. The great walls surrounding the Sumerian city of Uruk gained a widespread reputation for their length and height.
Later, the walls of Babylon, reinforced by towers, moats, and ditches, gained a similar reputation. In Anatolia, the Hittites built massive stone walls around their cities atop hillsides, taking advantage of the terrain. In Shang Dynasty China, at the site of Ao, large walls with massively thick bases were erected in the 15th century BC. The ancient Chinese capital of the State of Zhao, Handan, founded in 386 BC, also had thick, tall walls, with two of the sides of its rectangular enclosure measuring 1,530 yd (1,400 m) in length.
The cities of the Indus Valley Civilization showed less effort in constructing defences, as did the Minoan civilization on Crete. These civilizations probably relied more on the defence of their outer borders or sea shores. Unlike the ancient Minoan civilization, the Mycenaean Greeks emphasized the need for fortifications alongside natural defences of mountainous terrain, such as the massive Cyclopean walls built at Mycenae and other adjacent Late Bronze Age (c. 1600–1100 BC) centers of central and southern Greece.
Although there are depictions of sieges from the ancient Near East in historical sources and in art, there are very few examples of siege systems that have been found archaeologically. Of the few examples, several are noteworthy:
The earliest representations of siege warfare have been dated to the Protodynastic Period of Egypt, c. 3000 BC. These show the symbolic destruction of city walls by divine animals using hoes.
The first siege equipment is known from Egyptian tomb reliefs of the 24th century BC, showing Egyptian soldiers storming Canaanite town walls on wheeled siege ladders. Later Egyptian temple reliefs of the 13th century BC portray the violent siege of Dapur, a Syrian city, with soldiers climbing scale ladders supported by archers.
Assyrian palace reliefs of the 9th to 7th centuries BC display sieges of several Near Eastern cities. Though a simple battering ram had come into use in the previous millennium, the Assyrians improved siege warfare and used huge wooden tower-shaped battering rams with archers positioned on top.
In ancient China, sieges of city walls (along with naval battles) were portrayed on bronze 'hu' vessels, like those found in Chengdu, Sichuan in 1965, which have been dated to the Warring States period (5th to 3rd centuries BC).
An attacker's first act in a siege might be a surprise attack, attempting to overwhelm the defenders before they were ready or were even aware there was a threat. This was how William de Forz captured Fotheringhay Castle in 1221.
The most common practice of siege warfare was to lay siege and just wait for the surrender of the enemies inside or, quite commonly, to coerce someone inside to betray the fortification. During the medieval period, negotiations would frequently take place during the early part of the siege. An attacker – aware of a prolonged siege's great cost in time, money, and lives – might offer generous terms to a defender who surrendered quickly. The defending troops would be allowed to march away unharmed, often retaining their weapons. However, a garrison commander who was thought to have surrendered too quickly might face execution by his own side for treason.
As a siege progressed, the surrounding army would build earthworks (a line of circumvallation) to completely encircle their target, preventing food, water, and other supplies from reaching the besieged city. If sufficiently desperate as the siege progressed, defenders and civilians might have been reduced to eating anything vaguely edible – horses, family pets, the leather from shoes, and even each other.
The Hittite siege of a rebellious Anatolian vassal in the 14th century BC ended when the queen mother came out of the city and begged for mercy on behalf of her people. The Hittite campaign against the kingdom of Mitanni in the 14th century BC bypassed the fortified city of Carchemish. If the main objective of a campaign was not the conquest of a particular city, it could simply be passed by. When the main objective of the campaign had been fulfilled, the Hittite army returned to Carchemish and the city fell after an eight-day siege.
Disease was another effective siege weapon, although the attackers were often as vulnerable as the defenders. In some instances, catapults or similar weapons were used to fling diseased animals over city walls in an early example of biological warfare. If all else failed, a besieger could claim the booty of his conquest undamaged, and retain his men and equipment intact, for the price of a well-placed bribe to a disgruntled gatekeeper. The Assyrian Siege of Jerusalem in the 8th century BC came to an end when the Israelites bought them off with gifts and tribute, according to the Assyrian account, or when the Assyrian camp was struck by mass death, according to the Biblical account. Due to logistics, long-lasting sieges involving a minor force could seldom be maintained. A besieging army, encamped in possibly squalid field conditions and dependent on the countryside and its own supply lines for food, could very well be threatened with the disease and starvation intended for the besieged.
To end a siege more rapidly, various methods were developed in ancient and medieval times to counter fortifications, and a large variety of siege engines was developed for use by besieging armies. Ladders could be used to escalade over the defenses. Battering rams and siege hooks could also be used to force through gates or walls, while catapults, ballistae, trebuchets, mangonels, and onagers could be used to launch projectiles to break down a city's fortifications and kill its defenders. A siege tower, a substantial structure built to equal or greater height than the fortification's walls, could allow the attackers to fire down upon the defenders and also advance troops to the wall with less danger than using ladders.
In addition to launching projectiles at the fortifications or defenders, it was also quite common to attempt to undermine the fortifications, causing them to collapse. This could be accomplished by digging a tunnel beneath the foundations of the walls, and then deliberately collapsing or exploding the tunnel. This process is known as mining. The defenders could dig counter-tunnels to cut into the attackers' works and collapse them prematurely.
Fire was often used as a weapon when dealing with wooden fortifications. The Byzantine Empire used Greek fire, which contained additives that made it hard to extinguish. Combined with a primitive flamethrower, it proved an effective offensive and defensive weapon.
The universal method for defending against siege is the use of fortifications, principally walls and ditches, to supplement natural features. A sufficient supply of food and water was also important to defeat the simplest method of siege warfare: starvation. On occasion, the defenders would drive 'surplus' civilians out to reduce the demands on stored food and water.
During the Warring States period in China (481–221 BC), warfare lost the honourable, gentlemanly character it had possessed in the earlier Spring and Autumn period and became more practical, competitive, cut-throat, and efficient in the pursuit of victory. The Chinese invention of the hand-held, trigger-mechanism crossbow during this period revolutionized warfare, giving greater emphasis to infantry and cavalry and less to traditional chariot warfare.
The philosophically pacifist Mohists (followers of the philosopher Mozi) of the 5th century BC believed in aiding the defensive warfare of smaller Chinese states against the hostile offensive warfare of larger domineering states. The Mohists were renowned in the smaller states (and the enemies of the larger states) for the inventions of siege machinery to scale or destroy walls. These included traction trebuchet catapults, eight-foot-high ballistas, a wheeled siege ramp with grappling hooks known as the Cloud Bridge (the protractible, folded ramp slinging forward by means of a counterweight with rope and pulley), and wheeled 'hook-carts' used to latch large iron hooks onto the tops of walls to pull them down.
When enemies attempted to dig tunnels under walls for mining or entry into the city, the defenders used large bellows (the type the Chinese commonly used in heating up a blast furnace for smelting cast iron) to pump smoke into the tunnels in order to suffocate the intruders.
Advances in the prosecution of sieges in ancient and medieval times naturally encouraged the development of a variety of defensive countermeasures. In particular, medieval fortifications became progressively stronger—for example, the advent of the concentric castle from the period of the Crusades—and more dangerous to attackers—witness the increasing use of machicolations and murder-holes, as well as the preparation of hot or incendiary substances. Arrowslits (also called arrow loops or loopholes), sally ports (airlock-like doors) for sallies, and deep water wells were also integral means of resisting siege at this time. Particular attention would be paid to defending entrances, with gates protected by drawbridges, portcullises, and barbicans. Moats and other water defences, whether natural or augmented, were also vital to defenders.
In the European Middle Ages, virtually all large cities had city walls—Dubrovnik in Dalmatia is a well-preserved example—and more important cities had citadels, forts, or castles. Great effort was expended to ensure a good water supply inside the city in case of siege. In some cases, long tunnels were constructed to carry water into the city. Complex systems of tunnels were used for storage and communications in medieval cities like Tábor in Bohemia, similar to those used much later in Vietnam during the Vietnam War.
Until the invention of gunpowder-based weapons (and the resulting higher-velocity projectiles), the balance of power and logistics definitely favoured the defender. With the invention of gunpowder and of cannon, mortars, and (in modern times) howitzers, the traditional methods of defence became less effective against a determined siege.
Although there are numerous ancient accounts of cities being sacked, few contain any clues to how this was achieved. Some popular tales existed on how the cunning heroes succeeded in their sieges. The best-known is the Trojan Horse of the Trojan War, and a similar story tells how the Canaanite city of Joppa was conquered by the Egyptians in the 15th century BC. The Biblical Book of Joshua contains the story of the miraculous Battle of Jericho.
A more detailed historical account from the 8th century BC, called the Piankhi stela, records how the Nubians laid siege to and conquered several Egyptian cities by using battering rams, archers, and slingers and building causeways across moats.
During the Peloponnesian War, one hundred sieges were attempted and fifty-eight ended with the surrender of the besieged area.
Alexander the Great's army successfully besieged many powerful cities during his conquests. Two of his most impressive achievements in siegecraft took place in the Siege of Tyre and the Siege of the Sogdian Rock. At Tyre, his engineers built a causeway that brought the city within range of his torsion-powered artillery, while his soldiers pushed siege towers housing stone throwers and light catapults up to bombard the city walls.
Most conquerors before him had found Tyre, a Phoenician island-city about 1 km from the mainland, impregnable. The Macedonians built a mole, a raised spit of earth across the water, by piling stones up on a natural land bridge that extended underwater to the island, and although the Tyrians rallied by sending a fire ship to destroy the towers, and captured the mole in a swarming frenzy, the city eventually fell to the Macedonians after a seven-month siege. In complete contrast to Tyre, Sogdian Rock was captured by stealthy attack. Alexander used commando-like tactics to scale the cliffs and capture the high ground, and the demoralized defenders surrendered.
The importance of siege warfare in the ancient period should not be underestimated. One of the contributing causes of Hannibal's inability to defeat Rome was his lack of siege engines; thus, while he was able to defeat Roman armies in the field, he was unable to capture Rome itself. The legionary armies of the Roman Republic and Empire are noted as being particularly skilled and determined in siege warfare. An astonishing number and variety of sieges, for example, formed the core of Julius Caesar's mid-1st-century BC conquest of Gaul (modern France).
In his "Commentarii de Bello Gallico" ("Commentaries on the Gallic War"), Caesar describes how, at the Battle of Alesia, the Roman legions created two huge fortified walls around the city. The inner circumvallation, , held in Vercingetorix's forces, while the outer contravallation kept relief from reaching them. The Romans held the ground in between the two walls. The besieged Gauls, facing starvation, eventually surrendered after their relief force met defeat against Caesar's auxiliary cavalry.
The Sicarii Zealots who defended Masada in AD 73 were defeated by the Roman legions, who built a ramp 100 m high up to the fortress's west wall.
During the Roman-Persian Wars, siege warfare was used extensively by both sides.
The early Muslims, led by the Islamic prophet Muhammad, made extensive use of sieges during military campaigns. The first use was during the Invasion of Banu Qaynuqa. According to Islamic tradition, the invasion of Banu Qaynuqa occurred in 624 AD. The Banu Qaynuqa were a Jewish tribe expelled by Muhammad for allegedly breaking the treaty known as the Constitution of Medina by pinning the clothes of a Muslim woman, which led to her being stripped naked. A Muslim killed a Jew in retaliation, and the Jews in turn killed the Muslim man. This escalated to a chain of revenge killings, and enmity grew between Muslims and the Banu Qaynuqa, leading to the siege of their fortress. The tribe eventually surrendered to Muhammad, who initially wanted to kill the members of Banu Qaynuqa but ultimately yielded to Abdullah ibn Ubayy's insistence and agreed to expel the Qaynuqa.
The second siege was during the Invasion of Banu Nadir. According to "The Sealed Nectar", the siege did not last long; the Banu Nadir Jews willingly offered to comply with Muhammad's order and leave Madinah. Their caravan counted 600 loaded camels, including their chiefs, Huyai bin Akhtab and Salam bin Abi Al-Huqaiq, who left for Khaibar, whereas another party shifted to Syria. Two of them, Yameen bin ‘Amr and Abu Sa‘d bin Wahab, embraced Islam and so retained their personal wealth. Muhammad seized their weapons, land, houses, and wealth. Amongst the other booty he managed to capture were 50 armours, 50 helmets, and 340 swords. This booty was exclusively Muhammad's because no fighting was involved in capturing it. He divided the booty at his own discretion among the early Emigrants and two poor Helpers, Abu Dujana and Suhail bin Haneef.
Other examples include the Invasion of Banu Qurayza in February–March 627 and the Siege of Ta'if in January 630.
In the Middle Ages, the Mongol Empire's campaigns against China (then comprising the Western Xia Dynasty, Jin Dynasty, and Southern Song Dynasty), waged from Genghis Khan to Kublai Khan, who eventually established the Yuan Dynasty in 1271, were extremely effective, allowing the Mongols to sweep through large areas. Even when they could not enter some of the better-fortified cities, they used innovative battle tactics to seize control of the land and the people.
Another Mongol tactic was to use catapults to launch corpses of plague victims into besieged cities. The disease-carrying fleas from the bodies would then infest the city, and the plague would spread, allowing the city to be easily captured, although this transmission mechanism was not known at the time. In 1346, the bodies of Mongol warriors of the Golden Horde who had died of plague were thrown over the walls of the besieged Crimean city of Kaffa (now Feodosiya). It has been speculated that this operation may have been responsible for the advent of the Black Death in Europe. The Black Death is estimated to have killed 30%–60% of Europe's population.
On the first night while laying siege to a city, the leader of the Mongol forces would lead from a white tent: if the city surrendered, all would be spared. On the second day, he would use a red tent: if the city surrendered, the men would all be killed, but the rest would be spared. On the third day, he would use a black tent: no quarter would be given.
However, the Chinese were not completely defenseless, and from AD 1234 until 1279, the Southern Song Chinese held out against the enormous barrage of Mongol attacks. Much of this success in defense lay in the world's first use of gunpowder (i.e. with early flamethrowers, grenades, firearms, cannons, and land mines) to fight back against the Khitans, the Tanguts, the Jurchens, and then the Mongols.
The Chinese of the Song period also discovered the explosive potential of packing hollowed cannonball shells with gunpowder. Written later, around 1350, the "Huo Long Jing" manuscript of Jiao Yu recorded an earlier Song-era cast-iron cannon known as the 'flying-cloud thunderclap eruptor' (fei yun pi-li pao). The manuscript stated that (in Wade–Giles spelling):
The shells ("phao") are made of cast iron, as large as a bowl and shaped like a ball. Inside they contain half a pound of 'magic' gunpowder ("shen huo"). They are sent flying towards the enemy camp from an eruptor ("mu phao"); and when they get there a sound like a thunder-clap is heard, and flashes of light appear. If ten of these shells are fired successfully into the enemy camp, the whole place will be set ablaze...
During the Ming Dynasty (AD 1368–1644), the Chinese were very concerned with city planning in regard to gunpowder warfare. The siting and thickness of the walls of Beijing's Forbidden City were favoured by the Yongle Emperor (r. 1402–1424) because they were well placed to resist cannon volleys and were built thick enough to withstand attacks from cannon fire.
"For more, see Technology of the Song dynasty."
The introduction of gunpowder and the use of cannons brought about a new age in siege warfare. Cannons were first used in Song dynasty China during the early 13th century, but did not become significant weapons for another 150 years or so. In early decades, cannons could do little against strong castles and fortresses, providing little more than smoke and fire. By the 16th century, however, they were an essential and regularized part of any campaigning army, or castle's defences.
The greatest advantage of cannons over other siege weapons was the ability to fire a heavier projectile, farther, faster, and more often than previous weapons. They could also fire projectiles in a straight line, so that they could destroy the bases of high walls. Thus, 'old fashioned' walls – that is, high and, relatively, thin – were excellent targets, and, over time, easily demolished. In 1453, the great walls of Constantinople, the capital of the Byzantine Empire, were broken through in "just" six weeks by the 62 cannons of Mehmed II's army.
However, new fortifications, designed to withstand gunpowder weapons, were soon constructed throughout Europe. During the Renaissance and the early modern period, siege warfare continued to dominate the conduct of the European wars.
Once siege guns were developed, the techniques for assaulting a town or fortress became well known and ritualized. The attacking army would surround a town. Then the town would be asked to surrender. If they did not comply, the besieging army would surround the town with temporary fortifications to stop sallies from the stronghold or relief getting in. The attackers would next build a length of trenches parallel to the defences (these are known as the "First parallel") and just out of range of the defending artillery. They would dig a trench (known as a Forward) towards the town in a zigzag pattern so that it could not be enfiladed by defending fire. Once they were within artillery range, they would dig another parallel (the Second Parallel) trench and fortify it with gun emplacements. This technique is commonly called entrenchment.
If necessary, using the first artillery fire for cover, the forces conducting the siege would repeat the process until they placed their guns close enough to be laid (aimed) accurately to make a breach in the fortifications. In order to allow the forlorn hope and support troops to get close enough to exploit the breach, more zigzag trenches could be dug even closer to the walls, with more parallel trenches to protect and conceal the attacking troops. After each step in the process, the besiegers would ask the besieged to surrender. If the forlorn hope stormed the breach successfully, the defenders could expect no mercy.
The castles that in earlier years had been formidable obstacles were easily breached by the new weapons. For example, in Spain, the newly equipped army of Ferdinand and Isabella was able to conquer Moorish strongholds in Granada in 1482–1492 that had held out for centuries before the invention of cannons.
In the mid-15th century, the Italian architect Leon Battista Alberti wrote a treatise entitled "De re aedificatoria", which theorized methods of building fortifications capable of withstanding the new guns. He proposed that walls be "built in uneven lines, like the teeth of a saw", and he proposed star-shaped fortresses with low, thick walls.
However, few rulers paid any attention to his theories. A few towns in Italy began building in the new style late in the 1480s, but it was only with the French invasion of the Italian peninsula in 1494–1495 that the new fortifications were built on a large scale. Charles VIII invaded Italy with an army of 18,000 men and a horse-drawn siege-train. As a result, he could defeat virtually any city or state, no matter how well defended. In a panic, military strategy was completely rethought throughout the Italian states of the time, with a strong emphasis on the new fortifications that could withstand a modern siege.
The most effective way to protect walls against cannon fire proved to be depth (increasing the width of the defences) and angles (ensuring that attackers could only fire on walls at an oblique angle, not square on). Initially, walls were lowered and backed, in front and behind, with earth. Towers were reformed into triangular bastions. This design matured into the "trace italienne". Star-shaped fortresses surrounding towns and even cities with outlying defences proved very difficult to capture, even for a well-equipped army. Fortresses built in this style throughout the 16th century did not become fully obsolete until the 19th century, and were still in use throughout World War I (though modified for 20th-century warfare). During World War II, "trace italienne" fortresses could still present a formidable challenge: in the last days of the war, during the Battle of Berlin, which saw some of the heaviest urban fighting of the conflict, the Soviets did not attempt to storm the Spandau Citadel (built between 1559 and 1594) but chose to invest it and negotiate its surrender.
However, the cost of building such vast modern fortifications was incredibly high, and was often too much for individual cities to undertake. Many were bankrupted in the process of building them; others, such as Siena, spent so much money on fortifications that they were unable to maintain their armies properly, and so lost their wars anyway. Nonetheless, innumerable large and impressive fortresses were built throughout northern Italy in the first decades of the 16th century to resist repeated French invasions that became known as the Italian Wars. Many stand to this day.
In the 1530s and '40s, the new style of fortification began to spread out of Italy into the rest of Europe, particularly to France, the Netherlands, and Spain. Italian engineers were in enormous demand throughout Europe, especially in war-torn areas such as the Netherlands, which became dotted by towns encircled in modern fortifications. The densely populated areas of Northern Italy and the United Provinces (the Netherlands) were infamous for their high degree of fortification of cities. It made campaigns in these areas very hard to successfully conduct, considering even minor cities had to be captured by siege within the span of the campaigning season. In the Dutch case, the possibility of flooding large parts of the land provided an additional obstacle to besiegers, for example at the Siege of Leiden. For many years, defensive and offensive tactics were well balanced, leading to protracted and costly wars such as Europe had never known, involving more and more planning and government involvement. The new fortresses ensured that war rarely extended beyond a series of sieges. Because the new fortresses could easily hold 10,000 men, an attacking army could not ignore a powerfully fortified position without serious risk of counterattack. As a result, virtually all towns had to be taken, and that was usually a long, drawn-out affair, potentially lasting from several months to years, while the members of the town were starved to death. Most battles in this period were between besieging armies and relief columns sent to rescue the besieged.
At the end of the 17th century, two influential military engineers, the French Marshal Vauban and the Dutch military engineer Menno van Coehoorn, developed modern fortification to its pinnacle, refining siege warfare without fundamentally altering it: ditches would be dug; walls would be protected by glacis; and bastions would enfilade an attacker. Both engineers developed their ideas independently, but came to similar general rules regarding defensive construction and offensive action against fortifications. Both were skilled in conducting sieges and defences themselves. Before Vauban and Van Coehoorn, sieges had been somewhat slapdash operations. Vauban and Van Coehoorn refined besieging to a science with a methodical process that, if uninterrupted, would break even the strongest fortifications. Examples of their styles of fortifications are Arras (Vauban) and the no-longer-existent fortress of Bergen op Zoom (Van Coehoorn). The main differences between the two lay in the difference in terrain on which Vauban and Van Coehoorn constructed their defences: Vauban in the sometimes more hilly and mountainous terrain of France, Van Coehoorn in the flat and floodable lowlands of the Netherlands.
Planning and maintaining a siege is just as difficult as fending one off. A besieging army must be prepared to repel both sorties from the besieged area and also any attack that may try to relieve the defenders. It was thus usual to construct lines of trenches and defenses facing in both directions. The outermost lines, known as the lines of contravallation, would surround the entire besieging army and protect it from attackers.
This would be the first construction effort of a besieging army, built soon after a fortress or city had been invested. A line of circumvallation would also be constructed, facing in towards the besieged area, to protect against sorties by the defenders and to prevent the besieged from escaping. The next line, which Vauban usually placed at about 600 meters from the target, would contain the main batteries of heavy cannons so that they could hit the target without being vulnerable themselves. Once this line was established, work crews would move forward, creating another line at 250 meters. This line contained smaller guns. The final line would be constructed only 30 to 60 meters from the fortress. This line would contain the mortars and would act as a staging area for attack parties once the walls were breached. Van Coehoorn developed a small and easily movable mortar named the coehorn, variations of which were used in sieges until the 19th century. It would also be from this line that miners working to undermine the fortress would operate.
The trenches connecting the various lines of the besiegers could not be built perpendicular to the walls of the fortress, as the defenders would have a clear line of fire along the whole trench. Thus, these lines (known as saps) needed to be sharply jagged.
Another element of a fortress was the citadel. Usually, a citadel was a "mini fortress" within the larger fortress, sometimes designed as a reduit, but more often as a means of protecting the garrison from potential revolt in the city. The citadel was used in wartime and peacetime to keep the residents of the city in line.
As in ages past, most sieges were decided with very little fighting between the opposing armies. A direct assault on a fortress served an attacking army poorly, incurring high casualties. Usually, the besiegers would wait until supplies inside the fortifications were exhausted or disease had weakened the defenders to the point that they were willing to surrender. At the same time, diseases, especially typhus, were a constant danger to the encamped armies outside the fortress, and often forced a premature retreat. Sieges were often won by the army that lasted the longest.
An important element of strategy for the besieging army was whether or not to allow the besieged city to surrender. Usually, it was preferable to graciously allow a surrender, both to save on casualties and to set an example for future defending cities. A city that was allowed to surrender with minimal loss of life was much better off than a city that held out for a long time and was brutally butchered at the end. Moreover, if an attacking army had a reputation of killing and pillaging regardless of a surrender, then other cities' defensive efforts would be redoubled. Usually, a city would surrender (with no honour lost) when its inner lines of defence were reached by the attacker. In case of refusal, however, the inner lines would have to be stormed by the attacker, and the attacking troops would be seen to be justified in sacking the city.
Siege warfare dominated in Western Europe for most of the 17th and 18th centuries. An entire campaign, or longer, could be used in a single siege (for example, Ostend in 1601–1604; La Rochelle in 1627–1628). This resulted in extremely prolonged conflicts. The balance was that, while siege warfare was extremely expensive and very slow, it was very successful—or, at least, more so than encounters in the field. Battles arose through clashes between besiegers and relieving armies, but the principle was a slow, grinding victory by the greater economic power. The relatively rare attempts at forcing pitched battles (Gustavus Adolphus in 1630; the French against the Dutch in 1672 or 1688) were almost always expensive failures.
The exception to this rule was the English. During the English Civil War, anything which tended to prolong the struggle, or seemed like want of energy and avoidance of a decision, was bitterly resented by the men of both sides. In France and Germany, the prolongation of a war meant continued employment for the soldiers, but in England both sides were looking to end the war quickly. Even when, in the end, the New Model Army—a regular professional army—developed, the original decision-compelling spirit permeated the whole organisation, as was seen when it was pitted against regular professional continental troops at the Battle of the Dunes during the Interregnum.
Experienced commanders on both sides in the English Civil War recommended the abandonment of garrisoned fortifications for two primary reasons. The first, as for example proposed by the Royalist Sir Richard Willis to King Charles, was that by abandoning the garrisoning of all but the most strategic locations in one's own territory, far more troops would be available for the field armies, and it was the field armies which would decide the conflict. The other argument was that by slighting potential strong points in one's own territory, an enemy expeditionary force, or local enemy rising, would find it more difficult to consolidate territorial gains against an inevitable counterattack. Sir John Meldrum put forward just such an argument to the Parliamentary Committee of Both Kingdoms, to justify his slighting of Gainsborough in Lincolnshire.
Sixty years later, during the War of the Spanish Succession, the Duke of Marlborough preferred to engage the enemy in pitched battles, rather than engage in siege warfare, although he was very proficient in both types of warfare.
On 15 April 1746, the day before the Battle of Culloden, at Dunrobin Castle, a party of William Sutherland's militia conducted the last siege fought on the mainland of Great Britain against Jacobite members of Clan MacLeod.
In the French Revolutionary and Napoleonic Wars, new techniques stressed the division of armies into all-arms corps that would march separately and only come together on the battlefield. The less-concentrated army could now live off the country and move more rapidly over a larger number of roads.
Fortresses commanding lines of communication could be bypassed and would no longer stop an invasion. Since armies could not live off the land indefinitely, Napoleon Bonaparte always sought a quick end to any conflict by pitched battle. This military revolution was described and codified by Clausewitz.
Advances in artillery made previously impregnable defences useless. For example, the walls of Vienna that had held off the Turks in the mid-17th century were no obstacle to Napoleon in the early 19th.
Where sieges occurred (such as the Siege of Delhi and the Siege of Cawnpore during the Indian Rebellion of 1857), the attackers were usually able to defeat the defences within a matter of days or weeks, rather than weeks or months as previously. The great Swedish white-elephant fortress of Karlsborg was built in the tradition of Vauban and intended as a reserve capital for Sweden, but it was obsolete before it was completed in 1869.
Railways, when they were introduced, made possible the movement and supply of larger armies than those that fought in the Napoleonic Wars. They also reintroduced siege warfare, as armies seeking to use railway lines in enemy territory were forced to capture fortresses which blocked those lines.
During the Franco-Prussian War, the battlefield front-lines moved rapidly through France. However, the Prussian and other German armies were delayed for months at the Siege of Metz and the Siege of Paris, due to the greatly increased firepower of the defending infantry, and the principle of detached or semi-detached forts with heavy-caliber artillery. This resulted in the later construction of fortress works across Europe, such as the massive fortifications at Verdun. It also led to the introduction of tactics which sought to induce surrender by bombarding the civilian population within a fortress, rather than the defending works themselves.
The Siege of Sevastopol during the Crimean War and the Siege of Petersburg (1864–1865) during the American Civil War showed that modern citadels, when improved by improvised defences, could still resist an enemy for many months. The Siege of Plevna during the Russo-Turkish War (1877–1878) proved that hastily constructed field defences could resist attacks prepared without proper resources, and were a portent of the trench warfare of World War I.
Advances in firearms technology, without the necessary advances in battlefield communications, gradually led to the defence again gaining the ascendancy. An example from this period, prolonged for 337 days because of the isolation of the surrounded troops, was the Siege of Baler, in which a small group of Spanish soldiers was besieged in a church by Philippine rebels during the Philippine Revolution and the Spanish–American War, holding out until months after the Treaty of Paris had ended the conflict.
Furthermore, the development of steamships gave greater speed to blockade runners, ships whose purpose was to bring cargo, such as food, to cities under blockade, as at Charleston, South Carolina, during the American Civil War.
Mainly as a result of the increasing firepower (such as machine guns) available to defensive forces, First World War trench warfare briefly revived a form of siege warfare. Although siege warfare had moved out from an urban setting because city walls had become ineffective against modern weapons, trench warfare was nonetheless able to use many of the techniques of siege warfare in its prosecution (sapping, mining, barrage and, of course, attrition), but on a much larger scale and on a greatly extended front.
More traditional sieges of fortifications took place in addition to trench sieges. The Siege of Tsingtao was one of the first major sieges of the war, but the German garrison's inability to obtain significant resupply made it a relatively one-sided battle. The Germans and the crew of an Austro-Hungarian protected cruiser put up a hopeless defence and, after holding out for more than a week, surrendered to the Japanese, forcing the German East Asia Squadron to steam towards South America for a new coal source.
The other major siege outside Europe during the First World War was in Mesopotamia, at the Siege of Kut. After a failed attempt to move on Baghdad, stopped by the Ottomans at the bloody Battle of Ctesiphon, the British and their large contingent of Indian sepoy soldiers were forced to retreat to Kut, where the Ottomans under German General Baron Colmar von der Goltz laid siege. The British attempts to resupply the force via the Tigris river failed, and rationing was complicated by the refusal of many Indian troops to eat cattle products. By the time the garrison fell on 29 April 1916, starvation was rampant. Conditions did not improve greatly under Turkish imprisonment. Along with the battles of Tanga, Sandfontein, Gallipoli, and Namakura, it would be one of Britain's numerous embarrassing colonial defeats of the war.
The largest sieges of the war, however, took place in Europe. The initial German advance into Belgium produced four major sieges: the Battle of Liège, the Battle of Namur, the Siege of Maubeuge, and the Siege of Antwerp. All four proved crushing German victories: at Liège and Namur against the Belgians, at Maubeuge against the French, and at Antwerp against a combined Anglo-Belgian force. The weapons that made these victories possible were the German Big Berthas and the Skoda 305 mm Model 1911 siege mortars, one of the best siege mortar designs of the war, on loan from Austria-Hungary. These huge guns were the decisive weapon of siege warfare in the 20th century, taking part at Przemyśl, the Belgian sieges, on the Italian and Serbian Fronts, and even being reused in World War II.
At the second Siege of Przemyśl, the Austro-Hungarian garrison showed an excellent knowledge of siege warfare, not only waiting for relief, but sending sorties into Russian lines and employing an active defence that resulted in the capture of the Russian General Lavr Kornilov. Despite its excellent performance, the garrison's food supply had been requisitioned for earlier offensives, a relief expedition was stalled by the weather, ethnic rivalries flared up between the defending soldiers, and a breakout attempt failed. When the commander of the garrison Hermann Kusmanek finally surrendered, his troops were eating their horses and the first attempt of large-scale air supply had failed. It was one of the few great victories obtained by either side during the war; 110,000 Austro-Hungarian prisoners were marched back to Russia. Use of aircraft for siege running, bringing supplies to areas under siege, would nevertheless prove useful in many sieges to come.
The largest siege of the war, and arguably the roughest, most gruesome battle in history, was the Battle of Verdun. Whether the battle can be considered true siege warfare is debatable. Under the theories of Erich von Falkenhayn, it is more distinguishable as purely attrition with a coincidental presence of fortifications on the battlefield. When considering the plans of Crown Prince Wilhelm, purely concerned with taking the citadel and not with French casualty figures, it can be considered a true siege. The main fortifications were Fort Douaumont, Fort Vaux, and the fortified city of Verdun itself. The Germans, through the use of huge artillery bombardments, flamethrowers, and infiltration tactics, were able to capture both Vaux and Douaumont, but were never able to take the city, and eventually lost most of their gains. It was a battle that, despite the French ability to fend off the Germans, neither side won. The German losses were not worth the potential capture of the city, and the French casualties were not worth holding the symbol of her defence.
The development of the armoured tank and improved infantry tactics at the end of World War I swung the pendulum back in favour of manoeuvre, and with the advent of Blitzkrieg in 1939, the end of traditional siege warfare was at hand. The Maginot Line would be the prime example of the failure of immobile, post–World War I fortifications. Although sieges would continue, it would be in a totally different style and on a reduced scale.
The Blitzkrieg of the Second World War truly showed that fixed fortifications are easily defeated by manoeuvre instead of frontal assault or long sieges. The great Maginot Line was bypassed, and battles that would have taken weeks of siege could now be avoided with the careful application of air power (such as the German paratrooper capture of Fort Eben-Emael, Belgium, early in World War II).
The most important siege was the Siege of Leningrad, which lasted over 29 months, about half of the duration of the entire Second World War. The siege of Leningrad resulted in the deaths of some one million of the city's inhabitants. Along with the Battle of Stalingrad, the Siege of Leningrad on the Eastern Front was the deadliest siege of a city in history. In the west, apart from the Battle of the Atlantic, the sieges were not on the same scale as those on the European Eastern Front; however, there were several notable or critical sieges: the island of Malta, for which the population won the George Cross, and Tobruk. In the South-East Asian Theatre, there was the siege of Singapore, and in the Burma Campaign, the sieges of Myitkyina, the Admin Box, Imphal, and Kohima, which marked the high-water mark of the Japanese advance into India.
The siege of Sevastopol saw the use of the heaviest and most powerful individual siege engines ever deployed: the German 800 mm railway gun and the 600 mm siege mortar. Though a single shell could have a disastrous local effect, the guns were susceptible to air attack in addition to being slow to move.
Throughout the war, both the Western Allies and the Germans tried to supply forces besieged behind enemy lines with ad hoc airbridges. Sometimes these attempts failed, as happened to the besieged German Sixth Army during the Siege of Stalingrad, and sometimes they succeeded, as during the Battle of the Admin Box (5–23 February 1944) and the short Siege of Bastogne (December 1944).
The logistics of strategic airbridge operations were developed by the Americans flying military transport aircraft from India to China over the Hump (1942–1945), to resupply the Chinese war effort of Chiang Kai-shek, and to the USAAF XX Bomber Command (during Operation Matterhorn).
Tactical airbridge methods were developed and, as planned, used extensively to supply the Chindits during Operation Thursday (February–May 1944). The Chindits, a specially trained division of the British and Indian armies, were flown deep behind Japanese front lines in the South-East Asian theatre to jungle clearings in Burma, where they set up fortified airheads from which they sallied out to attack Japanese lines of communication while defending the bases from Japanese counter-attacks. The bases were resupplied by air, with casualties flown out by returning aircraft. When the Japanese attacked in strength, the Chindits abandoned the bases and either moved to new bases or returned to Allied lines.
Several times during the Cold War the western powers had to use their airbridge expertise.
In both major Vietnamese cases—Dien Bien Phu in 1954 and Khe Sanh in 1968—the Viet Minh and NLF were able to cut off the opposing army by capturing the surrounding rugged terrain. At Dien Bien Phu, the French were unable to use air power to overcome the siege and were defeated. However, at Khe Sanh, a mere 14 years later, advances in air power—and a reduction in Vietnamese anti-aircraft capability—allowed the United States to withstand the siege. The resistance of US forces was assisted by the PAVN and PLAF forces' decision to use the Khe Sanh siege as a strategic distraction to allow their mobile warfare offensive, the first Tet Offensive, to unfold securely.
The Siege of Khe Sanh displays typical features of modern sieges: the defender has a greater capacity to withstand the siege, and the attacker's main aim is to bottle up operational forces or create a strategic distraction rather than take the siege to a conclusion.
In neighbouring Cambodia, at that time known as the Khmer Republic, the Khmer Rouge used siege tactics to cut off supplies from Phnom Penh to other government-held enclaves in an attempt to break the will of the government to continue fighting.
In 1972, during the Easter Offensive, the Siege of An Lộc in Vietnam occurred. ARVN troops, with U.S. advisers and air power, successfully defeated the communist forces. The Battle of An Lộc pitted some 6,350 ARVN men against a force three times that size. During the peak of the battle, ARVN had access to only one 105 mm howitzer to provide close support, while the enemy attack was backed by an entire artillery division. ARVN had no tanks; the NVA communist forces had two armoured regiments. ARVN prevailed after over two months of continuous fighting. As General Paul Vanuxem, a French veteran of the Indochina War, wrote in 1972 after visiting the liberated city of An Lộc: "An Lộc was the Verdun of Vietnam, where Vietnam received as in baptism the supreme consecration of her will."
During the Yugoslav Wars in the 1990s, Republika Srpska forces besieged Sarajevo, the capital of Bosnia-Herzegovina. The siege lasted from 1992 until 1996.
Numerous sieges have taken place during the Syrian Civil War, such as the Siege of Homs, the Siege of Kobanî, the Siege of Deir ez-Zor (2014–2017), and the Siege of al-Fu'ah and Kafriya.
Siege tactics continue to be employed in police conflicts. Their conduct is shaped by a number of factors, primarily the risk to life, whether that of the police, the besieged, bystanders, or hostages. Police make use of trained negotiators, psychologists and, if necessary, force, and can generally rely on the support of their nation's armed forces if required.
One of the complications facing police in a siege involving hostages is Stockholm syndrome, in which hostages sometimes develop a sympathetic rapport with their captors. If this helps keep them safe from harm, it is considered a good thing, but there have been cases where hostages have tried to shield the captors during an assault or have refused to cooperate with the authorities in bringing prosecutions.
The 1993 police siege of the Branch Davidian church in Waco, Texas, lasted 51 days, atypically long for a police siege. Unlike traditional military sieges, police sieges tend to last for hours or days rather than weeks, months, or years.
In Britain, if the siege involves perpetrators who are considered by the British Government to be terrorists, and if an assault is to take place, the civilian authorities hand command and control over to the military. The threat of such an action ended the Balcombe Street siege in 1975, but the Iranian Embassy siege in 1980 ended in a military assault and the deaths of all but one of the hostage-takers.
| https://en.wikipedia.org/wiki?curid=26905 |
Saint Lawrence Seaway
The Saint Lawrence Seaway is a system of locks, canals, and channels in Canada and the United States that permits oceangoing vessels to travel from the Atlantic Ocean to the Great Lakes of North America, as far inland as Duluth, Minnesota, at the western end of Lake Superior. The seaway is named for the Saint Lawrence River, which flows from Lake Ontario to the Atlantic Ocean. Legally, the seaway extends from Montreal, Quebec, to Lake Erie and includes the Welland Canal.
The Saint Lawrence River portion of the seaway is not a continuous canal; rather, it consists of several stretches of navigable channels within the river, a number of locks, and canals along the banks of the Saint Lawrence River to bypass several rapids and dams. A number of the locks are managed by the St. Lawrence Seaway Management Corporation in Canada, and others in the United States by the Saint Lawrence Seaway Development Corporation; the two bodies together advertise the seaway as part of "Highway H2O". The section of the river from Montreal to the Atlantic is under Canadian jurisdiction, regulated by the offices of Transport Canada in the Port of Quebec.
The Saint Lawrence Seaway was preceded by several other canals. In 1871, locks on the Saint Lawrence allowed transit of vessels long, wide, and deep. The First Welland Canal, constructed between 1824 and 1829, had a minimum lock size of long, wide, and deep, but it was generally too small to allow passage of larger oceangoing ships. The Welland Canal's minimum lock size was increased to long, wide, and deep for the Second Welland Canal; to long, wide, and deep with the Third Welland Canal; and to long, wide, and deep for the current (Fourth) Welland Canal.
The first proposals for a binational comprehensive deep waterway along the Saint Lawrence were made in the 1890s. In the following decades, developers proposed a hydropower project as inseparable from the seaway; the various governments and seaway supporters believed that the deeper water created by the hydro project was necessary to make the seaway channels feasible for oceangoing ships. U.S. proposals for development up to and including the First World War met with little interest from the Canadian federal government, but the two national governments eventually submitted Saint Lawrence plans to a joint group for study. By the early 1920s, both "The Wooten-Bowden Report" and the International Joint Commission recommended the project.
Although the Liberal Prime Minister William Lyon Mackenzie King was reluctant to proceed, in part because of opposition to the project in Quebec, in 1932 he and the U.S. representative signed a treaty of intent. This treaty was submitted to the U.S. Senate in November 1932 and hearings continued until a vote was taken on March 14, 1934. The majority voted in favor of the treaty, but it failed to gain the necessary two-thirds vote for ratification. Later attempts between the governments in the 1930s to forge an agreement came to naught due to opposition by the Ontario government of Mitchell Hepburn and the government of Quebec. In 1936, John C. Beukema, head of the Great Lakes Harbors Association and a member of the Great Lakes Tidewater Commission, was among a delegation of eight from the Great Lakes states to meet at the White House with US President Franklin D. Roosevelt to obtain his support for the seaway concept.
Beukema and Saint Lawrence Seaway proponents were convinced a nautical link would lead to development of the communities and economies of the Great Lakes region by permitting the passage of oceangoing ships. In this period, exports of grain, along with other commodities, to Europe were an important part of the national economy. Negotiations on the treaty resumed in 1938, and by January 1940 substantial agreement was reached between Canada and the United States. By 1941, President Roosevelt and Prime Minister Mackenzie King made an executive agreement to build the joint hydro and navigation works, but this failed to receive the assent of the U.S. Congress. Proposals for the seaway were met with resistance; the primary opposition came from interests representing harbors on the Atlantic and Gulf coasts and internal waterways and from the railroad associations. The railroads carried freight and goods between the coastal ports and the Great Lakes cities.
After 1945, proposals to introduce tolls to the seaway were not sufficient to gain support for the project by the U.S. Congress. Growing impatient, and with Ontario desperate for the power to be generated by hydroelectricity, Canada began to consider developing the project alone. This seized the imagination of Canadians, engendering a groundswell of nationalism around the Saint Lawrence. Canadian Prime Minister Louis St. Laurent advised U.S. President Harry S. Truman on September 28, 1951, that Canada was unwilling to wait for the United States and would build a seaway alone; the Canadian Parliament authorized the founding of the Saint Lawrence Seaway Authority on December 21 of that year. Fueled by this support, Saint Laurent's administration decided during 1951 and 1952 to construct the waterway alone, combined with the Moses-Saunders Power Dam. (This became the joint responsibility of Ontario and New York: as a hydropower dam would change the water levels, it required bilateral cooperation.)
The International Joint Commission issued an order of approval for joint construction of the dam in October 1952. U.S. Senate debate on the bill began on January 12, 1953, and the bill emerged from the House of Representatives Committee of Public Works on February 22, 1954. It received approval by the Senate and the House by May 1954. The first positive action to enlarge the seaway was taken on May 13, 1954, when U.S. President Dwight D. Eisenhower signed the Wiley-Dondero Seaway Act to authorize joint construction and establish the Saint Lawrence Seaway Development Corporation as the U.S. authority. The need for cheap haulage of Quebec-Labrador iron ore was one of the arguments that finally swung the balance in favor of the seaway. Groundbreaking ceremonies took place in Massena, New York, on August 10, 1954. That year John C. Beukema was appointed by Eisenhower to the five-member St. Lawrence Seaway Advisory Board.
In May 1957, the Connecting Channels Project was begun by the United States Army Corps of Engineers. By 1959, Beukema was on board the U.S. Coast Guard cutter "Maple" for the first trip through the U.S. locks, which opened up the Great Lakes to oceangoing ships. On April 25, 1959, large, deep-draft ocean vessels began streaming to the heart of the North American continent through the seaway, a project supported by every administration from Woodrow Wilson through Eisenhower.
In the United States, Dr. N.R. Danelian, director of the 13-volume Saint Lawrence Seaway Survey in the U.S. Department of Navigation (1932–63), worked with the U.S. Secretary of State on Canadian-U.S. issues regarding the seaway, persevering for 15 years to gain passage of the Seaway Act by Congress. He later became president of the Great Lakes St. Lawrence Association to promote seaway development to benefit the American heartland. The seaway was heavily promoted by the Eisenhower administration, which had been concerned by a lack of US control.
The seaway opened in 1959 and cost C$470 million, $336.2 million of which was paid by the Canadian government. Elizabeth II, Queen of Canada, and American President Dwight D. Eisenhower formally opened the seaway with a short cruise aboard the royal yacht after addressing crowds in Saint-Lambert, Quebec. Some 22,000 workers were employed at one time or another on the project, a 2,300-mile-long superhighway for ocean freighters. Port of Milwaukee director Harry C. Brockel forecast just before the seaway opened in 1959 that "The St. Lawrence Seaway will be the greatest single development of this century in its effects on Milwaukee's future growth and prosperity." Lester Olsen, president of the Milwaukee Association of Commerce, said, "The magnitude and potential of the St. Lawrence Seaway and the power project stir the imagination of the world."
The seaway's opening is often credited with making the Erie Canal obsolete and causing the severe economic decline of several cities along the canal in Upstate New York. By the turn of the 20th century, the Erie Canal had been largely supplanted by the railroads, which had been constructed across New York and could carry freight more quickly and cheaply. Upstate New York's economic decline was precipitated by numerous factors, only some of which had to do with the Saint Lawrence Seaway.
Under the Canada Marine Act (1998), the Canadian portions of the seaway were set up with a non-profit corporate structure; this legislation also introduced changes to federal ports.
Great Lakes and seaway shipping generates $3.4 billion in business revenue annually in the United States. In 2002, ships moved 222 million tons of cargo through the seaway. Overseas shipments, mostly of inbound steel and outbound grain, accounted for 15.4 million tons, or 6.9%, of the total cargo moved. In 2004, seaway grain exports accounted for about 3.6% of U.S. overseas grain shipments, according to the U.S. Grains Council. In a typical year, seaway steel imports account for around 6% of the U.S. annual total. The toll revenue obtained from ocean vessels is about 25–30% of cargo revenue. The Port of Duluth shipped just over 2.5 million metric tons of grain, which is less than the port typically moved in the decade before the seaway opened Lake Superior to deep-draft oceangoing vessels in 1959.
International changes have affected shipping through the seaway. Europe is no longer a major grain importer; large U.S. export shipments are now going to South America, Asia, and Africa. These destinations make Gulf and West Coast ports more critical to 21st-century grain exports. Referring to the seaway project, a retired Iowa State University economics professor who specialized in transportation issues said, "It probably did make sense, at about the time it (the Seaway) was constructed and conceived, but since then everything has changed."
Certain seaway users have been concerned about the low water levels of the Great Lakes that have been recorded since 2010.
The Panama Canal was completed in 1914 and also serves oceangoing traffic. In the 1950s, seaway designers chose not to build the locks to match the size of ships permitted by the 1914 locks at the Panama Canal (known as the Panamax limit). Instead, the seaway locks were built to match the smaller locks of the Welland Canal, which opened in 1932. The seaway locks permit passage of ships up to the Seawaymax limit.
The U.S. Army Corps of Engineers conducted a study to expand the Saint Lawrence Seaway, but the plan was scrapped in 2011 because of tight budgets.
There are seven locks in the Saint Lawrence River portion of the seaway.
There are eight locks on the Welland Canal. From the north to the south, there is lock 1 at Port Weller, followed by Lock 2 and then Lock 3, a site with a visitors' information centre and museum in St. Catharines, Ontario. There are four locks in Thorold, Ontario, including twin-flight locks 4, 5 and 6, with Lock 7 leading up to the main channel. The Lake Erie level control lock sits in Port Colborne, Ontario.
The size of vessels that can traverse the seaway is limited by the size of locks. Locks on the St. Lawrence and on the Welland Canal are long, wide, and deep. The maximum allowed vessel size is slightly smaller: long, wide, and deep. Many vessels designed for use on the Great Lakes following the opening of the seaway were built to the maximum size permissible by the locks, known informally as Seawaymax or Seaway-Max. Large vessels of the lake freighter fleet are built on the lakes and cannot travel downstream beyond the Welland Canal. On the remaining Great Lakes, these ships are constrained only by the largest lock on the Great Lakes Waterway, the Poe Lock at the Soo Locks (at Sault Ste. Marie), which is long, wide and deep.
A vessel's draft is another obstacle to passage on the seaway, particularly in connecting waterways such as the Saint Lawrence River. The depth in the seaway's channels is (Panamax-depth) downstream of Quebec City, between Quebec City and Deschaillons, to Montreal, and upstream of Montreal. Channel depths and limited lock sizes meant only 10% of current oceangoing ships, which have been built much larger than in the 1950s, can traverse the entire seaway. Proposals to expand the seaway, dating from as early as the 1960s, have been rejected since the late 20th century as too costly. In addition, researchers, policy makers, and the public are much more aware of the environmental issues that have accompanied seaway development and are reluctant to open the Great Lakes to more invasions of damaging species, as well as associated issues along the canals and river. Questions have been raised as to whether such infrastructure costs could ever be recovered. Lower water levels in the Great Lakes have also posed problems for some vessels in recent years, and pose greater issues to communities, industries, and agriculture in the region.
While the seaway is (as of 2010) mostly used for shipping bulk cargo, the possibility of its use for large-scale container shipping is under consideration as well. If the expansion project goes ahead, feeder ships would take containers from the port of Oswego on Lake Ontario in upstate New York to Melford International Terminal in Nova Scotia for transfer to larger oceangoing ships.
A website hosts measurements of wind, water levels, and water temperatures, and a real-time interactive map of seaway locks, vessels, and ports is also available. The NOAA-funded Great Lakes Water Level Dashboard compiles statistics on water depth at various points along the seaway.
To create a navigable channel through the Long Sault rapids and to allow hydroelectric stations to be established immediately upriver from Cornwall, Ontario, and Massena, New York, Lake St. Lawrence was created behind a dam. This required the condemnation and acquisition by the government of all the properties of six villages and three hamlets in Ontario; these are now collectively known as The Lost Villages. The area was flooded on July 1, 1958, creating the lake. There was also inundation on the New York side of the border, and the village of Louisville Landing was submerged.
A notable adverse environmental effect of the operation of the seaway has been the introduction of numerous invasive species of aquatic animals into the Great Lakes Basin. The zebra mussel has been most damaging in the Great Lakes and through its invasion of related rivers, waterways, and city water facilities.
The seaway, along with the Saint Lawrence River it passes through, also provides opportunities for outdoor recreation, such as boating, camping, fishing, and scuba diving. Invasive species and artificial water level controls imposed by the seaway have had a negative impact on recreational fishing. Of note, the Old Power House near Lock 23 (near Morrisburg, Ontario) became an attractive site for scuba divers. The submerged stone building has become covered with barnacles and is home to an abundance of underwater life.
The seaway passes through the Saint Lawrence River, which provides a number of divable shipwrecks within recreational scuba limits (shallower than ). The region also offers technical diving, with some wrecks lying at . The water temperature can be as warm as during the mid- to late summer months. The first of Lake Ontario is warmed and enters the Saint Lawrence River, as the fast-moving water body has no thermocline circulation.
On July 12, 2010, "Richelieu" (owned by Canada Steamship Lines) ran aground after losing power near the Côte-Sainte-Catherine lock. The grounding punctured a fuel tank, spilling an estimated 200 tonnes of diesel fuel, covering approximately 500 m2. The seaway and lock were shut down to help contain the spill.
The seaway is important for American and Canadian international trade, handling 40–50 million tons of cargo annually. About 50% of this cargo travels to and from international ports in Europe, the Middle East, and Africa. The rest comprises coastal trade, or short sea shipping, between various American and Canadian ports.
The Saint Lawrence Seaway (along with ports in Quebec) is the main route for Ontario grain exports to overseas markets. Its fees are publicly known and were limited in 2013 to an increase of 3%. A trained pilot is required for any foreign trade vessel, and a set of rules and regulations is available to assist with transit.
Commercial vessel transit information is hosted on the U.S. Saint Lawrence Seaway Development Corporation website.
Since 1997, international cruise liners have been known to transit the seaway. The Hapag-Lloyd "Christopher Columbus" carried 400 passengers to Duluth, Minnesota, that year. Since then, the number of annual seaway cruising passengers has increased to 14,000.
Every year, more than 2,000 recreational boats of more than 20 ft and one ton transit the seaway. The tolls were fixed for 2017 at $30 per lock, with a $5 per lock discount for payment in advance. Lockages are scheduled 12 hours a day, between 07:00 and 19:00, from June 15 to September 15.
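As a rough illustration (assuming the per-lock toll applies at each lock passed), a pleasure craft making the full transit through the seven St. Lawrence locks and the eight Welland Canal locks would pay 15 × $30 = $450, or 15 × $25 = $375 with the advance-payment discount.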
A list of organisations that serve the seaway in some fashion, such as chambers of commerce and municipal or port authorities, is available at the SLSDC website. A 56-page electronic "Great Lakes St. Lawrence Seaway System" Directory is published by Harbor House Publishers.
"Map of the North American Great Lakes and the Saint Lawrence Seaway from 1959," depicting the entire length beginning at the Gulf of Saint Lawrence in the east to the western-most terminus at Lake Superior.
| https://en.wikipedia.org/wiki?curid=26908 |
Silvio Berlusconi
Silvio Berlusconi (born 29 September 1936) is an Italian media tycoon and politician who has served as Prime Minister of Italy in four governments. He has served as a Member of the European Parliament (MEP) since July 2019.
Berlusconi is the controlling shareholder of Mediaset and owned the Italian football club A.C. Milan from 1986 to 2017. He is nicknamed "Il Cavaliere" (The Knight) for his Order of Merit for Labour, although he voluntarily resigned from the order in March 2014. In 2018, "Forbes" magazine ranked him as the 190th richest man in the world, with a net worth of US$8.0 billion. In 2009, "Forbes" ranked him 12th on its list of the World's Most Powerful People, owing to his domination of Italian politics throughout more than twenty years at the head of the centre-right coalition.
Berlusconi was Prime Minister for nine years in total, making him the longest-serving post-war Prime Minister of Italy and the third longest-serving since Italian unification, after Benito Mussolini and Giovanni Giolitti. He was the leader of the centre-right party Forza Italia from 1994 to 2009, and of its successor party The People of Freedom from 2009 to 2013. Since November 2013, he has led a revived Forza Italia. Berlusconi was the senior G8 leader from 2009 until 2011, and he holds the record for hosting G8 summits (having hosted three in Italy). After nearly 19 years as a member of the Chamber of Deputies, Italy's lower house, he became a member of the Senate following the 2013 general election.
On 1 August 2013, he was convicted of tax fraud by the court of final instance, the Court of Cassation, which confirmed his four-year prison sentence (of which three years were automatically pardoned) along with a two-year ban from public office. As he was over 70 years of age, he was exempt from direct imprisonment and instead served his sentence by doing unpaid social community work. Because his gross sentence exceeded two years' imprisonment, a new Italian anti-corruption law led the Senate to expel him and bar him from serving in any legislative office for six years. Berlusconi pledged to remain leader of Forza Italia throughout his custodial sentence and public office ban. After his ban ended, Berlusconi ran for and was elected as an MEP at the 2019 European Parliament election.
Berlusconi was the first person to assume the premiership without having held any prior government or administrative office. He is known for his populist political style and brash, overbearing personality. In his long tenure, he was often accused of being an authoritarian leader and a strongman. Berlusconi remains a controversial figure who divides public opinion and political analysts. Supporters emphasise his leadership skills and charismatic power, his fiscal policy based on tax reduction, and his ability to maintain strong and close foreign relations with both the United States and Russia. Critics, in general, address his performance as a politician and the ethics of his government practices in relation to his business holdings. The former include accusations of having mismanaged the state budget and of increasing the Italian government debt; the latter concern his vigorous pursuit of his personal interests while in office, including benefiting from his own companies' growth due to policies promoted by his governments, holding vast conflicts of interest through ownership of a media empire with which he restricted freedom of information, and being vulnerable to blackmail as a leader because of his turbulent private life.
Berlusconi was born in Milan in 1936, where he was raised in a middle-class family. His father, Luigi Berlusconi (1908–1989), was a bank employee, and his mother, Rosa Bossi (1911–2008), a housewife. Silvio was the first of three children; he had a sister, Maria Francesca Antonietta Berlusconi (1943–2009), and has a brother, Paolo Berlusconi (born 1949).
After completing his secondary school education at a Salesian college, he studied law at the Università Statale in Milan, graduating with honours in 1961 with a thesis on the legal aspects of advertising. Berlusconi was not required to serve the standard one-year stint in the Italian army, which was compulsory at the time. During his university studies, he played upright bass in a group formed with Fedele Confalonieri, now Mediaset chairman and an amateur pianist, and occasionally performed as a cruise ship crooner. In later life, he wrote A.C. Milan's anthem with the Italian music producer and pop singer Tony Renis and Forza Italia's anthem with the opera director Renato Serio. With the Neapolitan singer Mariano Apicella, he wrote two Neapolitan song albums: "Meglio 'na canzone" in 2003 and "L'ultimo amore" in 2006.
In 1965, he married Carla Elvira Dall'Oglio, and they had two children: Maria Elvira, better known as Marina (born 1966), and Pier Silvio (born 1969). By 1980, Berlusconi had established a relationship with the actress Veronica Lario (born Miriam Bartolini), with whom he subsequently had three children: Barbara (born 1984), Eleonora (born 1986) and Luigi (born 1988). He was divorced from Dall'Oglio in 1985, and married Lario in 1990. By this time, Berlusconi was a well-known entrepreneur, and his wedding was a notable social event. One of his best men was Bettino Craxi, a former prime minister and leader of the Italian Socialist Party. In May 2009, Lario announced that she was to file for divorce.
On 28 December 2012, Berlusconi was ordered to pay his ex-wife Veronica Lario $48 million a year in a divorce settlement filed on Christmas Day, while he kept the $100 million house they live in with their three children.
In addition to his five children, Berlusconi has ten grandchildren.
In April 2017, Berlusconi appeared in a video promoting a vegetarian Easter campaign, in which he was shown cuddling lambs he had adopted to save them from slaughter for the traditional Easter Sunday feast. He has neither confirmed nor denied, however, whether he himself is a vegetarian.
Berlusconi's business career began in construction. In the late 1960s, he built Milano Due (Italian for "Milan Two"), a development of 4,000 residential apartments east of Milan. A residential centre in the Italian town of Segrate, it was built as a new town by Edilnord, a Berlusconi-owned company associated with the Fininvest group.
A notable feature of Milano Due is a system of walkways and bridges that connects the whole neighbourhood, making it possible to walk around without ever crossing traffic. The works started in 1970 and were completed in 1979. Distinctive landmarks include the sporting facilities, a small artificial lake and a children's playground.
The profits from this venture provided the seed money for his advertising agency.
Berlusconi first entered the media world in 1973, by setting up a small cable television company, TeleMilano, to service units built on his Segrate properties. It began transmitting in September the following year. TeleMilano was the first Italian private television channel, and later evolved into Canale 5, the first national private TV station.
After buying two further channels, Berlusconi relocated the station to central Milan in 1977 and began broadcasting over the airwaves.
In 1978, Berlusconi founded his first media group, Fininvest, and joined the Propaganda Due masonic lodge. In the five years leading up to 1983 he earned some 113 billion Italian lire (€58.3 million). The funding sources are still unknown because of a complex system of holding companies, despite investigations conducted by various state attorneys.
Fininvest soon expanded into a country-wide network of local TV stations which had similar programming, forming, in effect, a single national network. This was seen as breaching the Italian public broadcaster RAI's statutory monopoly by creating a national network, which was later abolished. In 1980, Berlusconi founded Italy's first private national network, Canale 5, followed shortly thereafter by Italia 1, which was bought from the Rusconi family in 1982, and Rete 4, which was bought from Mondadori in 1984. He then launched three international sister networks: La Cinq (which began operations in 1986), Tele 5 (which launched in 1988), and Telecinco (which launched in 1989). La Cinq and Tele 5 ceased operations in 1992 and were later replaced by La Cinquième and DSF, respectively.
Berlusconi created the first, and only, Italian commercial TV empire. He was assisted by his connections to Bettino Craxi, secretary-general of the Italian Socialist Party and prime minister of Italy at the time, whose government passed, on 20 October 1984, an emergency decree legalising the nationwide transmissions made by Berlusconi's television stations. This was in response to judgements issued on 16 October 1984 by courts in Turin, Pescara and Rome which, enforcing a law that restricted nationwide broadcasting to RAI, had ordered these private networks to cease transmitting.
After political turmoil in 1985, the decree was approved definitively. But for some years, Berlusconi's three channels remained in a legal limbo, and were not allowed to broadcast news and political commentary. They were elevated to the status of full national TV channels in 1990, by the so-called Mammì law.
In 1995, Berlusconi sold a portion of his media holdings, first to the German media group Kirch Group (now bankrupt) and then by public offer. In 1999, Berlusconi expanded his media interests by forming a partnership with Kirch called the "Epsilon MediaGroup".
On 9 July 2011, a Milan court ordered Fininvest to pay 560 million euros in damages to Compagnie Industriali Riunite in a long-running legal dispute.
On 5 August 2016, Fininvest announced the signing of a preliminary agreement to sell all of its shares of A.C. Milan to Sino-Europe Sports Investment Management Changxing Co., Ltd. The deal was scheduled to be finalized by the end of 2016. On 13 April 2017, Berlusconi sold A.C. Milan to Rossoneri Sport Investment Lux for a total of €830 million, ending a 31-year ownership.
Berlusconi rapidly rose to the forefront of Italian politics in January 1994. He was elected to the Chamber of Deputies for the first time and appointed as Prime Minister following the 1994 parliamentary elections, when Forza Italia gained a relative majority a mere three months after having been launched. However, his cabinet collapsed after nine months, due to internal disagreements among the coalition parties. In the April 1996 snap parliamentary elections, Berlusconi was defeated by the centre-left candidate Romano Prodi. In the May 2001 parliamentary elections, he was again the centre-right candidate for Prime Minister and won against the centre-left candidate Francesco Rutelli. Berlusconi then formed his second and third cabinets, until 2006. Berlusconi was leader of the centre-right coalition in the April 2006 parliamentary elections, which he lost by a very narrow margin, his opponent again being Romano Prodi. He was re-elected in the parliamentary elections of April 2008 following the collapse of Prodi's government and sworn in for a third time as Prime Minister on 8 May 2008.
After losing his majority in parliament amid growing fiscal problems related to the European debt crisis, Berlusconi resigned as Prime Minister on 16 November 2011. In February 2013 Berlusconi led the People of Freedom and its right-wing allies in the campaign for the parliamentary elections. Although he initially planned to run for a fifth term as Prime Minister, as part of the agreement with the Lega Nord he instead planned to lead the coalition without becoming Prime Minister. Berlusconi's centre-right coalition gained 29% of the vote, ranking second after the centre-left coalition Italy Common Good led by Pier Luigi Bersani. Subsequently, the PdL supported the government of Enrico Letta, together with the Democratic Party and the centrist Civic Choice of former Prime Minister Mario Monti.
He was criticised for his electoral coalitions with right wing populist parties (the Lega Nord and the National Alliance) and for apologetic remarks about Mussolini, though he also officially apologised for Italy's actions in Libya during colonial rule. While in power, Berlusconi maintained ownership of Mediaset, the largest media company in Italy, and was criticised for his dominance of the Italian media. His leadership was also undermined by sex scandals.
Berlusconi's political career began in 1994, when he entered politics, reportedly admitting to Indro Montanelli and Enzo Biagi that he was forced to do so to avoid imprisonment. He subsequently served as Prime Minister of Italy from 1994 to 1995, 2001 to 2006, and 2008 to 2011. His career was racked with controversies and trials; amongst these was his failure to honour his promise to sell his personal assets in Mediaset, the largest television broadcaster in Italy, in order to dispel any perceived conflicts of interest.
In the early 1990s, the "Pentapartito" – the five governing parties: Christian Democracy ("Democrazia Cristiana"), the Italian Socialist Party, the Italian Social-Democratic Party, the Italian Republican Party and the Italian Liberal Party – lost much of their electoral strength almost overnight due to a large number of judicial investigations concerning the financial corruption of many of their foremost members (see the Mani Pulite affair). This led to a general expectation that the upcoming elections would be won by the Democratic Party of the Left, the heirs to the former Italian Communist Party, and their Alliance of Progressives coalition – unless an alternative arose. On 26 January 1994, Berlusconi announced his decision to enter politics ("enter the field", in his own words), presenting his own political party, Forza Italia, on a platform focused on defeating "the Communists". His political aim was to convince the voters of the Pentapartito, who were shocked and confused by the Mani Pulite scandals, that Forza Italia offered both a fresh uniqueness and the continuation of the pro-Western free-market policies followed by Italy since the end of the Second World War. Shortly after he decided to enter the political arena, investigators into the Mani Pulite affair were said to be close to issuing warrants for the arrest of Berlusconi and senior executives of his business group. During his political career Berlusconi repeatedly stated that the Mani Pulite investigations were led by communist prosecutors who wanted to establish a Soviet-style government in Italy.
In order to win the March 1994 general election, Berlusconi formed two separate electoral alliances: Pole of Freedoms ("Polo delle Libertà") with the Lega Nord ("Northern League") in northern Italian districts, and another, the Pole of Good Government ("Polo del Buon Governo"), with the National Alliance ("Alleanza Nazionale"; heir to the Italian Social Movement) in central and southern regions. In a pragmatic move, he did not ally with the latter in the North because the League disliked them. As a result, Forza Italia was allied with two parties that were not allied with each other.
Berlusconi launched a massive campaign of electoral advertisements on his three TV networks and prepared his top advertising salesmen as candidates with seminars and screen tests; 50 of them were later elected despite an absence of legislative experience. He subsequently won the elections, with Forza Italia garnering 21% of the popular vote, more than any other single party. One of the most significant promises he made in order to secure victory was that his government would create "one million more jobs". He was appointed Prime Minister in 1994, but his term in office was short because of the inherent contradictions in his coalition: the League, a regional party with a strong electoral base in northern Italy, was at that time fluctuating between federalist and separatist positions, and the National Alliance was a nationalist party that had yet to renounce neo-fascism at the time.
In December 1994, following the leaking to the press of news of a fresh investigation by Milan magistrates, Umberto Bossi, leader of the Lega Nord, left the coalition claiming that the electoral pact had not been respected, forcing Berlusconi to resign from office and shifting the majority's weight to the centre-left. Lega Nord also resented the fact that many of its MPs had switched to Forza Italia, allegedly lured by promises of more prestigious portfolios. In 1998, various articles attacking Berlusconi were published by Lega Nord's official newspaper "La Padania", with titles such as "La Fininvest è nata da Cosa Nostra" – "Fininvest (Berlusconi's principal company) was founded by the Mafia".
Berlusconi remained as caretaker prime minister for a little over a month, until his replacement by a technocratic government headed by Lamberto Dini. Dini had been a key minister in the Berlusconi cabinet, and Berlusconi said the only way he would support a technocratic government would be if Dini headed it. In the end, however, Dini was supported by most of the opposition parties, but not by Forza Italia and Lega Nord. In 1996, Berlusconi and his coalition lost the elections and were replaced by a centre-left government led by Romano Prodi.
In 2001, Berlusconi ran again, as leader of the right-wing coalition House of Freedoms, which included the Union of Christian and Centre Democrats, the Lega Nord, the National Alliance and other parties. Berlusconi's success in the May 2001 general election led to him becoming Prime Minister once more, with the coalition receiving 49.6% of the vote for the Chamber of Deputies and 42.5% for the Senate.
On the television interview programme "Porta a Porta", during the last days of the electoral campaign, Berlusconi created a powerful impression on the public by undertaking to sign a so-called "Contratto con gli Italiani" (Contract with the Italians), an idea copied outright by his advisor Luigi Crespi from Newt Gingrich's Contract with America, introduced six weeks before the 1994 US Congressional election. This was considered a creative masterstroke of his 2001 bid for the prime ministership. In this contract Berlusconi committed to improving several aspects of the Italian economy and life. Firstly, he undertook to simplify the complex tax system by introducing just two income tax rates (33% for those earning over 100,000 euros, and 23% for anyone earning less than that figure; anyone earning less than 11,000 euros a year would not be taxed). Secondly, he promised to halve the unemployment rate. Thirdly, he committed to financing and developing a massive new public works programme. Fourthly, he promised to raise the minimum monthly pension rate to 516 euros. Fifthly, he promised to reduce crime by introducing police officers to patrol all local zones and areas in Italy's major cities. Berlusconi promised not to stand for re-election in 2006 if he failed to honour at least four of these five promises.
Opposition parties claim Berlusconi was not able to achieve the goals he promised in his "Contratto con gli Italiani". Some of his partners in government, especially the National Alliance and the Union of Christian and Centre Democrats, admitted the Government fell short of the promises made in the agreement, attributing the failure to an unforeseeable downturn in global economic conditions. Berlusconi himself consistently asserted that he achieved all the goals of the agreement, and said his Government provided "un miracolo continuo" (a continuous miracle) that made all 'earlier governments pale' (by comparison). He attributed the widespread failure to recognise these achievements to a campaign of mystification and vilification in the print media, asserting that 85% of newspapers were opposed to him. Luca Ricolfi, an independent analyst, held that Berlusconi had managed to deliver only one promise out of five, the one concerning minimum pension rates. According to Ricolfi, the other four promises were not honoured, in particular the undertakings on tax simplification and crime reduction.
The House of Freedoms did not do as well in the 2003 local elections as it had in the 2001 national elections. In common with many other European governing groups, it lost ground in the 2004 elections to the European Parliament, gaining 43.37% support. Forza Italia's own support fell from 29.5% to 21.0% (in the 1999 European elections Forza Italia had won 25.2%). As an outcome of these results, the other coalition parties, whose electoral results were more satisfactory, asked Berlusconi and Forza Italia for greater influence over the government's political line.
In the 2005 regional elections (3–4 April 2005), centre-left candidates for the regional presidencies won in 12 of the 14 regions where control of local governments and presidencies was at stake. Berlusconi's coalition held only two of the regions up for re-election (Lombardy and Veneto). Three parties, the Union of Christian and Centre Democrats, the National Alliance and the New Italian Socialist Party, threatened to withdraw from the Berlusconi government. After some hesitation, Berlusconi presented to the President of the Republic a request for the dissolution of his government on 20 April 2005. On 23 April, he formed a new government with the same allies, reshuffling ministers and amending the government programme. A key point demanded by the Union of Christian and Centre Democrats (and to a lesser extent by the National Alliance) for their continued support was that the strong focus on tax reduction be scaled back.
A key point in the Berlusconi government's programme was a planned reform of the Italian Constitution, which Berlusconi considered to be 'inspired by the Soviets', an issue on which the coalition parties themselves initially had significantly different opinions. The Lega Nord insisted on a federalist reform (devolution of more power to the regions) as a condition for remaining in the coalition. The National Alliance party pushed for a so-called 'strong premiership' (more powers to the Prime Minister), intended as a counterweight to any federalist reform, in order to preserve the integrity of the nation. The Union of Christian and Centre Democrats asked for a proportional electoral law that would not damage small parties, and was generally more willing to discuss compromises with the moderate wing of the opposition.
Difficulties in negotiating an agreement caused some internal unrest in the Berlusconi government in 2003, but they were mostly overcome and the law (including devolution of powers to the regions, Federal Senate and "strong premiership") was passed by the Senate in April 2004; it was slightly modified by the Chamber of Deputies in October 2004, and again in October 2005, and finally approved by the Senate on 16 November 2005, with a narrow majority. Approval in a referendum is necessary in order to amend the Italian Constitution without a qualified two-thirds parliamentary majority. The referendum was held on 25–26 July 2006 and resulted in the rejection of the constitutional reform, with 61.3% of voters casting ballots against it.
Operating under a new electoral law written unilaterally by the governing parties with strong criticism from the parliamentary opposition, the April 2006 general election was held. The results of this election handed Romano Prodi's centre-left coalition, known as The Union, (Berlusconi's opposition) a very thin majority: 49.8% against 49.7% for the centre-right coalition House of Freedoms in the Lower House, and a two-senator lead in the Senate (158 senators for The Union and 156 for the House of Freedoms). The Court of Cassation subsequently validated the voting procedures and determined that the election process was constitutional.
According to the new electoral rules, The Union (nicknamed "The Soviet Union" by Berlusconi), with a margin of only 25,224 votes (out of over 38 million cast), nevertheless won 348 seats (compared to 281 for the House of Freedoms) in the lower house, as a result of a majority premium given to whichever coalition of parties was awarded more votes.
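To put that margin in perspective (a rough calculation from the figures above): 25,224 votes out of more than 38 million cast is well under 0.1% of the vote, yet under the majority-premium rule it translated into a 67-seat advantage (348 against 281) in the lower house.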
This electoral law, approved shortly before the election by Berlusconi's coalition in an attempt to improve its chances of winning, led instead to the coalition's defeat and gave Prodi the chance to form a new cabinet. However, Prodi's coalition consisted of a large number of smaller parties: had just one of the nine parties that formed The Union withdrawn its support, his government would have collapsed. This situation was also a result of the new "diabolic" electoral system.
Centrist parties such as the Union of Christian and Centre Democrats immediately conceded The Union's victory, while other parties, like Berlusconi's Forza Italia and the Northern League, refused to accept its validity, right up until 2 May 2006, when Berlusconi submitted his resignation to President Ciampi.
In the run-up to the 2006 general election, there had been talk among some of the coalition members of the House of Freedoms about a possible merger into a "united party of moderates and reformers". Forza Italia, the National Alliance party of Gianfranco Fini, and the Union of Christian and Centre Democrats of Pier Ferdinando Casini all seemed interested in the project. Soon after the election, however, Casini started to distance his party from its historical allies.
On 2 December 2006, during a major demonstration of the centre-right in Rome against the government led by Romano Prodi, Berlusconi proposed the foundation of a "Freedom Party", arguing that the people and voters of the different political movements aligned with the demonstration were all part of a "people of freedom".
On 18 November 2007, after claiming the collection of more than 7 million signatures (including that of Umberto Bossi) demanding that the President of the Republic Giorgio Napolitano call a fresh election, Berlusconi announced from the running board of a car in a crowded Piazza San Babila in Milan that Forza Italia would soon merge or transform into The People of Freedom party, also known as the PdL (Il Popolo della Libertà). Berlusconi also stated that this new political movement could include the participation of other parties. Both supporters and critics of the new party called Berlusconi's announcement "the running board revolution".
After the sudden fall of the Prodi II Cabinet on 24 January, the break-up of The Union coalition and the subsequent political crisis (which paved the way for a fresh general election in April 2008), Berlusconi, Gianfranco Fini and other party leaders finally agreed on 8 February 2008 to form a joint list named "The People of Freedom", allied with the Lega Nord of Umberto Bossi and with the Sicilian Movement for Autonomy of Raffaele Lombardo.
In the snap parliamentary elections held on 13/14 April 2008, this coalition won against Walter Veltroni's centre-left coalition in both houses of the Italian Parliament.
In the 315-member Senate of the Republic, Berlusconi's coalition won 174 seats to Veltroni's 134. In the lower house, Berlusconi's conservative bloc led by a margin of 9% of the vote: 46.5% (344 seats) to 37.5% (246 seats). Berlusconi capitalised on discontent over the nation's stagnating economy and the unpopularity of Prodi's government. His declared top priorities were to remove piles of rubbish from the streets of Naples and to improve the state of the Italian economy, which had under-performed the rest of the Eurozone for years. He also said he was open to working with the opposition, and pledged to fight tax avoidance and tax evasion, reform the judicial system and reduce public debt. He intended to reduce the number of Cabinet ministers to 12. Berlusconi and his ministers (Berlusconi IV Cabinet) were sworn in on 8 May 2008.
On 21 November 2008, the National Council of Forza Italia, chaired by Alfredo Biondi and attended by Berlusconi himself, dissolved Forza Italia and established The People of Freedom, whose inauguration took place on 27 March 2009, the 15th anniversary of Berlusconi's first electoral victory.
Forza Italia had never held a formal party congress to formulate its rules, procedures, and democratic balloting for candidates and issues (since 1994, three party conventions had been held, all of them resolving to support Berlusconi and re-electing him by acclamation). On 27 March 2009, at the foundation congress of the People of Freedom political movement, the statute of the new party was put to a vote of approval: of the 5,820 voting delegates, 5,811 voted in favour, 4 against and 5 abstained. During that congress Berlusconi was elected Chairman of the People of Freedom by a show of hands; according to the official minutes of the congress, 100 per cent of the delegates voted for him.
Between 2009 and 2010, Gianfranco Fini, former leader of the national conservative National Alliance (AN) and President of the Italian Chamber of Deputies, became a vocal critic of Berlusconi's leadership. Fini departed from the party's majority line on several issues but, above all, he was a proponent of a more structured party organisation. His criticism was aimed at the leadership style of Berlusconi, who tended to rely on his personal charisma to lead the party from the centre and favoured a less structured, movement-style party that organised itself only at election times.
On 15 April 2010, an association named Generation Italy was launched in order to better represent Fini's views within the party and push for a different form of party organisation. On 22 April 2010 the National Committee of the PdL convened in Rome for the first time in a year. The conflict between Fini and Berlusconi was covered live on television. At the end of the day, a resolution proposed by Berlusconi's loyalists was put before the assembly and approved by a landslide margin. On 29 July 2010, the party executive released a document in which Fini was described as "incompatible" with the political line of the PdL and unable to perform his job of President of the Chamber of Deputies in a neutral way. Berlusconi asked Fini to step down, and the executive proposed the suspension from party membership of three MPs who had harshly criticised Berlusconi and accused some party members of criminal offences. In response, Fini and his followers formed their own groups in both chambers under the name Future and Freedom (FLI). It soon became clear that FLI would leave the PdL and become an independent party. On 7 November, during a convention in Bastia Umbra, Fini asked Berlusconi to step down from his post of Prime Minister and proposed a new government including the Union of the Centre (UdC). A few days later, the four FLI members of the government resigned. On 14 December, FLI voted against Berlusconi in a vote of confidence in the Chamber of Deputies, a vote nonetheless won by Berlusconi by 314 to 311.
In May 2011, PdL suffered a big blow in local elections. Particularly painful was the loss of Milan, Berlusconi's hometown and party stronghold. In response to this and to conflicts within party ranks, Angelino Alfano, the Justice minister, was chosen as national secretary in charge of reorganising and renewing the party. The appointment of 40-year-old Alfano, a former Christian Democrat and later leader of Forza Italia in Sicily, was unanimously decided by the party executive. On 1 July, the National Council modified the party's constitution and Alfano was elected secretary almost unanimously. In his acceptance speech, Alfano proposed the introduction of primaries.
On 10 October, the Chamber of Deputies rejected the law on the budget of the State proposed by the government. As a result, Berlusconi moved for a confidence vote in the Chamber on 14 October; he won it with just 316 votes to 310, the minimum required to retain a majority. An increasing number of Deputies continued to cross the floor and join the opposition, and on 8 November the Chamber approved the previously rejected budget law, but with only 308 votes, as the opposition parties did not participate in the vote in order to highlight that Berlusconi had lost his majority. After the vote, Berlusconi announced he would resign once Parliament passed the economic reforms. Among other things, his perceived failure to tackle Italy's debt crisis, with an estimated debt of €1.9 trillion ($2.6 trillion), had urged Berlusconi to leave office. The popularity of this decision was reflected in the fact that, while he was resigning, crowds sang the "Hallelujah" chorus of Handel's "Messiah", complete with some vocal accompaniment; there was also dancing in the streets outside the Quirinal Palace, the official residence of the President of Italy, where Berlusconi went to tender his resignation.
Austerity measures were passed, raising €59.8 billion from spending cuts and tax increases, including freezing public-sector salaries until 2014 and gradually increasing the retirement age for women in the private sector from 60 in 2014 to 65 in 2026. The resignation also came at a difficult time for Berlusconi, as he was involved in numerous trials for corruption, fraud and sex offences. He was often found guilty in lower courts but used loopholes in Italy's legal system to evade incarceration.
Berlusconi had also failed to meet some of his pre-election promises and had failed to prevent economic decline and introduce serious reforms. Many believed that the problems and doubts over Berlusconi's leadership and his coalition were one of the factors that contributed to market anxieties over an imminent Italian financial disaster, which could have a potentially catastrophic effect on the 17-nation eurozone and the world economy. Many critics of Berlusconi accused him of using his power primarily to protect his own business ventures. Umberto Bossi, leader of the Northern League, a partner in Berlusconi's right-wing coalition, was quoted as informing reporters outside parliament, "We asked the prime minister to step aside."
On 12 November 2011, after a final meeting with his cabinet, Berlusconi met Italian President Giorgio Napolitano at the Palazzo del Quirinale to tender his resignation. As he arrived at the presidential residence, a hostile crowd gathered with banners insulting Berlusconi and threw coins at the car. After his resignation, the booing and jeering continued as he left in his convoy, with the public shouting words such as "buffoon", "dictator" and "mafioso". Following Berlusconi's resignation, Mario Monti formed a new government that would remain in office until the next scheduled elections in 2013. On 16 November, Monti announced that he had formed a Cabinet and was sworn in as Prime Minister of Italy, also appointing himself Minister of Economy and Finances.
In the following years Berlusconi often expressed his point of view regarding his resignation in 2011. He accused Angela Merkel, Nicolas Sarkozy, Christine Lagarde and Giorgio Napolitano, along with other global economic and financial powers, of having plotted against him and forced him to resign because he refused to accept a loan from the International Monetary Fund which, according to him, would have sold the country to the IMF. This account was corroborated by the former Prime Minister of Spain José Luis Rodríguez Zapatero.
In December 2012, Berlusconi announced on television that he would run again to become Prime Minister. Berlusconi said his party's platform would include opposition to Monti's economic performance, which he said put Italy into a "recessive spiral without end." He also told the media, on the sidelines of A.C. Milan's practice session (the football club he owns along with Mediaset, the largest media outlet in the country): "I race to win. To win, everyone said there had to be a tested leader. It's not that we did not look for one. We did, and how! But there isn't one...I'm doing it out of a sense of responsibility."
On 7 January 2013, Berlusconi announced he had made a coalition agreement (the centre-right coalition) with Lega Nord (LN); as part of it, the PdL would support Roberto Maroni's bid for the presidency of Lombardy, and Berlusconi would run as "leader of the coalition", though he suggested he could accept a role as Minister of Economy in a cabinet headed by another People of Freedom member, such as Angelino Alfano. Later that day, LN leader Maroni confirmed his party would not support Berlusconi's appointment as Prime Minister in the event of an electoral win.
Berlusconi's coalition gained 29.1% of the vote and 125 seats in the Chamber of Deputies, and 30.7% of the vote and 117 seats in the Senate.
In April 2013, Berlusconi's People of Freedom announced his support of the government of Enrico Letta, together with the Democratic Party and the centrist Civic Choice, of former Prime Minister Mario Monti.
In June 2013, Berlusconi announced the refoundation of his first party, Forza Italia. The new party was launched on 18 September and officially founded on 16 November. After the foundation of Forza Italia, Berlusconi announced that his new party would oppose the grand coalition government of Enrico Letta; this new political position caused dissent within the movement, and the "governmental" wing of Forza Italia, led by the Vice-Prime Minister and Minister of the Interior Angelino Alfano, split from FI and founded a Christian-democratic party called the New Centre-Right, in support of the government.
On 1 August 2013, he was convicted of tax fraud by the court of final instance, the Court of Cassation, which confirmed his four-year prison sentence (of which three years were automatically pardoned) along with a two-year ban from public office. As he was over 70 years of age, he was exempted from direct imprisonment and instead served his sentence by doing unpaid social community work. Because his gross prison sentence exceeded two years, a new Italian anti-corruption law led the Senate to expel him and bar him from serving in any legislative office for six years. Berlusconi pledged to remain leader of Forza Italia throughout his custodial sentence and public office ban. He was, however, unable to campaign for his party and, according to him, this was the main reason for its declining opinion poll numbers, which put the party steadily in fourth place, behind the Democratic Party, the Five Star Movement and FI's long-time coalition partner Lega Nord.
In March 2017 he expressed his intention to run once again as the centre-right candidate for the premiership, even though he was banned from public office until 2019; the 2018 general election was his seventh as the centre-right front-runner. However, the election resulted in the Lega Nord winning more seats than Forza Italia, with no electoral coalition winning an outright majority.
In January 2019, Berlusconi expressed his intention to stand as a candidate in the 2019 European Parliament election in Italy. In the election, Forza Italia received only 8.8% of the vote, the worst result in its history. Berlusconi was elected to the Parliament, becoming the oldest member of the assembly.
Berlusconi and his cabinets showed a strong tendency to support American foreign policy, despite the policy divide between the U.S. and many founding members of the European Union (Germany, France, Belgium) during the Bush administration. Under Berlusconi's leadership, the Italian government also shifted its traditional foreign policy stance, moving from being the most pro-Arab western government towards greater friendship with Israel and Turkey. This resulted in a rebalancing of relations with all the Mediterranean countries, aiming at "equal closeness" with them. Berlusconi is one of the strongest supporters of Turkey's application to join the European Union; to support that application, he invited Prime Minister Erdoğan to take part in a meeting of the European leaders of Denmark, France, Germany, the Netherlands, Spain, Sweden, and the United Kingdom, gathered in L'Aquila for the 2009 G8 summit. Under Berlusconi, Italy became a solid ally of the United States through his support for the War in Afghanistan and, following the 2003 invasion of Iraq, the Iraq War, as part of the War on Terror. On 30 January 2003, Berlusconi signed "The letter of the eight" supporting U.S. policy on Iraq.
Berlusconi, in his meetings with United Nations Secretary-General Kofi Annan and U.S. President George W. Bush, said that he pushed for "a clear turnaround in the Iraqi situation" and for a quick handover of sovereignty to the government chosen by the Iraqi people. Italy had some 3,200 troops deployed in Southern Iraq, the third largest contingent there after the American and British forces. When Romano Prodi became Prime Minister, Italian troops were gradually withdrawn from Iraq in the second half of 2006 with the last soldiers leaving the country in December of that year.
In November 2007, Italy's state-owned energy company Eni signed an agreement with the Russian state-owned Gazprom to build the South Stream pipeline. Italian parliament members investigating the deal discovered that Central Energy Italian Gas Holding (CEIGH), a part of the Centrex Group, was to play a major role in the lucrative agreement. Bruno Mentasti-Granelli, a close friend of Berlusconi, owned 33 percent of CEIGH. The Italian parliament blocked the contract and accused Berlusconi of having a personal interest in the Eni-Gazprom agreement.
Berlusconi is among the most vocal supporters of closer ties between Russia and the European Union. In an article published in Italian media on 26 May 2002, he said that the next step in Russia's growing integration with the West should be EU membership. On 17 November 2005, Berlusconi commented, in relation to the prospect of such membership, that he was "convinced that even if it is a dream ... it is not too distant a dream and I think it will happen one day." He has made similar comments on other occasions as well.
Berlusconi had a warm relationship with Vladimir Putin. In September 2014, Berlusconi accused the United States, NATO and the EU of "a ridiculously and irresponsibly sanctioning approach to the Russian Federation, which cannot but defend Ukrainian citizens of Russian origin that it considers brothers".
The two leaders often described their relationship as a close friendship, continuing to organize bilateral meetings even after Berlusconi's resignation in November 2011.
Under Berlusconi, Italy was an ally of Israel. Berlusconi was noted for his close and friendly relationship with Israeli Prime Minister Benjamin Netanyahu, who said of him: "We are lucky that there is a leader such as yourself. I don't believe we have a better friend in the international community", and who described him as "one of the greatest friends". Berlusconi believed that Israel should be made an EU member, stating that "My greatest desire, as long as I am a protagonist in politics, is to bring Israel into membership of the European Union". He strongly defended Israel in its conflict with the Palestinians and continued his support for Israel after leaving office.
While Berlusconi was in office, Israel and Italy negotiated a $1 billion deal whereby Israel would build reconnaissance satellites for Italy, while Israel would purchase the M-346 training plane for its air force.
Berlusconi visited Alexander Lukashenko in Belarus in 2009, becoming the first Western leader to visit Lukashenko since he came to power in 1994. At a press conference, Berlusconi paid compliments to Lukashenko and said "Good luck to you and your people, whom I know love you".
On 5 April 2009, at the EU-US summit in Prague Berlusconi proposed an eight-point road map to accelerate the Euro-Atlantic integration of the western Balkans. During that summit the Italian Foreign Minister Franco Frattini urged his European colleagues to send "visible and concrete" signs to the countries concerned (Serbia, Kosovo, Bosnia, Montenegro, Croatia, Macedonia, and Albania).
On 30 August 2008, the Libyan leader Muammar Gaddafi and Italian Prime Minister Berlusconi signed a historic cooperation treaty in Benghazi. Under its terms, Italy would pay $5 billion to Libya as compensation for its former military occupation. In exchange, Libya would take measures to combat illegal immigration coming from its shores and boost investment in Italian companies. The treaty was ratified by Italy on 6 February 2009, and by Libya on 2 March, during a visit to Tripoli by Berlusconi. In June 2009 Gaddafi made his first visit to Rome, where he met Prime Minister Berlusconi, President of the Republic Giorgio Napolitano and the Speaker of the Senate, Renato Schifani.
Gaddafi also took part in the G8 summit in L'Aquila in July as Chairman of the African Union. During the summit a warm handshake between US President Barack Obama and Muammar Gaddafi took place (the first time the Libyan leader had been greeted by a serving US president). Later, at the summit's official dinner hosted by President Giorgio Napolitano, US and Libyan leaders upset protocol by sitting next to Italian Prime Minister and G8 host Berlusconi. (According to protocol, Gaddafi should have sat three places away from Berlusconi).
However, when Gaddafi faced a civil war in 2011, Italy froze some Libyan assets linked to him and his family pursuant to a United Nations-sponsored sanctions regime, and then took part in the bombing of the country to enforce the UN-mandated no-fly zone over Libya. After the death of Gaddafi, Italy recognized the National Transitional Council as the government of Libya.
Berlusconism ("berlusconismo") is a term used in the Western media and by a few Italian analysts to describe the political positions of Berlusconi.
The term "Berlusconismo" arose in the 1980s, with a strongly positive meaning, as a synonym for "entrepreneurial optimism", that is, as an entrepreneurial spirit which is not shaken by difficulties, and believes that problems can be solved. However, in the 21st century, the meaning has changed.
According to the Italian definition given by the online dictionary of the Italian Encyclopedia Institute, Berlusconismo has a wide range of meanings, all having their origins in the figure of Berlusconi and the political movement inspired by him: the movement of thought, but also the social phenomenon, and even the phenomenon "of custom", bound to his entrepreneurial and political figure. The term is also used to refer to a certain laissez-faire vision supported by him, not only in the economy and markets, but also in relation to politics.
According to Berlusconi's political and entrepreneurial opponents, "Berlusconismo" is merely a form of demagogic populism, comparable to Fascism, in part because Berlusconi has defended aspects of the regime of Benito Mussolini, even though he has criticised the Fascist racial laws and the alliance with Nazi Germany. In 2013, he returned to calling Mussolini a good leader whose biggest mistake was signing up to exterminate the Jews. By contrast, his supporters compare "Berlusconismo" to French Gaullism and Argentinian Peronism.
Berlusconi defines himself as moderate, liberal, and a free trader, but he is often accused of being a populist and a conservative. Since his resignation in 2011, Berlusconi has become increasingly Eurosceptic, and he is often critical of German Chancellor Angela Merkel.
One of Berlusconi's main leadership tactics has been to use the party as an apparatus for reaching power (a "light party", so called because it lacks a complex structure), a tactic decidedly comparable to that used by Charles de Gaulle in France. Another feature of great importance is the emphasis on a "liberal revolution", summarised by the "Contract with the Italians" of 2001. To these pillars is added a strong reformism, principally concerning the form of the Italian state and the constitution: the passage from a parliamentary republic to a presidential one, a higher election threshold, the abolition of the Senate, a halving of the number of deputies, the abolition of the provinces, and a reform of the judiciary involving the separation of magistrates' careers and civil liability for magistrates, whom Berlusconi does not consider impartial. Berlusconi has declared himself to be persecuted by judges, having undergone 34 trials, accusing them of being manipulated by left-wingers and comparing himself to Enzo Tortora, a victim of a miscarriage of justice.
More recently, Berlusconi has declared himself favourable to civil unions.
A number of writers and political commentators consider Berlusconi's political success a precedent for the 2016 election of real estate tycoon Donald Trump as the 45th President of the United States, most of them citing Berlusconi's widely panned prime ministerial tenure and drawing the comparison in dismay. Roger Cohen of "The New York Times" argued, "Widely ridiculed, endlessly written about, long unscathed by his evident misogyny and diverse legal travails, Berlusconi proved a Teflon politician [...] Nobody who knows Berlusconi and has watched the rise and rise of Donald Trump can fail to be struck by the parallels." "The New York Times" also published an interactive quiz called "Name That Narcissist", which compiled quotes from both politicians and asked readers to guess whether each was uttered by Berlusconi or Trump. In "The Daily Beast", Barbie Latza Nadeau wrote, "If Americans are wondering just what a Trump presidency would look like, they only need to look at the traumatized remains of Italy after Berlusconi had his way." During the 2016 presidential election, "Politico" described Berlusconi as the closest historical parallel to Trump among world leaders, arguing that his tenure as Italy's Prime Minister was a good bellwether of what a Trump presidency would be like.
In a piece written for "Slate" and published in April 2017, Lorenzo Newman noted the similarities in the career trajectories of the two men ("Both grew their fortunes on allegedly mafia-linked real-estate developments, transitioned into successful careers as media moguls, and, against all odds, ascended to the helm of their respective national governments"), but also highlighted their shared tendency to question and undermine established institutions such as the judiciary and the press, the way that neither of them had been accepted by their countries' respective establishments despite their wealth, and how they channelled the resulting resentment into a populist form of politics by "portraying themselves as everymen, if not in wealth, then in language, tone (and) aspirations". He also pointed out other commonalities, such as responding to concerns about conflicts of interest by delegating responsibility for running their businesses to family members.
Andrej Babiš, the current Prime Minister of the Czech Republic, has also been compared to Silvio Berlusconi because of his media ownership, business activities, political influence and legal problems, with a prison sentence hanging over him. An article published by the American magazine Foreign Policy drew parallels between the two, labelling Babiš with the portmanteau nickname "Babisconi".
As of April 2014, after the Unipol case had been concluded with Berlusconi acquitted due to the statute of limitations, Berlusconi was involved in three ongoing court trials.
In February 2012, Milan prosecutors brought charges against Berlusconi for alleged abuse of office connected with the 2005 publication of confidential wiretaps by the Italian newspaper "Il Giornale", which is owned by Berlusconi's brother. The publication of the conversations between then Governor of the Bank of Italy Antonio Fazio, senior management of Unipol and the Italian centre-left politician Piero Fassino was a breach of secrecy rules and was seen at the time as an attempt to discredit Berlusconi's political rivals. Their publication also eventually led to the collapse of the proposed takeover of Banca Nazionale del Lavoro by Unipol and the resignation of Fazio. The head of the company used by Italian prosecutors to record the conversations had previously been convicted of stealing the recordings and making them available to Berlusconi. On 7 February 2012, at an initial court hearing, Berlusconi denied that he had listened to the tapes and ordered their publication. On 7 March 2013, Berlusconi was sentenced to a one-year jail term. On 31 March 2014, the Milan Court of Appeal ruled that whilst the evidence did not clear Paolo and Silvio Berlusconi of guilt, both were acquitted due to the statute of limitations, although an €80,000 compensatory award to Fassino was upheld.
In February 2013, Berlusconi was placed under investigation for corruption and illegal financing of political parties by the public prosecutor's office of Naples, in the persons of Vincenzo Piscitelli, Henry John Woodcock, Francesco Curcio, Alessandro Milita and Fabrizio Vanorio. He was accused of paying, in 2006, a €3 million bribe (of which €1 million was declared to the tax authorities and €2 million was paid "in black", undeclared) to Senator Sergio De Gregorio, the former leader of the Italians in the World party, to facilitate his defection to the ranks of the Berlusconi-led coalition House of Freedoms. Along with Berlusconi, the journalist Valter Lavitola, head of the socialist newspaper "L'Avanti!", was also investigated, and De Gregorio confessed to being the recipient of the bribe.
On 23 October 2013, Berlusconi and Valter Lavitola were both indicted by the judge for preliminary hearings, Amelia Primavera. For Senator De Gregorio the process had already been closed at a preliminary hearing, because he opted to confess voluntarily and plea-bargained a reduced sentence of 20 months in prison for the crime. The first-instance court hearing for the indicted Berlusconi was scheduled to start on 11 February 2014. During the court proceedings, an ex-senator (a former member of The Olive Tree party) also testified to having been offered a bribe from Berlusconi, through another ex-senator (a former member of the defunct Christian Democrats), to change political sides and join Silvio Berlusconi's centre-right bloc, so that together they could cause the fall of the Romano Prodi government of 2006–08. According to the prosecutors, Valter Lavitola was also working on behalf of Berlusconi as a go-between attempting to bribe other senators.
Berlusconi has repeatedly questioned the legitimacy of the law degree of Antonio Di Pietro, the former Operation "Clean Hands" magistrate and leader of the Italy of Values party, claiming during a 2008 election rally and in a March 2008 episode of the talk show Porta a Porta that Di Pietro had not obtained his degree by passing the exams, but with the aid of the secret services, in order to have a judge placed in the system to overturn the parties of the so-called First Republic. Di Pietro subsequently sued Berlusconi for aggravated defamation in June 2008. The public prosecutor concluded the preliminary investigation on 13 November 2009 by indicting Berlusconi for the defamation offence referred to in Article 595, paragraph 2, of the Criminal Code. The Italian Chamber of Deputies then intervened in the case by passing a decree on 22 September 2010 granting all Italian parliamentarians "absolute immunity" for words spoken while elected.
On 5 October 2010, the court in Viterbo ruled that Berlusconi could not be judged or punished, because of the parliamentary immunity enshrined in Article 68 of the Italian constitution, which forbids legal prosecution for words spoken by parliamentarians in the course of their "exercise of parliamentary duties", in conjunction with the Chamber of Deputies having recently voted for a decree granting Berlusconi absolute immunity for any words spoken while serving as a deputy. On 19 January 2012, this judgement was set aside by the Supreme Court, which ruled that Berlusconi had been speaking during a campaign rally and not in an institutional setting, meaning he was not covered by the immunity protection provided for by Article 68 and consequently should face a new trial, to be held either at the Viterbo court or the Constitutional Court.
On 10 January 2013, the Viterbo court decided to transfer the case for judgement directly to the Constitutional Court. The Constitutional Court ruled on 20 June 2013 that the Chamber of Deputies decree, by extending Berlusconi's immunity beyond what the constitution provided for, raised a conflict of powers and should be disregarded. This meant that Berlusconi did not enjoy any special immunity protection for words spoken during election campaigns, and that a court case would now be heard by the Constitutional Court to decide the merits of the case. Before the case against Berlusconi could begin, however, the Italian Chamber of Deputies was to be called for trial to defend and explain its reasons for passing the unconstitutional 2010 law. The court hearing against the Chamber of Deputies took place on 8 July 2014, at which the Court of Rome and the Viterbo court asked the Constitutional Court to deem the Chamber of Deputies decree unconstitutional and annul it. On 18 July 2014, the Constitutional Court indeed ruled the decree unconstitutional and annulled it, meaning that the civil court proceedings against Berlusconi could continue.
In February 2011, Berlusconi was charged with paying for sex with the nightclub dancer Karima El Mahroug (also known by the stage name "Ruby Rubacuori") between February and May 2010, when she was a year below the legal age of 18 for providing paid sexual services. He was also charged with abusing his political powers in an attempt to cover up the relationship, by trying to persuade the police to release the girl while she was under arrest for theft, based on the false claim that she was a relative of Hosni Mubarak.
The fast-track trial opened on 6 April and was adjourned until 31 May. El Mahroug's lawyer said that she would not be attaching herself to the case as a civil complainant, and that she denied ever making herself available for money. Another alleged victim, Giorgia Iafrate, also decided not to be a party to the case. In January 2013, judges rejected an application from Berlusconi's lawyers to have the trial adjourned so that it would not interfere with Italy's 2013 general election, in which Berlusconi was participating. On 24 June 2013, Berlusconi was found guilty of paying El Mahroug for sex when she was 17 years old, and of abusing his powers in an ensuing cover-up. He was sentenced by the Court of First Instance to seven years in jail, and banned from public office for life. In January 2014, Berlusconi filed an appeal against the judgment, requesting a full acquittal. The appeal process began on 20 June. On 18 July 2014, the Italian appeals court announced that the appeal had been successful and that the convictions against Berlusconi were overturned. According to the court's published summary of the judgement, Berlusconi was acquitted of the extortion charge (abuse of power) because "the fact does not exist" and of the child prostitution charge because "the fact is not a crime". The more detailed court reasoning for the acquittal was to be published within 90 days, and the prosecutor stated he would then most likely appeal the decision to the Court of Cassation.
In addition to the ongoing court trials, Berlusconi was also involved in the following two ongoing legal investigations, each of which would evolve into a court trial if the judge at the preliminary hearing indicted him for the alleged crime:
To date, Berlusconi has been convicted by the final appeal instance in only one of 32 court cases.
The Mediaset trial was launched in April 2005, with the indictment of fourteen people (including Berlusconi) for having committed:
(A) false accounting and embezzlement in order to mask payments of substantial "black funds", committed in 1988–94.
(B) tax fraud equal in total to more than €62 million (120bn lira), committed in 1988–98.
Both indictments concerned personal tax evasion carried out through the illicit trading of film rights between Mediaset and secret fictitious foreign companies situated in tax haven nations: the trades generated fictitious losses for Mediaset, while the gains accumulated in the foreign companies owned by the indicted tax fraudsters, who ultimately had the gains paid out as personal profit without paying tax in Italy. By 2007, the first-instance trial had not yet begun, and the prosecutors dropped the (A) charges against Berlusconi due to the statute of limitations; for the same reason the (B) charges were narrowed down to the 1994–98 period, in which the prosecutor charged Berlusconi with personal tax evasion of €7.3 million.
On 26 October 2012, Berlusconi was sentenced by an Italian court to four years in prison for tax evasion. The charges related to a scheme to purchase overseas film rights at inflated prices through offshore companies. The four-year term was longer than the three years and eight months the prosecutors had requested, but was shortened to one year under a 2006 amnesty law intended to reduce prison overcrowding. Berlusconi and his co-defendants were also ordered to pay a €10 million fine and were banned from holding public office for three years.
On 8 May 2013, the Court of Appeals in Milan confirmed the four-year prison sentence and extended the public office ban to five years. On 1 August 2013, the Court of Cassation (final appeal) confirmed the four-year sentence, of which the last three years were automatically pardoned. The decision marked the first time that Berlusconi received a definitive sentence, despite having been on trial nearly 30 times over the preceding 25 years. As for the exact length of the public office ban, the Court of Cassation asked the lower court to re-judge it, because prosecutors had presented new legal arguments for the ban to be reduced from five to three years. However, a new anti-corruption law passed in late 2012, referred to as the "Severino law", would bar Berlusconi from seeking elective office for six years, independently of the court's final ruling on the length of the public office ban. The ramification of the ban was that it made him ineligible to serve in any public office, but technically he would still be allowed, as a non-candidate, to continue leading his party and centre-right coalition in election campaigns. A similar situation had occurred in March 2013, when the leader of the Five Star Movement, Beppe Grillo, convicted over a road accident in 1988, led his party's 2013 election campaign despite being unable to run for public office because of a rule established within his movement.
Berlusconi, being over 70 years of age, would not be imprisoned directly, but could instead choose to serve his one-year jail term either under house arrest at one of his private residences or by doing community service. As the gross prison term exceeded two years, the "Severino law" required the Italian Senate to vote on whether Berlusconi should be forced to resign his Senate seat immediately, or whether the court-imposed ban on holding public office should take effect only at the end of his current legislative term. The pending Senate vote, combined with anger over Berlusconi's conviction (a poll indicated 42% of the public believed he had been unfairly persecuted by the magistrates), presented a serious political challenge for the fragile ruling coalition. On 3 August, Berlusconi suggested that unless a "solution" to his predicament could be found, Italy was at "risk of a civil war". The following day, thousands of supporters gathered in front of his house to protest against the ruling.
On 30 August, the Italian President Giorgio Napolitano announced that he had not selected Berlusconi as one of the four new lifetime senators, who are granted the privileges of being lawmakers for life with certain legal immunities, meaning they can continue working in politics even after being convicted of criminal offences that would otherwise lead to a ban from public office. A Senate committee was to begin deliberations on 9 September to decide whether Berlusconi's public office ban should start immediately or at the end of his current legislative term. Before the committee's decision became effective, it would also need to be approved by a vote of the full Senate.
The deliberations of the Senate committee were expected to last several weeks before a decision was reached. According to the "Severino law", enacted under the Monti government in December 2012, anyone sentenced to more than two years in prison is deemed ineligible to hold public office for a period of six years (or eight years if convicted of "abuse of power"), and should immediately be expelled from the parliament. Berlusconi argued that the "Severino law" could not be used to expel persons convicted of crimes committed before December 2012, and pleaded for the proceedings to be postponed until the European Court of Human Rights or Italy's Constitutional Court had ruled on whether his interpretation of the law was correct. Berlusconi also stated that he had in any case decided to appeal the court ruling against him to the European Court of Human Rights, as he still claimed the ruling itself to be a political and unjust attempt by his opponents to deprive him of his political power. Prime Minister Enrico Letta's centre-left Democratic Party, however, rejected Berlusconi's plea, accusing him of merely launching time-wasting manoeuvres. Berlusconi's PDL party then threatened to withdraw its support for the government if the Senate committee expelled Berlusconi as senator. The Democratic Party replied by warning the PDL that it would reject any blackmail attempts and would in any case vote in the Senate committee strictly according to Italian law. Ahead of the committee's vote, the leading criminal lawyer Paola Severino, who helped design the "Severino law", told the newspaper La Repubblica that, in her professional opinion, this specific law clearly also applied to crimes committed before its enactment in December 2012.
On 10 September, the second day of the Senate deliberations, the Democratic Party stated that it intended to vote down all three motions submitted by the PDL to delay the deliberations, and accused the PDL of obstructing the committee's work with delaying tactics. Renato Brunetta, floor leader of the PDL in the lower house, responded by saying "If the Democratic Party and Grillo's people decide this evening to vote against the proposals, the Democratic Party will bring down the Letta government". The second day's meeting ended with the PDL agreeing to drop its series of technical objections aimed at halting the hearings, on the agreement that each of the committee members could speak at greater length in a broad discussion on the merits of the case. On 18 September, Berlusconi made a nationally televised speech in which he pledged to stay on as party leader of a revived Forza Italia, whether or not the Senate decided to expel him. On 25 September, the PDL parliamentary group agreed on a resolution threatening that, if Berlusconi were expelled, all PDL parliamentarians would immediately "reflect on and decide according to his or her conscience" whether to show sympathy with Berlusconi by resigning their own Senate seats. The Senate committee nevertheless voted 15:8 on 4 October in support of a recommendation to expel Berlusconi, and ten days later submitted a final report on the case so that it could be scheduled for a final vote in the full Senate by early November. The Rules of Procedure Committee decided at its meeting on 30 October, by a vote of 7:6, that Berlusconi's expulsion vote would be conducted not as a secret ballot but as an open public vote. On 27 November 2013, the Senate voted 192:113 to enforce Berlusconi's immediate expulsion and a six-year ban from serving in any legislative office.
Berlusconi was expected to start serving his four-year prison sentence (reduced to one year), either under house arrest or doing unpaid social community service, in mid-October 2013. In mid-October he informed the court that he preferred to serve the sentence by doing community service. Because of bureaucracy in the court system, however, his year of full-time community service was expected to start only around April 2014. On 19 October, the Milan appeal court ruled that Berlusconi's public office ban should be reduced from five to two years, a ruling later confirmed by the Court of Cassation. This court-imposed ban, however, did not change the fact that under the "Severino law" Berlusconi was banned from running as a candidate in legislative elections for a longer six-year period, which effectively superseded the shorter court-imposed ban. Berlusconi began his community service at a Catholic care home on 9 May 2014, where he was required to work four hours a week for a year with elderly dementia patients.
As of 2017, Berlusconi's appeal regarding his six-year public office ban was pending before the European Court of Human Rights.
Berlusconi was involved in many controversies and over 20 court cases during his political career, including being sentenced by the Court of Appeals on 8 May 2013 to four years' imprisonment and a five-year ban from public office for €7 million in tax evasion (and a €280 million slush fund), confirmed by the Court of Cassation on 1 August 2013. Due to a general pardon, his imprisonment was reduced to one year, which, due to his age, could be served either as house arrest at his private residence or as community service.
On 24 June 2013, Berlusconi was found guilty of paying an underage prostitute for sex, and of abusing his powers in an ensuing cover-up. He was sentenced to seven years in jail and banned from public office for life. He was acquitted of the charges by the Italian appeals court on 18 July 2014.
According to the journalists Marco Travaglio and Enzo Biagi, Berlusconi entered politics to save his companies from bankruptcy and himself from convictions, and had said so clearly to his associates from the very beginning. Berlusconi's supporters, by contrast, hailed him as the "novus homo", an outsider who was going to bring a new efficiency to the public bureaucracy and reform the state from top to bottom.
Berlusconi was the subject of forty different inquests in less than two years.
Berlusconi's governments passed laws that shortened the statute of limitations for tax fraud. Romano Prodi, who defeated Berlusconi in 2006, claimed that these were "ad personam" laws, meant to solve Berlusconi's problems and defend his interests.
Berlusconi's extensive control over the media has been widely criticised by analysts, press freedom organisations and, extensively, by several Italian newspapers, national and private TV channels, opposition leaders and members of opposition parties in general, who allege that Italy's media has limited freedom of expression; the breadth of that coverage, however, in practice called the complaint's own premise into question. The "Freedom of the Press 2004 Global Survey", an annual study issued by the American organisation Freedom House, downgraded Italy's ranking from "Free" to "Partly Free" due to Berlusconi's influence over RAI, a ranking which, in Western Europe, was shared only with Turkey. Reporters Without Borders stated that in 2004, "The conflict of interests involving Prime Minister Berlusconi and his vast media empire was still not resolved and continued to threaten news diversity". In April 2004, the International Federation of Journalists joined the criticism, objecting to the passage of a law vetoed by Carlo Azeglio Ciampi in 2003, which critics believe was designed to protect Berlusconi's reported 90% control of the Italian national media.
Through Mediaset, Berlusconi owns three of Italy's seven national TV channels: Canale 5, Italia 1, and Rete 4. In 2002, Luciano Violante, a prominent member of the Left, said in a speech in Parliament: "Honourable Anedda, I invite you to ask the honourable Berlusconi, because he certainly knows that he received a full guarantee in 1994, when the government changed—that TV stations would not be touched. He knows it and the Honourable Letta knows it." The authors of the book "Inciucio" cite this sentence as evidence that the Left made a deal with Berlusconi in 1994, promising not to enforce a Constitutional Court of Italy ruling that would have required Berlusconi to give up one of his three TV channels in order to uphold pluralism and competition. According to the authors, this would explain why the Left, despite having won the 1996 elections, did not pass a law to resolve the conflicts of interest between media ownership and politics.
Berlusconi's influence over RAI became evident when, in Sofia, Bulgaria, he expressed his views on the journalists Enzo Biagi and Michele Santoro, and the comedian Daniele Luttazzi, saying that they "use television as a criminal means of communication". They lost their jobs as a result. Critics dubbed this statement the "Editto Bulgaro" (Bulgarian Edict).
The TV broadcasting of a satirical programme called "RAIot" was censored in November 2003 after the comedian Sabina Guzzanti made outspoken criticism of the Berlusconi media empire. Mediaset, one of Berlusconi's companies, sued RAI over Guzzanti's program, demanding 20 million euros for "damages"; in November 2003 the show was cancelled by the president of RAI, Lucia Annunziata. The details of the event were made into a Michael Moore-style documentary called "Viva Zapatero!", which was produced by Guzzanti.
Mediaset, Berlusconi's television group, has stated that it uses the same criteria as the public (state-owned) broadcaster RAI in assigning proper visibility to all the most important political parties and movements (the so-called "par condicio"), a claim that has since often been disproved. In March 2006, during an interview with Lucia Annunziata on her Rai Tre talk show "In 1/2 h", Berlusconi stormed out of the studio over a disagreement with the host regarding the economic consequences of his government. In November 2007, allegations of news manipulation caused the departure from RAI of Berlusconi's personal assistant.
Enrico Mentana, the news anchor long seen as a guarantor of Canale 5's independence, walked out in April 2008, saying that he no longer felt "at home in a group that seems like an electoral campaign committee".
On 24 June 2009, during the Confindustria young members' congress in Santa Margherita Ligure, Berlusconi invited advertisers to cancel or boycott their advertising contracts with the magazines and newspapers published by Gruppo Editoriale L'Espresso, in particular "la Repubblica" and the newsmagazine "L'espresso", calling the publishing group "shameless", claiming that it was fuelling the economic crisis by discussing it extensively, and accusing it of mounting a "subversive attack" against him. The publishing group announced it would begin legal proceedings against Berlusconi, given the "criminal and civil relevance" of his remarks.
On 12 October 2009, during the Confindustria Monza and Brianza members' congress, Berlusconi again invited the industrialists present to join a "widespread rebellion" against a "newspaper that hadn't any limits in discrediting the government and the country and indoctrinating foreign newspapers".
In October 2009, the secretary-general of Reporters Without Borders declared that Berlusconi "is on the verge of being added to our list of Predators of Press Freedom", which would be a first for a European leader. He also added that Italy would probably be ranked last in the European Union in the upcoming edition of the RWB press freedom index.
One of Berlusconi's strongest critics in the media outside Italy is the British weekly "The Economist" (nicknamed "The Ecommunist" by Berlusconi), which in its issue of 26 April 2001 carried the front-cover headline "Why Silvio Berlusconi is unfit to lead Italy". The war of words between Berlusconi and "The Economist" has gained notoriety, with Berlusconi taking the publication to court in Rome and "The Economist" publishing letters against him. The magazine claimed that the documentation contained in its article proved Berlusconi "unfit" for office because of his numerous conflicts of interest. Berlusconi claimed the article contained "a series of old accusations" that was an "insult to truth and intelligence".
According to "The Economist" findings, Berlusconi, while Prime Minister of Italy, retained in effective control of 90% of all national television broadcasting. This figure included stations he owns directly as well as those over which he had indirect control by dint of his position as Prime Minister and his ability to influence the choice of the management bodies of these stations. "The Economist" has also claimed that the Italian Prime Minister is corrupt and self-serving. A key journalist for "The Economist", David Lane, has set out many of these charges in his book "Berlusconi's Shadow".
Lane points out that Berlusconi has not defended himself in court against the main charges, but has relied upon political and legal manipulations, most notably by changing the statute of limitations to prevent charges from being completed in the first place. In order to publicly prove the truth of the documented accusations contained in its articles, the magazine publicly challenged Berlusconi to sue "The Economist" for libel. Berlusconi did so and lost: on 5 September 2008 the Court in Milan issued a judgment rejecting all his claims, charging him with the trial costs and sentencing him to compensate "The Economist" for its legal expenses.
In June 2011, "The Economist" published a strong article dealing with Mr. Berlusconi, titled "The man who screwed an entire country".
On some occasions, laws passed by the Berlusconi administration have effectively delayed ongoing trials involving him: for example, the law reducing the punishment for all cases of false accounting, and the law on "legitimate suspicion", which allowed defendants to request that their cases be moved to another court if they believed the local judges were biased against them. Because of these legislative actions, political opponents accuse Berlusconi of passing such laws for the purpose of protecting himself from legal charges. "La Repubblica", for example, maintained that Berlusconi passed 17 different laws that advantaged him. Berlusconi and his allies, on the other hand, maintain that such laws are consistent with everyone's right to a rapid and just trial and with the principle of "presumption of innocence" ("garantismo"); furthermore, they claim that Berlusconi is being subjected to a political "witch hunt", orchestrated by certain (allegedly left-wing) judges.
Berlusconi and his government often quarrelled with the Italian judiciary. His administration attempted to pass a judicial reform intended to limit the flexibility of judges and magistrates in their decision-making; critics said it would instead limit the magistracy's independence by "de facto" subjecting the judiciary to the executive's control. The reform was met by almost unanimous dissent from Italian judges, but was passed by the Italian parliament in December 2004, only to be vetoed by the Italian President, Carlo Azeglio Ciampi.
During the night between 5 and 6 March 2010, the Berlusconi-led Italian government passed a decree "interpreting" the electoral law to let the PDL candidate run for governor in Lazio after she had failed to register properly for the elections. The Italian Constitution states that electoral procedures can only be changed in Parliament, and must not be changed by governmental decree. Italy's President, whose endorsement of the decree was required by law, said that the measure taken by the government may not violate the Constitution.
Berlusconi has never been tried on charges relating to the Cosa Nostra, although several Mafia turncoats have stated that Berlusconi had connections with the Sicilian criminal association. The claims arise mostly from his hiring of Vittorio Mangano, who was accused of being a "mafioso", as a gardener and stable-man at Berlusconi's Villa San Martino in Arcore, a small town near Milan. It was Berlusconi's friend Marcello Dell'Utri who introduced Mangano to Berlusconi in 1973. Berlusconi denied any ties to the Mafia; Marcello Dell'Utri even stated that the Mafia did not exist at all.
In 2004, Dell'Utri, co-founder of Forza Italia, was sentenced to nine years by a Palermo court on a charge of "external association to the Mafia", with the sentence describing Dell'Utri as a mediator between the economic interests of Berlusconi and members of the criminal organisation. Berlusconi refused to comment on the sentence. In 2010, Palermo's appeals court cut the sentence to seven years but fully confirmed Dell'Utri's role as a link between Berlusconi and the Mafia until 1992.
In 1996, a Mafia informer, Salvatore Cancemi, declared that Berlusconi and Dell'Utri had been in direct contact with Salvatore Riina, head of the Sicilian Mafia in the 1980s and 1990s. Cancemi disclosed that Fininvest, through Marcello Dell'Utri and the mafioso Vittorio Mangano, had paid Cosa Nostra 200 million lire (between €100,000 and €200,000 in today's money) annually. The alleged contacts, according to Cancemi, were to lead to legislation favourable to Cosa Nostra, in particular the reform of the harsh 41-bis prison regime; the underlying premise was that Cosa Nostra would support Berlusconi's Forza Italia party in return for political favours. After a two-year investigation, magistrates closed the inquiry without charges, having found no evidence to corroborate Cancemi's allegations. Similarly, a two-year investigation into Berlusconi's alleged association with the Mafia, also launched on evidence from Cancemi, was closed in 1996.
According to yet another Mafia turncoat, Antonino Giuffrè – arrested on 16 April 2002 – the Mafia turned to Berlusconi's Forza Italia party to look after its interests after the early-1990s decline of the ruling Christian Democratic party, whose leaders in Sicily had looked after the Mafia's interests in Rome. The Mafia's falling-out with the Christian Democrats became clear when Salvo Lima was killed in March 1992. "The Lima murder marked the end of an era," Giuffrè told the court. "A new era opened with a new political force on the horizon which provided the guarantees that the Christian Democrats were no longer able to deliver. To be clear, that party was Forza Italia." Dell'Utri was the go-between on a range of legislative efforts to ease pressure on mafiosi in exchange for electoral support, according to Giuffrè. "Dell'Utri was very close to Cosa Nostra and a very good contact point for Berlusconi," he said. Mafia boss Bernardo Provenzano told Giuffrè that they "were in good hands" with Dell'Utri, who was a "serious and trustworthy person". Provenzano stated that the Mafia's judicial problems would be resolved within 10 years of 1992, thanks to the undertakings given by Forza Italia.
Giuffrè also said that Berlusconi himself had been in touch with Stefano Bontade, a top Mafia boss, in the mid-1970s. At the time Berlusconi was still just a wealthy real estate developer who was starting to build his private television empire. Bontade visited Berlusconi's villa in Arcore through his contact Vittorio Mangano. Berlusconi's lawyer dismissed Giuffrè's testimony as "false" and an attempt to discredit the Prime Minister and his party. Giuffrè said that other Mafia representatives in contact with Berlusconi included the Palermo Mafia bosses Filippo Graviano and Giuseppe Graviano, who allegedly dealt directly with Berlusconi through the businessman Gianni Letta around September–October 1993. The alleged pact with the Mafia fell apart in 2002; Cosa Nostra had achieved nothing.
Dell'Utri's lawyer, Enrico Trantino, dismissed Giuffrè's allegations as an "anthology of hearsay". He said Giuffrè had perpetuated the trend that every new turncoat would attack Dell'Utri and the former Christian Democrat prime minister Giulio Andreotti in order to earn money and judicial privileges.
In October 2009, Gaspare Spatuzza, a Mafioso who turned state's witness in 2008, confirmed Giuffrè's statements. Spatuzza testified that his boss Giuseppe Graviano had told him in 1994 that Berlusconi was bargaining with the Mafia over a political-electoral agreement between Cosa Nostra and Berlusconi's Forza Italia. Spatuzza said Graviano disclosed the information to him during a conversation in a bar Graviano owned in the upscale Via Veneto district of Rome, the Italian capital. Dell'Utri was the intermediary, according to Spatuzza. Dell'Utri dismissed Spatuzza's allegations as "nonsense", and Berlusconi's lawyer and MP for the PdL, Niccolò Ghedini, said that "the statements given by Spatuzza about prime minister Berlusconi are baseless and can be in no way verified."
After the 11 September 2001 attacks in New York, Berlusconi said: "We must be aware of the superiority of our civilisation, a system that has guaranteed well-being, respect for human rights and – in contrast with Islamic countries – respect for religious and political rights, a system that has as its value understanding of diversity and tolerance." This declaration caused an uproar, not only in the Arab and Muslim world, but also all around Europe, including Italy. Subsequently, Berlusconi told the press: "We are aware of the crucial role of moderate Arab countries... I am sorry that words that have been misunderstood have offended the sensitivity of my Arab and Muslim friends."
After the family of Eluana Englaro (who had been comatose for 17 years) succeeded in having her right to die recognized by the judges and got doctors to begin the court-established process of allowing her to die, Berlusconi issued a decree to stop the doctors from letting her die, stating: "This is murder. I would be failing to rescue her. I'm not a Pontius Pilate". Berlusconi went on to defend his decision by claiming that she was "in the condition to have babies", arguing that comatose women were still subject to menstruation.
During his long career as Prime Minister, Berlusconi had to deal with massive immigration from the coast of North Africa. To limit illegal immigration, Berlusconi's government promulgated the "Bossi-Fini law" in 2002. The law took its name from the leaders of the two right-wing allied parties in Berlusconi's government coalition, Umberto Bossi of Lega Nord and Gianfranco Fini of National Alliance.
The law provides for expulsion orders, issued by the Prefect of the province where an illegal foreign immigrant is found, to be carried out immediately with police assistance at the border. Illegal immigrants without valid identity documents are taken to detention centres, set up by the earlier "Turco-Napolitano law", in order to be identified. The law provides for the issuance of residence permits to persons who can prove they have a job sufficient to support themselves; to this general rule are added special residence permits and those issued under the right to asylum.
The law also allows repatriation to the country of origin on the high seas, on the basis of bilateral agreements between Italy and neighbouring countries that commit the police forces of the respective countries to cooperate in the prevention of illegal immigration. If ships carrying illegal immigrants dock on Italian soil, the marine police force undertakes the identification of those entitled to political asylum and the provision of medical treatment and care. The law was severely criticised by the centre-left opposition.
In 2013, the European Parliament asked Italy to modify the "Bossi-Fini law" because it was too restrictive and severe.
Berlusconi has developed a reputation for making gaffes or insensitive remarks.
On 2 July 2003, Berlusconi suggested that the German Social Democratic MEP Martin Schulz, who had criticised his domestic policies, should play a Nazi concentration camp guard in a film. Berlusconi insisted that he was joking, but accused Schulz and others of being "bad-willing tourists of democracy". The incident caused a brief cooling of Italy's relationship with Germany.
Addressing traders at the New York Stock Exchange in September 2003, Berlusconi listed a series of reasons to invest in Italy, the first of which was that "we have the most beautiful secretaries in the world". This remark resulted in remonstration among female members of parliament, who took part in a one-day cross-party protest. Berlusconi's list also included the claim that Italy had "fewer communists, and those who are still here deny having been one".
In 2003, during an interview with Nicholas Farrell, then editor of "The Spectator", Berlusconi claimed that Mussolini "had been a benign dictator who did not murder opponents but sent them 'on holiday'". In 2013, he returned to calling Mussolini a good leader whose biggest mistake was signing up to exterminate the Jews.
Berlusconi had made disparaging remarks about Finnish cuisine during negotiations to decide on the location of the European Food Safety Authority in 2001. He caused further offence in 2005, when he claimed that during the negotiations he had had to "dust off his playboy charms" in order to persuade the Finnish president, Tarja Halonen, to concede that the EFSA should be based in Parma instead of Finland, and compared Finnish smoked reindeer unfavourably to culatello. The Italian ambassador in Helsinki was summoned by the Finnish foreign minister. One of Berlusconi's ministers later 'explained' the comment by saying that "anyone who had seen a picture of Halonen must have been aware that he had been joking". Halonen took the incident in good humour, retorting that Berlusconi had "overestimated his persuasion skills". The Finnish pizza chain Kotipizza responded by launching a variety of pizza called "Pizza Berlusconi", using smoked reindeer as the topping. The pizza won first prize in America's Plate International pizza contest in March 2008.
In March 2006, Berlusconi alleged that Chinese communists under Mao Zedong had "boiled [children] to fertilise the fields". His opponent Romano Prodi criticised Berlusconi for offending the Chinese people and called his comments 'unthinkable'.
In the run-up to the 2008 Italian general election, Berlusconi was accused of sexism for saying that female politicians from the right were "more beautiful" and that "the left has no taste, even when it comes to women". In 2008 Berlusconi criticised the composition of the Council of Ministers of the Spanish Government as being too 'pink' by virtue of the fact that it had (once the President of the Council, José Luis Rodríguez Zapatero, is counted) an equal number of men and women. He also stated that he doubted that such a composition would be possible in Italy given the "prevalence of men" in Italian politics.
Also in 2008, Berlusconi caused controversy at a joint press conference with Russian president Vladimir Putin. When a journalist from the Russian paper Nezavisimaya Gazeta asked a question about Mr. Putin's personal relationships, Berlusconi made a gesture towards the journalist imitating a gunman shooting.
On 6 November 2008, two days after Barack Obama was elected the first black US President, Berlusconi referred to Obama as "young, handsome and even tanned". On 26 March 2009 he said: "I'm paler [than Mr. Obama], because it's been so long since I went sunbathing. He's more handsome, younger and taller."
On 24 January 2009, Berlusconi announced his aim to increase the number of soldiers patrolling Italian cities from 3,000 to 30,000 in order to crack down on what he called an "evil army" of criminals. Responding to a female journalist who asked whether this tenfold increase in patrolling soldiers would be enough to protect Italian women from rape, he said: "We could not field a big enough force to avoid this risk [of rape]. We would need as many soldiers as beautiful women and I don't think that would be possible, because our women are so beautiful." Opposition leaders called the remarks insensitive and in bad taste; Berlusconi retorted that he had merely wanted to compliment Italian women. Other critics accused him of creating a police state.
Two days after the 2009 L'Aquila earthquake, Berlusconi suggested that people left homeless should view their experience as a camping weekend.
Subsequently, at a tent camp on the outskirts of L'Aquila housing some of the more than 30,000 people who had lost their homes in the earthquake, he said to an African priest: "you have a nice tan."
In October 2010, Berlusconi was chastised by the Vatican newspaper "L'Osservatore Romano" after he was filmed telling "offensive and deplorable jokes", including one whose punchline was similar to one of the gravest blasphemies in the Italian language. It was also revealed he had made another antisemitic joke a few days previously. Berlusconi responded to the allegations by saying the jokes were "neither an offence nor a sin, but merely a laugh".
On 1 November 2010, after once again being accused of involvement in juvenile prostitution, he suggested that an audience at the Milan trade fair should stop reading newspapers: "Don't read newspapers any more because they deceive you. [...] I am a man who works hard all day long and if sometimes I look at some good-looking girl, it's better to be fond of pretty girls than to be gay". The remarks were immediately condemned by Arcigay, Italy's main gay rights organisation.
On 13 July 2011, according to a leaked telephone surveillance transcript, Berlusconi told his presumed blackmailer Valter Lavitola: "The only thing they can say about me is that I screw around [...] Now they're spying on me, controlling my phone calls. I don't give a fuck. In a few months [...] I'll be leaving this shit country that makes me sick."
On 27 January 2013, on the occasion of Holocaust Remembrance Day, Berlusconi said that the Italian fascist dictator Benito Mussolini had, apart from passing the anti-Jewish laws of 1938, done only "good things" for Italy. He also said that, from a strategic point of view, Mussolini had been right to side with Adolf Hitler during World War II, because at the time the alliance was made Hitler appeared to be winning the war.
Berlusconi's career as an entrepreneur is also often questioned by his detractors. The allegations made against him generally include suspicions about the extremely rapid increase of his activity in the construction industry in the years 1961–63, hinting at the possibility that in those years he received money from unknown and possibly illegal sources. These accusations are regarded by Berlusconi and his supporters as empty slander intended to undermine Berlusconi's reputation as a self-made man. Also frequently cited by opponents are events dating to the 1980s, including supposed "exchanges of favours" between Berlusconi and Bettino Craxi, the former Socialist prime minister and leader of the Italian Socialist Party convicted in 1994 on various corruption charges. The Milan magistrates who indicted and successfully convicted Craxi in their "Clean Hands" investigation laid bare an entrenched system in which businessmen paid hundreds of millions of dollars to political parties or individual politicians in exchange for sweetheart deals with Italian state companies and the government itself. Berlusconi acknowledges a personal friendship with Craxi.
On 28 May 2013, Berlusconi and his entourage launched an online initiative to recruit volunteers willing to defend Berlusconi against the prosecutions brought by Milan's magistrates handling his trials, whom Berlusconi often accused of being communists and anti-democratic.
Simone Furlan, the creator of the Freedom Army, said in an interview: "There comes a time in life when you realize that fighting for an ideal is no longer a choice but an obligation. We in civil society were helpless spectators of the 'War of the Twenty Years', which saw Berlusconi fighting and defending himself against slanderous accusations of all kinds, the result of a judicial persecution without precedent in history".
This initiative, launched as the "Freedom Army", was immediately nicknamed "Silvio's Army" by the media and was severely condemned by the Democratic Party, the Five Star Movement and Left Ecology Freedom.
In December 2007, the audio recording of a phone call between Berlusconi, then leader of the opposition, and Agostino Saccà (general director of RAI) was published by the magazine "L'espresso" and caused a scandal in the media.
The wiretap was part of an investigation by the Public Prosecutor Office of Naples, where Berlusconi was investigated for corruption.
In the phone call, Saccà expresses impassioned political support for Berlusconi and criticises the behaviour of Berlusconi's allies. Berlusconi urges Saccà to broadcast a telefilm series strongly advocated by his ally Umberto Bossi. Saccà laments that many people have spread rumours about this agreement, causing problems for him. Berlusconi then asks Saccà to find a job in RAI for a young woman, explicitly telling him that she would serve as an asset in a secret exchange with a senator of the majority who would help him bring down Prodi's administration. After the publication of these wiretaps, Berlusconi was accused by other politicians and by some journalists of political corruption through the exploitation of prostitution. Berlusconi said, in his own defence: "In the entertainment world everybody knows that, in certain situations in RAI TV you work only if you prostitute yourself or if you are leftist. I have intervened on behalf of some personalities who are not leftists and have been completely set apart by RAI TV." In the US State Department's 2011 Trafficking in Persons report, authorized by Secretary of State Hillary Clinton, Berlusconi was explicitly named as a person involved in the "commercial sexual exploitation of a Moroccan child".
At the end of April 2009, Berlusconi's wife Veronica Lario, who would divorce him several years later, wrote an open letter expressing her anger at Berlusconi's choice of young, attractive female candidates—some with little or no political experience—to represent the party in the 2009 European Parliament elections. Berlusconi demanded a public apology, claiming that for the third time his wife had "done this to me in the middle of an election campaign", and stated that there was little prospect of his marriage continuing. On 3 May, Lario announced she was filing for divorce. She claimed that Berlusconi had not attended his own sons' 18th birthday parties, and that she "cannot remain with a man who consorts with minors" and "is not well".
Noemi Letizia, the girl in question, gave interviews to the Italian press, revealing that she calls Berlusconi "papi" ("daddy"), that they often spent time together in the past, and that Berlusconi would take care of her career as showgirl or politician, whichever she opted to pursue. Berlusconi claimed that he knew Letizia only through her father and that he had never met her alone, without her parents.
On 14 May, "la Repubblica" published an article alleging many inconsistencies in Berlusconi's story and asked him to answer ten questions to clarify the situation.
Ten days later, Letizia's ex-boyfriend, Luigi Flaminio, claimed that Berlusconi had contacted Letizia personally in October 2008 and said she had spent a week without her parents at Berlusconi's Sardinian villa around New Year's Eve 2009, a fact confirmed later by her mother. On 28 May 2009, Berlusconi said that he had never had "spicy" relations with Letizia, and said that if any such thing had occurred, he would have resigned immediately.
On 17 June 2009, Patrizia D'Addario, a 42-year-old escort and retired actress from Bari, Italy, claimed that she had been recruited twice to spend the evening with Berlusconi. Berlusconi denied any knowledge of D'Addario being a paid escort: "I have never paid a woman... I have never understood what satisfaction there is if the pleasure of conquest is absent". He also accused an unspecified person of manoeuvring and bribing D'Addario.
On 26 June 2009, the "ten questions" to Berlusconi were reformulated by "la Repubblica" newspaper, and subsequently republished multiple times. On 28 August 2009, Berlusconi sued Gruppo Editoriale L'Espresso, the owner company of the newspaper, and classified the ten questions as "defamatory" and "rhetorical".
Berlusconi's lifestyle has raised eyebrows in Catholic circles, with vigorous criticism expressed in particular by the newspaper "Avvenire", owned by the Conferenza Episcopale Italiana (Conference of Italian Bishops). This was followed by the publication in the newspaper il Giornale (owned by the Berlusconi family) of details of legal proceedings against the editor of "Avvenire", Dino Boffo, which seemed to implicate him in a harassment case against the wife of his ex-partner. Boffo has always declared the details of the proceedings to be false, although he has not denied the basic premise.
After a period of tense exchanges and polemics, on 3 September 2009, Boffo resigned from his editorial position and the assistant editor Marco Tarquinio became editor "ad interim".
On 22 September 2009, after a press conference, Berlusconi declared that he had asked his ministers not to respond any more to questions regarding "gossip". He also stated that the Italian press should talk only about the "successes" of the Italian Government in internal and foreign policy, and that the press would now be able to ask only questions relating to his administration, not to gossip.
During a contested episode of "AnnoZero" on 1 October 2009, the journalist and presenter Michele Santoro interviewed Patrizia D'Addario. She stated she was contacted by Giampaolo Tarantini – a businessman from Bari – who already knew her and requested her presence at Palazzo Grazioli with "the President". D'Addario also stated that Berlusconi knew that she was a paid escort.
In November 2010, the 17-year-old Moroccan belly dancer and alleged prostitute Karima El Mahroug (better known as "Ruby Rubacuori") claimed to have been given $10,000 by Berlusconi at parties at his private villas. She told prosecutors in Milan that these events were like orgies, in which Berlusconi and 20 young women performed an African-style ritual known as the "bunga bunga" in the nude.
It also emerged that, on 27 May 2010, El Mahroug had been arrested for theft by the Milan police but, being still a minor, was directed to a shelter for juvenile offenders. After a couple of hours, while she was being questioned, Berlusconi, who was at the time in Paris, called the head of the police in Milan and pressed for her release, claiming that the girl was related to Hosni Mubarak, then President of Egypt, and that, to avoid a diplomatic crisis, she should be placed in the custody of Nicole Minetti. Following repeated telephone calls by Berlusconi to the police authorities, El Mahroug was eventually released and entrusted to Minetti's care.
The investigation of Berlusconi for extortion ("concussione") and child prostitution regarding Karima El Mahroug has been referred to as "Rubygate".
MP Gaetano Pecorella proposed to lower the age of majority in Italy to solve the case. Minetti was known for previous associations with Berlusconi, having danced for "Colorado Cafe", a show on one of Berlusconi's TV channels, and on "Scorie", an Italian version of Candid Camera. In November 2009 she became a dental hygienist, and shortly afterward treated Berlusconi for two broken teeth and facial injuries after he was attacked with a marble statue at a political rally. In February 2010, she was selected as one of the candidates representing Berlusconi's The People of Freedom party, despite her lack of any political experience, and was seated on the Regional Council of Lombardy the following month.
"The Guardian" reported that according to a series of media reports in October 2010, Berlusconi had met El Mahroug, then 17, through Nicole Minetti. Mahroug insisted that she had not slept with the then 74-year-old prime minister. She told Italian newspapers that she merely attended dinner at his mansion near Milan. El Mahroug said she sat next to Berlusconi, who later took her upstairs and gave her an envelope containing €7,000. She said he also gave her jewellery.
Berlusconi came under fire for reportedly spending $1.8 million in state funds from RAI Cinema to further the career of a largely unknown Bulgarian actress, Michelle Bonev. The fact that this coincided with severe cuts being made to the country's arts budget provoked a strong reaction from the public.
In January 2011, Berlusconi was placed under criminal investigation relating to El Mahroug for allegedly having sex with an underage prostitute and for abuse of office relating to her release from detention. Berlusconi's lawyers were quick to deny the allegations as "absurd and without foundation" and called the investigation a "serious interference with the private life of the prime minister without precedent in the judicial history of the country".
On 15 February 2011, a judge ordered Berlusconi to stand trial on charges carrying up to 15 years in prison. Paying for sex with a minor is punished in Italy with between six months and three years of imprisonment, while the crime of malfeasance in office (It: "concussione") is punished more severely, with between four and twelve years of imprisonment, as it is considered a type of extortion committed by a public officer.
The fast-track trial opened on 6 April and was adjourned until 31 May. El Mahroug's lawyer said that Mahroug would not be attaching herself to the case as a civil complainant and denied that she had ever made herself available for money. Another alleged victim, Giorgia Iafrate, also decided not to be a party to the case. In January 2013, judges rejected an application from Berlusconi's lawyers to have the trial adjourned so that it would not interfere with Italy's 2013 general election, in which Berlusconi participated.
On 24 June 2013, Berlusconi was found guilty of paying for sex with an underage prostitute and of abusing his office. He was sentenced to seven years in prison, one more year than had been requested by the prosecution, and banned from public office for life. In the trial, the prosecution claimed that Berlusconi had paid over 4.5 million euros in total for El Mahroug's services. Berlusconi appealed the sentence and his conviction was quashed a year later, on 18 July 2014.
On 1 March 2019, the Moroccan model Imane Fadil, who had been one of the main witnesses in the trial, died in unexplained circumstances, allegedly from radioactive poisoning.
In April 2016 the Panama Papers scandal broke: a leak of 11.5 million confidential documents providing detailed information about more than 214,000 offshore companies listed by the Panamanian corporate service provider Mossack Fonseca, including the identities of shareholders and directors of the companies. The documents showed how wealthy individuals, including public officials, hid their assets from public scrutiny. Silvio Berlusconi was cited in the list, along with his long-time partner at A.C. Milan, Adriano Galliani.
On 13 December 2009, Berlusconi was hit in the face with an alabaster statuette of Milan Cathedral after a rally in Milan's "Piazza del Duomo". As Berlusconi was shaking hands with the public, a man in the crowd stepped forward and hurled the statuette at him. The assailant was subsequently detained and identified as Massimo Tartaglia, a 42-year-old surveyor with a history of mental illness but no criminal record, living on the outskirts of Milan. According to a letter released to the Italian news agency ANSA, Tartaglia apologised for the attack, writing: "I don't recognise myself", and adding that he had "acted alone [with no] form of militancy or political affiliation". Berlusconi suffered facial injuries, a broken nose and two broken teeth, and was subsequently hospitalised. Italian president Giorgio Napolitano and politicians from all parties in Italy condemned the attack.
On the night of 15–16 December, a 26-year-old man was stopped by police and Berlusconi's bodyguards while trying to gain access to Berlusconi's hospital room. A search revealed that he carried no weapons, although three hockey sticks and two knives were later found in his car. The suspect was known to have a history of mental illness and of mandatory treatment in mental institutions.
Berlusconi was discharged from the hospital on 17 December 2009.
On 7 June 2016, after the campaign for the local elections, Berlusconi was hospitalized at the San Raffaele Hospital in Milan because of heart problems. Two days later, on 9 June, his personal doctor Alberto Zangrillo announced that the condition could have killed him and that he would need heart surgery to replace a defective aortic valve.
In 2012, "Forbes" magazine reported that Berlusconi was Italy's sixth richest man, with a net worth of $5.9 billion. He holds significant assets in television, newspaper, publishing, cinema, finance, banking, insurance, and sports.
Berlusconi's main company, Mediaset, operates three national television channels, which in total cover half of the national television sector, and "Publitalia", the leading Italian advertising and publicity agency. Berlusconi also owns a controlling stake in Arnoldo Mondadori Editore, the largest Italian publishing house, whose publications include "Panorama", one of the country's most popular news magazines. His brother, Paolo Berlusconi, owns and operates "il Giornale", a centre-right newspaper which provides a pro-Berlusconi slant on Italian politics. "Il Foglio", one of the most influential Italian right-wing newspapers, is partially owned by his former wife, Veronica Lario. After Lario sold some of her ownership in 2010, Paolo Berlusconi acquired a majority interest in the newspaper. He founded and is the major shareholder of Fininvest, which is among the largest private companies in Italy; it operates in media and finance. With Ennio Doris he founded Mediolanum, one of the country's biggest banking and insurance groups. He also has interests in cinema and home video distribution (Medusa Film and Penta Film). He owned the football club A.C. Milan from 1986 to 2017, and has owned A.C. Monza since 2018. | https://en.wikipedia.org/wiki?curid=26909 |
Sprung rhythm
Sprung rhythm is a poetic rhythm designed to imitate the rhythm of natural speech. It is constructed from feet in which the first syllable is stressed and may be followed by a variable number of unstressed syllables. The British poet Gerard Manley Hopkins said he discovered this previously unnamed poetic rhythm in the natural patterns of English in folk songs, spoken poetry, Shakespeare, Milton, et al. He used diacritical marks on syllables to indicate which should be stressed in cases "where the reader might be in doubt which syllable should have the stress" (acute, e.g. shéer) and which syllables should be pronounced but not stressed (grave, e.g., gleanèd).
Some critics believe he merely coined a name for poems with mixed, irregular feet, like free verse. However, while sprung rhythm allows for an indeterminate number of syllables to a foot, Hopkins was very careful to keep the number of feet per line consistent across each individual work, a trait that free verse does not share. Sprung rhythm may be classed as a form of accentual verse, as it is stress-timed, rather than syllable-timed, and while sprung rhythm did not become a popular literary form, Hopkins's advocacy did assist in a revival of accentual verse more generally.
The Windhover
"To Christ our Lord"
I caught this morning morning's minion, king-
High there, how he rung upon the rein of a wimpling wing
In his ecstasy! then off, off forth on swing,
Stirred for a bird, – the achieve of, the mastery of the thing!
Brute beauty and valour and act, oh, air, pride, plume, here
Times told lovelier, more dangerous, O my chevalier!
Shine, and blue-bleak embers, ah my dear,
—Gerard Manley Hopkins (1844–1889)
Since Hopkins considers that feet always begin with a stressed syllable in sprung rhythm, for a scansion it is enough to specify which syllables are stressed. One proposed scansion of this poem is:
I cáught this mórning mórning's mínion, kíng-
Hígh there, how he rúng upon the réin of a wímpling wíng
In his écstasy! then óff, óff fórth on swíng,
Stírred for a bírd, – the achíeve of, the mástery of the thíng!
Brute béauty and válour and áct, oh, air, príde, plume, hére
Tímes told lóvelier, more dángerous, Ó my chevalíer!
Shíne, and blúe-bleak émbers, áh my déar,
Authorities disagree about the scansion. Another published discussion of the poem, although it does not give a complete scansion, proposes a reading that differs in the 10th and 14th lines. | https://en.wikipedia.org/wiki?curid=26911 |
Solanales
The Solanales are an order of flowering plants, included in the asterid group of dicotyledons. Some older sources used the name Polemoniales for this order.
Under the older Cronquist system, the last three of the families listed below were placed elsewhere, and a number of other families were included in the order.
In the classification system of Dahlgren the Solanales were in the superorder Solaniflorae (also called Solananae).
The following families are included here in newer systems such as that of the Angiosperm Phylogeny Group (APG): Solanaceae, Convolvulaceae, Montiniaceae, Sphenocleaceae and Hydroleaceae.
The APG II classification treats the Solanales in the group Euasterids I. | https://en.wikipedia.org/wiki?curid=26913 |
Sheepshead (card game)
Sheepshead or Sheephead is an American trick-taking card game derived from Bavaria's national card game, Schafkopf. Sheepshead is most commonly played by five players, but variants exist to allow for two to eight players. There are also many other variants to the game rules, and many slang terms used with the game.
Sheepshead is most commonly played in Wisconsin, where it is sometimes called the "unofficial" state card game. In 1983, it was declared the official card game of the city of Milwaukee. It is also common in counties of southern Indiana that have large German-American populations, and on the Internet.
Numerous tournaments are held throughout Wisconsin during the year, with the largest tournament being the "Nationals", held annually in the Wisconsin Dells during a weekend in September, October or November, and mini-tournaments held hourly throughout Germanfest in Milwaukee during the last weekend of each July. The National 3-Hand Sheepshead Tournament has been held annually in Wisconsin every March since 1970. Its 48-hand sessions are held at locations around the state, offering players an opportunity to play in as many of the 100-plus sessions as they wish.
Schafkopf literally means "sheep's head" and may refer to the practice, going back over a century, of recording the score by drawing a stylised head of a sheep with nine lines. However, some sources argue that the term was probably derived and translated incorrectly from Middle High German and referred to playing cards on a barrel head (from "Kopf", meaning head, and "Schaff", meaning a barrel).
Sheepshead is played with 7-8-9-10-J-Q-K-A in four suits, for a total of 32 cards. This is also known as a Piquet deck, as opposed to the 52 or 54 cards of a full French deck (also known as a Poker deck, or a regular deck of playing cards). A sheepshead deck is made by removing all of the jokers, sixes, fives, fours, threes, and twos from a standard deck.
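For concreteness, the deck construction can be sketched in a few lines of code. This is only an illustrative sketch (Python chosen arbitrarily); the (rank, suit) tuple representation is an assumption made for illustration, not anything prescribed by the game.

```python
from itertools import product

RANKS = ["7", "8", "9", "10", "J", "Q", "K", "A"]   # sixes and below removed
SUITS = ["clubs", "spades", "hearts", "diamonds"]

# Build the 32-card Sheepshead (Piquet) deck as (rank, suit) tuples.
DECK = [(rank, suit) for rank, suit in product(RANKS, SUITS)]
assert len(DECK) == 32
```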
Card strength in sheepshead is different from that in most other games, and it is one of the most difficult things for some beginners to grasp.
There are 14 cards in the trump suit: all four queens, all four jacks, and all of the diamonds. In order of strength from greatest to least, they are: queen of clubs, queen of spades, queen of hearts, queen of diamonds, jack of clubs, jack of spades, jack of hearts, jack of diamonds, then the ace, ten, king, nine, eight and seven of diamonds.
Also, there are 6 of each "fail" suit (18 total).
Clubs, spades, and hearts take no precedence over other fail suits, unlike trump, which always takes fail. (Notice how both aces and tens outrank kings; this is arguably the most confusing aspect of card strength.) The lead suit must be followed if possible; if not, then any card may be played, such as trump (which will take the trick unless a higher trump is played) or a fail card. Playing a fail card of a different suit is called "throwing off" and can be a way to clear up another suit. Additionally, throwing off a point card is called "schmearing."
Each card is given a separate point value as follows: aces are worth 11 points, tens 10, kings 4, queens 3, and jacks 2; nines, eights and sevens are worth nothing.
The strongest cards (queens and jacks) are not worth the most points, giving Sheepshead some of its unusual character.
There are 120 points total in the deck. The goal of the game is to get half of these (60 or 61); in case of a tie, the player who picked up the blind loses, and that player's opponents win. (There are variant rules for more peculiar situations, such as the Leaster.)
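Stated as arithmetic, the card values and the 61-point target can be checked with a short sketch. The trick representation below is an illustrative assumption carried over from the deck sketch above; the point values are the standard ones just listed.

```python
# Standard Schafkopf/Sheepshead card point values (as listed above).
CARD_POINTS = {"A": 11, "10": 10, "K": 4, "Q": 3, "J": 2, "9": 0, "8": 0, "7": 0}

def side_points(tricks, buried=()):
    """Total the card points a side has taken, counting any buried cards."""
    cards = [card for trick in tricks for card in trick] + list(buried)
    return sum(CARD_POINTS[rank] for rank, suit in cards)

# Each suit carries 30 points, so the full deck holds 4 * 30 = 120.
assert 4 * sum(CARD_POINTS.values()) == 120

def picker_side_wins(points):
    # The picker's side needs 61 of the 120 points; a 60-60 tie is a loss.
    return points >= 61
```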
Score is kept using points (not to be confused with the point values of the cards) or using money. Points are given/taken on a zero-sum basis.
The following chart shows the points for a five-person game (though other variations, with a different number of players, have different scoring). Points are awarded based on the point value of cards taken during the hand. When playing for money, each point generally represents a common money unit.
The deck is shuffled and cut. The dealer then deals the cards, starting with the player to the dealer's left, typically two or three at a time to each person. In most standard five and six-handed games, two cards are also dealt to a separate pile called the "blind". These are usually dealt as a pair between rounds of dealing, at any point as long as the last two cards of the deck do not end up in the blind (because the dealer might inadvertently reveal the bottom card while dealing or shuffling).
When done with a five-handed deal, each player should have six cards, with two in the blind.
In one variant, a player may require a redeal if the player's hand has no aces, no face cards, and no trump.
The player to the left of the dealer gets first choice to take the blind (the two face-down cards not dealt to any player). If he passes, the option is given to the next player (in clockwise order). There are several variations covering the case where the dealer does not wish to pick up the blind: the dealer may be required to pick it up, may have the option to call a Leaster, or may be able to call a Doubler.
The individual who takes the blind is called the "picker". The picker adds the two cards in the blind to his hand and then must choose two cards to lay down or "bury". The buried cards are added to the picker's score if the picker's side takes at least one trick.
The picker may also have a partner on his team who will then play against the remaining players. Depending on the variant or house-rule, the partner must automatically be the player with the jack of diamonds, or the picker may be able to call the ace of a fail suit and have that player be his partner. These are discussed in the Variations section.
One of the more intriguing aspects of Sheepshead is that the picker and partner change each hand, and a good deal of the game's strategy is in determining which player is the partner, as his identity is usually not revealed until after the game has begun.
After the picker has buried his cards, the person to the left of the dealer plays the first card. Play continues clockwise until everyone has played. Every player must follow suit if possible. Trump is considered a suit, so if trump is led, and you have trump in your hand, you must play trump. If you cannot follow suit, then you can play any card from your hand. The person who played the card with the highest strength takes the "trick" (the highest trump, or if none, the highest card of the fail suit that was led). The player who took the previous trick then plays, or "leads," a new card for the second trick. After all tricks have been taken, their point values are totaled and the winner declared, with all players adding or reducing their personal points accordingly (see the charts, above). The deal then shifts to the person to the left of the previous dealer.
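The rule for deciding who takes a trick (the highest trump, or else the highest card of the led fail suit) can be sketched as follows. The card representation and orderings are carried over from the sketches above; this is an illustrative sketch, not a rules reference, and it does not model the follow-suit requirement itself.

```python
# Trump from strongest to weakest: queens, jacks, then diamonds high to low.
TRUMP_ORDER = [
    ("Q", "clubs"), ("Q", "spades"), ("Q", "hearts"), ("Q", "diamonds"),
    ("J", "clubs"), ("J", "spades"), ("J", "hearts"), ("J", "diamonds"),
    ("A", "diamonds"), ("10", "diamonds"), ("K", "diamonds"),
    ("9", "diamonds"), ("8", "diamonds"), ("7", "diamonds"),
]
FAIL_ORDER = ["A", "10", "K", "9", "8", "7"]  # aces and tens outrank kings

def trick_winner(cards):
    """Return the index of the card that takes the trick; cards[0] was led."""
    led_suit = cards[0][1]
    def strength(card):
        rank, suit = card
        if card in TRUMP_ORDER:            # any trump beats any fail card
            return (2, -TRUMP_ORDER.index(card))
        if suit == led_suit:               # fail card of the led suit
            return (1, -FAIL_ORDER.index(rank))
        return (0, 0)                      # a thrown-off card never wins
    return max(range(len(cards)), key=lambda i: strength(cards[i]))

# Example: the ace of spades is led but loses to the lowly seven of diamonds.
assert trick_winner([("A", "spades"), ("K", "spades"), ("7", "diamonds")]) == 2
```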
There are a number of different play variations for Sheepshead. Variants may change how partners are chosen, scoring, the suits considered fail, or what occurs when the blind is not picked. Variations in the number of players is discussed in the next section.
The following two variants apply only to five and six-player games, and possibly four-player games. Variants differ in whether the picker is permitted to choose to play alone, and in whether there are some situations where the picker may be "required" to play alone.
The picker chooses a "called ace suit" after picking the blind. Whoever has this called ace will be his partner. There are a few further rules behind this.
In this variant, the partner is automatically the individual with the jack of diamonds. Unlike the Called Ace variant, the partner is not required to play the jack of diamonds with any required haste; thus the identity of the partner is usually secret for more of the game.
The normal rule is that if the picker has the jack of diamonds, whether as a result of the deal or picking up the jack in the blind, the picker must play alone. However, there are a number of variants within this method of play.
One variant allows the picker to call "sheepshead." This means that the picker believes he can take every trick. If he succeeds he receives twice the number of points for a trickless game, but if he misses a single trick (even one lacking points), he must pay twice the value his opponents would have paid him for a trickless hand.
If the picker/partner do not win, they are "bumped". The standard method of playing Sheepshead is that the picker/partner lose twice the points that opponents would lose in a similar loss. This may be called the "Punish the picker" rule. Some house rules do not enforce this "Punish" rule.
Some house rules require the picker to take at least one trick. If the picker/partner do not take at least one trick and lose, then only the picker loses points. Picker -18, partner 0, opponents +6.
In this variant, when a player picks up the blind, any player who was not given the opportunity to pick up the blind and who is not the picker's partner may knock or crack by knocking the table with their fist. This automatically doubles the point values determining the score when the game ends. In the "aces" variant, the crack must take place after the ace has been called but before the first card is played.
This variant allows players to double the point value of the game by revealing that they have the two black or red queens.
Typically, diamonds are considered trump, but some groups use another suit (typically clubs around north central Wisconsin). This would mean a nine of diamonds would be fail while a nine of clubs would be trump instead.
Alternatively, in some groups, the strengths of the various queens and jacks differ from standard rules.
A variant popular in some areas of Minnesota and Wisconsin is to change the order of strength of the trump cards. This is done by increasing the seven of diamond's strength to second in the list of trump:
When playing this variant the seven of diamonds is referred to as "the Spitz". Another variation puts the seven of diamonds first in the list of trump.
Several different scenarios can occur if no one picks up the blind, including a forced pick, a Leaster, or a Doubler.
In this variant, the person on the end is required to pick the blind. This is sometimes offset by a "No Punish" rule, and by the statistics of the situation: if no one else wanted the blind, there is a better chance that the blind holds decent cards, unless the trump are evenly spread out.
In a leaster, the person with the fewest points wins the hand. There is no partner, and the winner simply receives one point from every opponent in the game. The blind is set aside and normally given to the player who takes the last trick. House rules may allow the dealer to declare which trick is given the blind (e.g. the first trick, or the second, etc.). Another house rule may be to set the blind aside so it is not given to anyone. The blind is not viewed until after the hand is over.
A variant of the leaster is the moster, which is played the same as a leaster, but after the hand is scored, the player who took the most points pays out (as if for a simple loss) to all the rest of the players. Thus, in a five-player game, the affected player loses four points and the opponents get one each, unless the score is doubled by other means (cracking, etc.). The exception is taking all of the tricks, which is still scored as a win by the player doing so.
In a doubler, the cards are reshuffled and a new hand is dealt and played as normal. However, at the end of this redeal, the point values lost and gained are doubled.
Typically when a leaster occurs (and during cash games), one point is placed into a pot for the next hand. Then, if the picker wins the hand, he splits the pot with the partner (in a five-handed game, the extra point goes to the picker, such that he receives three points and the partner receives one). However, if the picker loses the hand, the picker and partner must pay into the pot what they would have received.
There are numerous variations in rules, so a discussion of house rules generally occurs before play begins. The following variations can be employed to accommodate different numbers of players.
1) Each player is dealt four cards in a row, face down. Then, four cards are dealt face up to each player and placed on top of the first four cards. The eight cards in front of each player are referred to as their 'battery' in the text below. Then, eight cards are dealt to each player's hand.
Every hand is played with no picker and no partner. Whichever player takes the higher number of points wins the hand.
Each trick has four cards - one from each player's hand, and one from each player's battery (table cards). The highest card, per normal rules, takes the trick. At the end of the trick, any uncovered face down card is turned face up, and is in play for the next trick.
For the first trick, the non-dealer leads a card from their hand, then the dealer plays from their hand, then the non-dealer's battery, then the dealer's battery. Whichever hand or battery takes the trick must lead the next trick. Each trick is 'hand hand battery battery', or 'battery battery hand hand'.
2) Sixteen cards are dealt face down in a four by four rectangle. Players are not allowed to look at the face-down cards. Then, a card is dealt face up on top of these. The sixteen cards (eight stacks of two cards) closest to the dealer are the dealer's cards. A card must be face-up to be played. The opponent starts the first trick by playing one of his face-up cards, and the dealer responds by playing one of his. After each trick is played, any face-down cards uncovered are turned face-up. Play continues until all 32 cards have been played. Players are not allowed to look at their own face-down cards.
1) Each player is dealt ten cards, with two going to the blind. The picker faces the other two players.
2) The sevens of clubs and spades are removed, leaving thirty cards. Nine cards are then dealt to each player, with three going to the blind. The picker faces the others.
3) The six non-trump sevens and eights are removed, dealing eight cards to each player, with two in the blind.
1) Seven cards are dealt to each player with four in the blind. Given the large blind, this variation requires the picker to go "cut-throat" (without a partner).
2) The seven of clubs and seven of spades are removed (or the six of clubs and six of spades are added). Seven (or eight) cards are dealt to each player, with two in the blind. Either the jack or ace partner rules may be used.
3) Each player is dealt eight cards, with no blind. Either (A) the two players holding the black queens are partners, and the partnership is secret until both cards are played; a player holding both black queens plays cut-throat against the three others; (B) the partners are the first two queens played; or (C) the partners are the first two played of any card agreed upon before the deal (7s, 8s, 9s, Ks, 10s, Js, Qs). In all these variations, the players with the agreed-upon partner cards (black, red, or first two played) are considered the picker and partner for scoring purposes. In the latter variations, the timing of playing the agreed-upon card is particularly important. For example, it may be worth wasting the queen or playing a card out of normal strategy to become partners with a player who has already taken a good trick or two, or to avoid being stuck cut-throat or with a bad partner.
5) In this variation, popular in southern Indiana, jacks are higher than queens (still clubs-spades-hearts-diamonds), and hearts (rather than diamonds) are trump. All four players are dealt eight cards. Starting with the player to the left of the dealer, each player has the option to "call" (call a fail-suit ace for a partner), go solo (cut-throat), or pass. When going solo, the player may play a "best" (pay normally against the other three), a "side solo" (call another suit rather than hearts to be trump, and then play against the other three), or a "Billy" (play against the three others while attempting not to take a trick).
Scoring: Players play to 24 on the given system:
Six cards are dealt to each player, with two to the blind. A partner may be chosen by either the ace or jack rules. The partner is the player with the called ace.
1) Five cards are dealt to each player, with two cards in the blind. The partner is automatically the jack of diamonds, and the game is played two against four. If the picker gets the jack of diamonds in the blind, he/she may call the next higher jack not in his/her hand.
2) Five cards are dealt to each player, with two cards in the blind. The partner is automatically the jack of diamonds and the ace of the called suit, with the game played three against three. If the picker gets the jack of diamonds in the blind or the jack of diamonds has the ace of the called suit, it is played two against four.
3) Discard the sevens of clubs and spades. Five cards are dealt to each player, with no blind. Queen of clubs and queen of spades are partners, it is played two against four.
4) Discard the sevens of clubs and spades. Five cards are dealt to each player, with no blind. Seven of Diamonds is highest trump. Queen of clubs, queen of spades, and jack of diamonds are partners. A player having both black queens or a black queen and jack of diamonds has the option to pass one of the cards to the player to the left for one of their cards. Passing must be done before the lead player plays out. Double on the bump is applied to this variation.
1) Four cards are dealt to each player, with four to the blind. The picker takes all four cards from the blind, and buries four. The partner is automatically the jack of diamonds. If the picker has the jack, he/she may call up to the next highest jack not in his/her hand.
2) Four cards are dealt to each player, with four to the blind. The picker takes two cards from the blind, and the player immediately behind him takes the other two blind cards; they bury together and then play as partners against the other five. Also known as Shit-On-Your-Neighbor sheepshead.
3) Four cards are dealt to each player, with four to the blind. The picker takes three cards from the blind, and the player immediately behind him takes the other card. The partner is automatically the jack of diamonds. The player behind the picker is not automatically the partner, so his bury may count towards the picker's opponents.
4) Four cards are dealt to each player, with four to the blind. A die is rolled, and the partner is whatever number is on the die with 1 representing the player to the pickers left, and counting clockwise with six being the person to the picker's right. Each takes and buries two cards.
5) Four cards are dealt to each player, with four to the blind. The picker takes 0, 1, or 2 cards; the person behind him/her is partner and takes 2, 1, or 0 cards respectively. The two remaining cards are not revealed and are automatically buried for the other team. The dealer may go "nuclear", giving all 4 of his cards to the other team's bury and taking the entire blind; the person behind him/her is still partner.
6) Four cards are dealt to each player, with four to the blind. The picker takes 2 or all 4 cards from the blind. If the picker takes 2, he rolls a die to determine his partner, who takes the other 2; the number rolled indicates the partner, counting players clockwise from the picker. If the picker takes all 4 cards from the blind, he plays alone, and must take all four at once: it is not allowed to look at 2 cards and then decide whether to take the remaining 2.
1) Four cards are dealt to each player. The two black queens are partners.
2) Four cards are dealt to each player. The queen of clubs, jack of diamonds, and 7 of diamonds are partners. If one partner has two of these cards, they can call the 8 of diamonds (if they have the 7 and the queen or jack) or jack of hearts (if they have the queen and the jack). If the other partner already has the 8 of diamonds or jack of hearts they can call again. It should always be 3 on 5 unless the partner chooses not to call another partner.
3) Four cards dealt to each player. First two queens played are partners. 7 of diamonds is highest trump.
The following phrases or slang can be used to describe certain behaviors or situations in the game.
A player "mauers" when the player has enough power-cards to pick up the blind, and yet passes (whether for fear one's hand is not actually good enough, or worse, one hopes to set up another player to lose). Mauering is considered to be in very poor taste and in some cases players who do it often enough can be asked to leave a game. Of course, mauering can backfire if the hand results in a leaster, and the mauerer is stuck with what is then a poor hand.
There are different methods of deciding if a player has a strong hand. In a five-handed game, some players pick on any four trump, while others decide based on the number of higher trump (queens and jacks). Others use a numbering system, giving each type of trump a point value and making the decision to pick based on a certain number of points. Statistically, players who have an opportunity to pick first need a stronger hand, while picking on the end usually means that since nobody else picked, the trump are fairly evenly spread out. Because of the complex nature of the game, in most cases mauering is a matter of opinion.
A player "schmears" a trick by playing a high-point card (usually an ace or ten) into a trick that a player thinks will be (or has already been) taken by one of their partners, in order to increase the points earned on that trick. The term may also be a noun, referring to the high-point card played in this manner. An example of schmearing (by Opponents 2 and 3):
This trick was worth 34 points. That's schneider all by itself.
Opponent 1 is guaranteed to win the trick as the queen of clubs is the highest card. As a result, opponents 2 and 3 both took advantage of the situation and put high-counting cards down. Also note that the picker played the 8♦, a no-counting card—the opposite of schmearing.
Schmearing is an important strategy. In this example, schmearing increased the value of the trick by 21 points to a total of 34 points—schneider all by itself and over a quarter of the points available.
A player "reneges" means to fail to follow suit when able and required by the rules to do so. Reneging is a form of cheating. In most circles, this results in the guilty party forfeiting the hand.
When a player holds all or most of the top trump there is no way for the opposition to win. This unusually powerful hand is often derided for its ease of play; "My granny could win that hand." The hand still counts and is played out.
In some circles, the player simply lays down the granny hand and the opponents concede by acclamation. Even if the hand is not completely a granny hand, some circles permit a player to state that he believes he will take all of the remaining tricks (possibly requiring an explanation, say, "I have all of the remaining trump"), giving opponents an opportunity to object (say, if the calling player miscounted trump), forestalling the need to play out the remainder of the hand.
When a teammate uses a higher-powered card to take a trick that is already going to his/her team, usually when the trick is necessarily going to another teammate. Sometimes this is unavoidable, especially in cases where there is only one card of a particular suit left in a player's hand. Sometimes it is strategic, such as to place an opponent on each side of the picker and/or the partner.
As with any partner game, code words or signs can be used to cheat. This involves 2 players creating a word or phrase which tells their partner in crime what to lead. For instance, Player A and Player B are colluding with each other in a game of 4 handed. Player A has the lead and Player B is behind the dealer without a fail Spade. Player B uses the phrase "let's rock n' roll" to signal Player A to lead spades. Player A leads spades, the picker trumps it, and Player B trumps over the Picker. This is very much frowned upon and if caught, the players are usually kicked out of the game. Also called “Table Talk”.
A player "throws off" or "sloughs" when, after a fail card is played and the player does not have any of that fail suit but does have trump, decides to play a fail card rather than trump. Sloughing well is a key to winning at Sheepshead, especially as the picker. One popular situation to throw off is as follows and is known as "The Throw Off"; (1) a fail suit is led that the picker does not have, (2) the picker is 2nd in line, and (3) the picker throws off, usually because he has a poor hand, hoping his partner can take the trick. | https://en.wikipedia.org/wiki?curid=26914 |
Linguistic relativity
The hypothesis of linguistic relativity, part of relativism, also known as the Sapir–Whorf hypothesis, the Whorf hypothesis, or Whorfianism, is a principle claiming that the structure of a language affects its speakers' world view or cognition, and thus people's perceptions are relative to their spoken language.
The principle is often defined in one of two versions: the "strong hypothesis", which was held by some of the early linguists before World War II, and the "weak hypothesis", mostly held by some modern linguists.
The principle was accepted and then abandoned by linguists during the early 20th century, as perceptions of the social acceptability of such ideas changed, especially after World War II. The origin of formulated arguments against the acceptance of linguistic relativity is attributed to Noam Chomsky.
The term "Sapir–Whorf hypothesis" is considered a misnomer by linguists for several reasons: Edward Sapir and Benjamin Lee Whorf never co-authored any works, and never stated their ideas in terms of a hypothesis. The distinction between a weak and a strong version of this hypothesis is also a later invention; Sapir and Whorf never set up such a dichotomy, although often their writings and their views of this relativity principle are phrased in stronger or weaker terms.
The idea was first clearly expressed by 19th-century thinkers, such as Wilhelm von Humboldt, who saw language as the expression of the spirit of a nation. Members of the early 20th-century school of American anthropology headed by Franz Boas and Edward Sapir also embraced forms of the idea to a certain extent, including in a 1928 meeting of the Linguistic Society of America, but Sapir in particular wrote more often against than in favor of anything like linguistic determinism. Sapir's student, Benjamin Lee Whorf, came to be seen as the primary proponent as a result of his published observations of how he perceived linguistic differences to have consequences in human cognition and behavior. Harry Hoijer, another of Sapir's students, introduced the term "Sapir–Whorf hypothesis", even though the two scholars never formally advanced any such hypothesis. A strong version of relativist theory was developed from the late 1920s by the German linguist Leo Weisgerber. Whorf's principle of linguistic relativity was reformulated as a testable hypothesis by Roger Brown and Eric Lenneberg, who conducted experiments designed to find out whether color perception varies between speakers of languages that classified colors differently. As the study of the universal nature of human language and cognition came into focus in the 1960s, the idea of linguistic relativity fell out of favor among linguists. A 1969 study by Brent Berlin and Paul Kay demonstrated the existence of universal semantic constraints in the field of color terminology, which was widely seen as discrediting the existence of linguistic relativity in this domain, although this conclusion has been disputed by relativist researchers.
From the late 1980s, a new school of linguistic relativity scholars has examined the effects of differences in linguistic categorization on cognition, finding broad support for non-deterministic versions of the hypothesis in experimental contexts. Some effects of linguistic relativity have been shown in several semantic domains, although they are generally weak. Currently, a balanced view of linguistic relativity is espoused by most linguists holding that language influences certain kinds of cognitive processes in non-trivial ways, but that other processes are better seen as arising from connectionist factors. Research is focused on exploring the ways and extent to which language influences thought. The principle of linguistic relativity and the relation between language and thought has also received attention in varying academic fields from philosophy to psychology and anthropology, and it has also inspired and coloured works of fiction and the invention of constructed languages.
The strongest form of the theory is linguistic determinism, which holds that language entirely determines the range of cognitive processes. The hypothesis of linguistic determinism is now generally agreed to be false.
The weaker form proposes that language provides constraints in some areas of cognition, but that it is by no means determinative. Research on weaker forms has produced positive empirical evidence for a relationship.
The idea that language and thought are intertwined is ancient. Plato argued against sophist thinkers such as Gorgias of Leontini, who held that the physical world cannot be experienced except through language; this made the question of truth dependent on aesthetic preferences or functional consequences. Plato held instead that the world consisted of eternal ideas and that language should reflect these ideas as accurately as possible. Following Plato, St. Augustine, for example, held the view that language was merely labels applied to already existing concepts. This view remained prevalent throughout the Middle Ages. Roger Bacon held the opinion that language was but a veil covering up eternal truths, hiding them from human experience. For Immanuel Kant, language was but one of several tools used by humans to experience the world.
In the late 18th and early 19th centuries, the idea of the existence of different national characters, or "Volksgeister", of different ethnic groups was the moving force behind the German romantics school and the beginning ideologies of ethnic nationalism.
Although himself a Swede, Emanuel Swedenborg inspired several of the German Romantics. As early as 1749, he alluded to something along the lines of linguistic relativity in commenting on a passage in the table of nations in the book of Genesis, and in 1771 he spelled this out more explicitly.
Johann Georg Hamann is often suggested to be the first among the actual German Romantics to speak of the concept of "the genius of a language." In his "Essay Concerning an Academic Question," Hamann suggests that a people's language affects their worldview.
In 1820, Wilhelm von Humboldt connected the study of language to the national romanticist program by proposing the view that language is the fabric of thought. Thoughts are produced as a kind of internal dialog using the same grammar as the thinker's native language. This view was part of a larger picture in which the world view of an ethnic nation, their "Weltanschauung", was seen as being faithfully reflected in the grammar of their language. Von Humboldt argued that languages with an inflectional morphological type, such as German, English and the other Indo-European languages, were the most perfect languages and that accordingly this explained the dominance of their speakers over the speakers of less perfect languages.
The idea that some languages are superior to others and that lesser languages maintained their speakers in intellectual poverty was widespread in the early 20th century. American linguist William Dwight Whitney, for example, actively strove to eradicate Native American languages, arguing that their speakers were savages and would be better off learning English and adopting a "civilized" way of life. The first anthropologist and linguist to challenge this view was Franz Boas. While undertaking geographical research in northern Canada he became fascinated with the Inuit people and decided to become an ethnographer. Boas stressed the equal worth of all cultures and languages, that there was no such thing as a primitive language and that all languages were capable of expressing the same content, albeit by widely differing means. Boas saw language as an inseparable part of culture and he was among the first to require of ethnographers to learn the native language of the culture under study and to document verbal culture such as myths and legends in the original language.
Boas' student Edward Sapir reached back to the Humboldtian idea that languages contained the key to understanding the world views of peoples. He espoused the viewpoint that because of the differences in the grammatical systems of languages no two languages were similar enough to allow for perfect cross-translation. Sapir also thought because language represented reality differently, it followed that the speakers of different languages would perceive reality differently.
On the other hand, Sapir explicitly rejected strong linguistic determinism by stating, "It would be naïve to imagine that any analysis of experience is dependent on pattern expressed in language."
Sapir was explicit that the connections between language and culture were neither thoroughgoing nor particularly deep, if they existed at all.
Sapir offered similar observations about speakers of so-called "world" or "modern" languages, noting, "possession of a common language is still and will continue to be a smoother of the way to a mutual understanding between England and America, but it is very clear that other factors, some of them rapidly cumulative, are working powerfully to counteract this leveling influence. A common language cannot indefinitely set the seal on a common culture when the geographical, physical, and economic determinants of the culture are no longer the same throughout the area."
While Sapir never made a point of studying directly how languages affected thought, some notion of (probably "weak") linguistic relativity underlay his basic understanding of language, and would be taken up by Whorf.
Drawing on influences such as Humboldt and Friedrich Nietzsche, some European thinkers developed ideas similar to those of Sapir and Whorf, generally working in isolation from each other. Prominent in Germany from the late 1920s through into the 1960s were the strongly relativist theories of Leo Weisgerber and his key concept of a 'linguistic inter-world', mediating between external reality and the forms of a given language, in ways peculiar to that language. Russian psychologist Lev Vygotsky read Sapir's work and experimentally studied the ways in which the development of concepts in children was influenced by structures given in language. His 1934 work "Thought and Language" has been compared to Whorf's and taken as mutually supportive evidence of language's influence on cognition. Drawing on Nietzsche's ideas of perspectivism, Alfred Korzybski developed the theory of general semantics, which has been compared to Whorf's notions of linguistic relativity. Though influential in their own right, these works have not been influential in the debate on linguistic relativity, which has tended to center on the American paradigm exemplified by Sapir and Whorf.
More than any other linguist, Benjamin Lee Whorf has become associated with what he called the "linguistic relativity principle". Studying Native American languages, he attempted to account for the ways in which grammatical systems and language-use differences affected perception. Whorf also examined how a scientific account of the world differed from a religious account, which led him to study the original languages of religious scripture and to write several anti-evolutionist pamphlets. Whorf's opinions regarding the nature of the relation between language and thought remain under contention. Critics such as Lenneberg, Black and Pinker attribute to Whorf a strong linguistic determinism, while Lucy, Silverstein and Levinson point to Whorf's explicit rejections of determinism, and to passages where he contends that translation and commensuration are possible.
Although Whorf lacked an advanced degree in linguistics, his reputation reflects his acquired competence. His peers at Yale University considered the 'amateur' Whorf to be the best man available to take over Sapir's graduate seminar in Native American linguistics while Sapir was on sabbatical in 1937–38. He was highly regarded by authorities such as Boas, Sapir, Bloomfield and Tozzer. Indeed, Lucy wrote, "despite his 'amateur' status, Whorf's work in linguistics was and still is recognized as being of superb professional quality by linguists".
Detractors such as Lenneberg, Chomsky and Pinker criticized him for insufficient clarity in his description of how language influences thought, and for not proving his conjectures. Most of his arguments were in the form of anecdotes and speculations that served as attempts to show how "exotic" grammatical traits were connected to what were apparently equally exotic worlds of thought.
Among Whorf's best-known examples of linguistic relativity are instances where an indigenous language has several terms for a concept that is only described with one word in European languages. (Whorf used the acronym SAE, for "Standard Average European", to allude to the rather similar grammatical structures of the well-studied European languages, in contrast to the greater diversity of less-studied languages.)
One of Whorf's examples was the supposedly large number of words for "snow" in the Inuit language, an example that was later contested as a misrepresentation.
Another is the Hopi language's words for water: one indicating drinking water in a container and another indicating a natural body of water. These examples of polysemy served the double purpose of showing that indigenous languages sometimes made more fine-grained semantic distinctions than European languages and that direct translation between two languages, even of seemingly basic concepts such as snow or water, is not always possible.
Another example is from Whorf's experience as a chemical engineer working for an insurance company as a fire inspector. While inspecting a chemical plant he observed that the plant had two storage rooms for gasoline barrels, one for the full barrels and one for the empty ones. He further noticed that while no employees smoked cigarettes in the room for full barrels, no one minded smoking in the room with empty barrels, although this was potentially much more dangerous because of the highly flammable vapors still in the barrels. He concluded that the use of the word "empty" in connection with the barrels had led the workers to unconsciously regard them as harmless, although consciously they were probably aware of the risk of explosion. Lenneberg later criticized this example as not actually demonstrating causality between the use of the word "empty" and the act of smoking, but as an instance of circular reasoning. Pinker in "The Language Instinct" ridiculed this example, claiming that it revealed a failing of human insight rather than of language.
Whorf's most elaborate argument for linguistic relativity regarded what he believed to be a fundamental difference in the understanding of time as a conceptual category among the Hopi. He argued that in contrast to English and other SAE languages, Hopi does not treat the flow of time as a sequence of distinct, countable instances, like "three days" or "five years", but rather as a single process, and that consequently it has no nouns referring to units of time as SAE speakers understand them. He proposed that this view of time was fundamental to Hopi culture and explained certain Hopi behavioral patterns. Malotki later claimed that he had found no evidence of Whorf's claims in 1980s-era speakers, nor in historical documents dating back to the arrival of Europeans. Malotki used evidence from archaeological data, calendars, historical documents and modern speech, and concluded that there was no evidence that the Hopi conceptualize time in the way Whorf suggested. Universalist scholars such as Pinker often see Malotki's study as a final refutation of Whorf's claim about Hopi, whereas relativist scholars such as Lucy and Penny Lee criticized Malotki's study for mischaracterizing Whorf's claims and for forcing Hopi grammar into a model of analysis that does not fit the data.
Whorf died in 1941 at age 44, leaving multiple unpublished papers. His line of thought was continued by linguists and anthropologists such as Hoijer and Lee who both continued investigations into the effect of language on habitual thought, and Trager, who prepared a number of Whorf's papers for posthumous publishing. The most important event for the dissemination of Whorf's ideas to a larger public was the publication in 1956 of his major writings on the topic of linguistic relativity in a single volume titled "Language, Thought and Reality".
In 1953, Eric Lenneberg criticized Whorf's examples from an objectivist view of language, holding that languages are principally meant to represent events in the real world and that, even though languages express these ideas in various ways, the meanings of such expressions and therefore the thoughts of the speaker are equivalent. He argued that Whorf's English descriptions of a Hopi speaker's view of time were in fact translations of the Hopi concept into English, thereby disproving linguistic relativity. However, Whorf was concerned with how the habitual "use" of language influences habitual behavior, rather than with translatability. Whorf's point was that while English speakers may be able to "understand" how a Hopi speaker thinks, they do not "think" in that way.
Lenneberg's main criticism of Whorf's works was that he never showed the connection between a linguistic phenomenon and a mental phenomenon. With Brown, Lenneberg proposed that proving such a connection required directly matching linguistic phenomena with behavior. They assessed linguistic relativity experimentally and published their findings in 1954.
Since neither Sapir nor Whorf had ever stated a formal hypothesis, Brown and Lenneberg formulated their own. Their two tenets were (i) "the world is differently experienced and conceived in different linguistic communities" and (ii) "language causes a particular cognitive structure". Brown later developed them into the so-called "weak" and "strong" formulations.
Brown's formulations became widely known and were retrospectively attributed to Whorf and Sapir although the second formulation, verging on linguistic determinism, was never advanced by either of them.
Since Brown and Lenneberg believed that the objective reality denoted by language was the same for speakers of all languages, they decided to test how different languages codified the same message differently and whether differences in codification could be proven to affect behavior.
They designed experiments involving the codification of colors. In their first experiment, they investigated whether it was easier for speakers of English to remember color shades for which they had a specific name than to remember colors that were not as easily definable by words. This allowed them to compare the linguistic categorization directly to a non-linguistic task. In a later experiment, speakers of two languages that categorize colors differently (English and Zuni) were asked to recognize colors. In this way, it could be determined whether the differing color categories of the two speakers would determine their ability to recognize nuances within color categories. Brown and Lenneberg found that Zuni speakers, who classify green and blue together as a single color, did have trouble recognizing and remembering nuances within the green/blue category. Brown and Lenneberg's study began a tradition of investigation of linguistic relativity through color terminology.
Lenneberg was also one of the first cognitive scientists to begin development of the Universalist theory of language that was formulated by Chomsky in the form of Universal Grammar, effectively arguing that all languages share the same underlying structure. The Chomskyan school also holds the belief that linguistic structures are largely innate and that what are perceived as differences between specific languages are surface phenomena that do not affect the brain's universal cognitive processes. This theory became the dominant paradigm in American linguistics from the 1960s through the 1980s, while linguistic relativity became the object of ridicule.
Examples of universalist influence in the 1960s are the studies by Berlin and Kay who continued Lenneberg's color research. They studied color terminology formation and showed clear universal trends in color naming. For example, they found that even though languages have different color terminologies, they generally recognize certain hues as more focal than others. They showed that in languages with few color terms, it is predictable from the number of terms which hues are chosen as focal colors, for example, languages with only three color terms always have the focal colors black, white and red. The fact that what had been believed to be random differences between color naming in different languages could be shown to follow universal patterns was seen as a powerful argument against linguistic relativity. Berlin and Kay's research has since been criticized by relativists such as Lucy, who argued that Berlin and Kay's conclusions were skewed by their insistence that color terms encode only color information. This, Lucy argues, made them blind to the instances in which color terms provided other information that might be considered examples of linguistic relativity.
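The implicational pattern Berlin and Kay reported can be made concrete with a small sketch. The following Python fragment is illustrative only: the data structure and function name are invented for this example, and the ordering within stages is simplified, but it shows how, on their account, the number of basic color terms predicts which focal colors a language names.

    # A simplified encoding of the Berlin-Kay implicational sequence:
    # languages acquire basic color terms in a broadly fixed order.
    BERLIN_KAY_SEQUENCE = [
        {"black", "white"},                    # stage I
        {"red"},                               # stage II
        {"green", "yellow"},                   # stages III-IV (order varies)
        {"blue"},                              # stage V
        {"brown"},                             # stage VI
        {"purple", "pink", "orange", "grey"},  # stage VII
    ]

    def predicted_focal_colors(term_count):
        # Accumulate hues along the hierarchy until term_count is reached.
        colors = set()
        for group in BERLIN_KAY_SEQUENCE:
            for hue in sorted(group):
                if len(colors) == term_count:
                    return colors
                colors.add(hue)
        return colors

    # As the text notes, a three-term language is predicted to name
    # black, white and red.
    assert predicted_focal_colors(3) == {"black", "white", "red"}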
Other universalist researchers dedicated themselves to dispelling other aspects of linguistic relativity, often attacking Whorf's specific points and examples. For example, Malotki's monumental study of time expressions in Hopi presented many examples that challenged Whorf's "timeless" interpretation of Hopi language and culture, but it seemingly failed to address the linguistic relativist argument actually posed by Whorf (i.e. that the understanding of time by native Hopi speakers differed from that of speakers of European languages due to the differences in the organization and construction of their respective languages; Whorf never claimed that Hopi speakers lacked any concept of time). Malotki himself acknowledges that the conceptualizations are different, but because he ignores Whorf's use of scare quotes around the word "time" and the qualifier "what we call", he takes Whorf to be arguing that the Hopi have no concept of time at all.
Today many followers of the universalist school of thought still oppose linguistic relativity. For example, Pinker argues in "The Language Instinct" that thought is independent of language, that language is not in any fundamental way constitutive of human thought, and that human beings do not even think in "natural" language, i.e. any language that we actually communicate in; rather, we think in a meta-language, preceding any natural language, called "mentalese." Pinker attacks what he calls "Whorf's radical position," declaring, "the more you examine Whorf's arguments, the less sense they make."
Pinker and other universalists have been accused by relativists of misrepresenting Whorf's views and arguing against strawmen.
Joshua Fishman argued that Whorf's true position was largely overlooked. In 1978, he suggested that Whorf was a "neo-Herderian champion" and in 1982, he proposed "Whorfianism of the third kind" in an attempt to refocus linguists' attention on what he claimed was Whorf's real interest, namely the intrinsic value of "little peoples" and "little languages". Whorf himself had criticized Ogden's Basic English on the grounds that confining expression to the patterns of English also confines thought.
Where Brown's weak version of the linguistic relativity hypothesis proposes that language "influences" thought and the strong version that language "determines" thought, Fishman's 'Whorfianism of the third kind' proposes that language "is a key to culture".
In the late 1980s and early 1990s, advances in cognitive psychology and cognitive linguistics renewed interest in the Sapir–Whorf hypothesis. One of those who adopted a more Whorfian approach was George Lakoff. He argued that language is often used metaphorically and that languages use different cultural metaphors that reveal something about how speakers of that language think. For example, English employs conceptual metaphors likening time to money, so that time can be saved and spent and invested, whereas other languages do not talk about time in that way. Other such metaphors are common to many languages because they are based on general human experience, for example, metaphors associating "up" with "good" and "bad" with "down". Lakoff also argued that metaphor plays an important part in political debates such as the "right to life" or the "right to choose"; or "illegal aliens" or "undocumented workers".
In his book "Women, Fire, and Dangerous Things: What Categories Reveal about the Mind", Lakoff reappraised linguistic relativity and especially Whorf's views about how linguistic categorization reflects and/or influences mental categories. He concluded that the debate had been confused, and described four parameters on which researchers differed in their opinions about what constitutes linguistic relativity.
Lakoff concluded that many of Whorf's critics had criticized him using novel definitions of linguistic relativity, rendering their criticisms moot.
The publication of the 1996 anthology "Rethinking Linguistic Relativity" edited by Gumperz and Levinson began a new period of linguistic relativity studies that focused on cognitive and social aspects. The book included studies on the linguistic relativity and universalist traditions. Levinson documented significant linguistic relativity effects in the linguistic conceptualization of spatial categories between languages. For example, men speaking the Guugu Yimithirr language in Queensland gave accurate navigation instructions using a compass-like system of north, south, east and west, along with a hand gesture pointing to the starting direction.
Separate studies by Bowerman and Slobin treated the role of language in cognitive processes. Bowerman showed that certain cognitive processes did not use language to any significant extent and therefore could not be subject to linguistic relativity. Slobin described another kind of cognitive process that he named "thinking for speaking" – the kind of process in which perceptional data and other kinds of prelinguistic cognition are translated into linguistic terms for communication. These, Slobin argues, are the kinds of cognitive process that are at the root of linguistic relativity.
Researchers such as Boroditsky, Lucy and Levinson believe that language influences thought, but in more limited ways than the broadest early claims suggested. Researchers examine the interface between thought (or cognition), language and culture and describe the relevant influences. They use experimental data to back up their conclusions. Kay ultimately concluded that "[the] Whorf hypothesis is supported in the right visual field but not the left". His findings show that accounting for brain lateralization offers another perspective.
Psycholinguistic studies explored motion perception, emotion perception, object representation and memory. The gold standard of psycholinguistic studies on linguistic relativity is now finding non-linguistic cognitive differences in speakers of different languages (thus rendering inapplicable Pinker's criticism that linguistic relativity is "circular").
Recent work with bilingual speakers attempts to distinguish the effects of language from those of culture on bilingual cognition including perceptions of time, space, motion, colors and emotion. Researchers described differences between bilinguals and monolinguals in perception of color, representations of time and other elements of cognition.
Lucy identified three main strands of research into linguistic relativity.
The "structure-centered" approach starts with a language's structural peculiarity and examines its possible ramifications for thought and behavior. The defining example is Whorf's observation of discrepancies between the grammar of time expressions in Hopi and English. More recent research in this vein is Lucy's research describing how usage of the categories of grammatical number and of numeral classifiers in the Mayan language Yucatec result in Mayan speakers classifying objects according to material rather than to shape as preferred by English speakers.
The "domain-centered" approach selects a semantic domain and compares it across linguistic and cultural groups. It centered on color terminology, although this domain is acknowledged to be sub-optimal, because color perception, unlike other semantic domains, is hardwired into the neural system and as such is subject to more universal restrictions than other semantic domains.
Space is another semantic domain that has proven fruitful for linguistic relativity studies. Spatial categories vary greatly across languages. Speakers rely on the linguistic conceptualization of space in performing many ordinary tasks. Levinson and others reported three basic spatial categorizations. While many languages use combinations of them, some languages exhibit only one type, with corresponding behaviors. For example, Guugu Yimithirr only uses absolute directions when describing spatial relations: the position of everything is described by using the cardinal directions. Speakers define a location as "north of the house", while an English speaker may use relative positions, saying "in front of the house" or "to the left of the house".
The "behavior centered" approach starts by comparing behavior across linguistic groups and then searches for causes for that behavior in the linguistic system. Whorf attributed the occurrence of fires at a chemical plant to the workers' use of the word 'empty' to describe the barrels containing only explosive vapors. Bloom noticed that speakers of Chinese had unexpected difficulties answering counter-factual questions posed to them in a questionnaire. He concluded that this was related to the way in which counter-factuality is marked grammatically in Chinese. Other researchers attributed this result to Bloom's flawed translations. Strømnes examined why Finnish factories had a higher occurrence of work related accidents than similar Swedish ones. He concluded that cognitive differences between the grammatical usage of Swedish prepositions and Finnish cases could have caused Swedish factories to pay more attention to the work process while Finnish factory organizers paid more attention to the individual worker.
Everett's work on the Pirahã language of the Brazilian Amazon found several peculiarities that he interpreted as corresponding to linguistically rare features, such as a lack of numbers and color terms in the way those are otherwise defined and the absence of certain types of clauses. Everett's conclusions were met with skepticism from universalists who claimed that the linguistic deficit is explained by the lack of need for such concepts.
Recent research with non-linguistic experiments in languages with different grammatical properties (e.g., languages with and without numeral classifiers or with different grammatical gender systems) showed that such differences correspond to differences in how speakers categorize. Experimental research also suggests that this linguistic influence on thought diminishes over time, as when speakers of one language are exposed to another.
A study published by the American Psychological Association's Journal of Experimental Psychology claimed that language can influence how one estimates time. The study focused on three groups: those who spoke only Swedish, those who spoke only Spanish, and bilingual speakers of both languages. Swedish speakers describe time using distance terms like "long" or "short", while Spanish speakers do so using quantity-related terms like "a lot" or "little". The researchers asked the participants to estimate how much time had passed while watching a line growing across a screen, or a container being filled, or both. The researchers stated that "When reproducing duration, Swedish speakers were misled by stimulus length, and Spanish speakers were misled by stimulus size/quantity." When the bilinguals were prompted with the word "duración" (the Spanish word for duration), they based their time estimates on how full the containers were, ignoring the growing lines. When prompted with the word "tid" (the Swedish word for duration), they estimated the time elapsed solely by the distance the lines had traveled.
Research continued after Lenneberg/Roberts and Brown/Lenneberg. The studies showed a correlation between color term numbers and ease of recall in both Zuni and English speakers. Researchers attributed this to focal colors having higher codability than less focal colors, rather than to linguistic relativity effects. Berlin/Kay found universal typological color principles that are determined by biological rather than linguistic factors. This study sparked studies into typological universals of color terminology. Researchers such as Lucy, Saunders and Levinson argued that Berlin and Kay's study does not refute linguistic relativity in color naming, because of unsupported assumptions in their study (such as whether all cultures in fact have a clearly defined category of "color") and because of related data problems. Researchers such as Maclaury continued investigation into color naming. Like Berlin and Kay, Maclaury concluded that the domain is governed mostly by physical-biological universals.
Linguistic relativity inspired others to consider whether thought could be influenced by manipulating language.
The issue bears on philosophical, psychological, linguistic and anthropological questions.
A major question is whether human psychological faculties are mostly innate or whether they are mostly a result of learning, and hence subject to cultural and social processes such as language. The innate view holds that humans share the same set of basic faculties, that variability due to cultural differences is less important, and that the human mind is mostly a biological construction, so that all humans, sharing the same neurological configuration, can be expected to have similar cognitive patterns.
Multiple alternative positions have advocates. The contrary constructivist position holds that human faculties and concepts are largely influenced by socially constructed and learned categories, without many biological restrictions. Another variant is the idealist position, which holds that human mental capacities are generally unrestricted by biological-material strictures. Another is the essentialist position, which holds that essential differences may influence the ways individuals or groups experience and conceptualize the world. Yet another is the relativist position (cultural relativism), which sees different cultural groups as employing different conceptual schemes that are not necessarily compatible or commensurable, nor more or less in accord with external reality.
Another debate considers whether thought is a form of internal speech or is independent of and prior to language.
In the philosophy of language, the question addresses the relations between language, knowledge and the external world, and the concept of truth. Philosophers such as Putnam, Fodor, Davidson and Dennett see language as directly representing entities in the objective world, with categorization reflecting that world. Other philosophers (e.g. Quine, Searle, Foucault) argue that categorization and conceptualization are subjective and arbitrary.
Another question is whether language is a tool for representing and referring to objects in the world, or whether it is a system used to construct mental representations that can be communicated.
Sapir/Whorf contemporary Alfred Korzybski was independently developing his theory of general semantics, which was aimed at using language's influence on thinking to maximize human cognitive abilities. Korzybski's thinking was influenced by logical philosophy such as Russell and Whitehead's "Principia Mathematica" and Wittgenstein's "Tractatus Logico-Philosophicus". Although Korzybski was not aware of Sapir and Whorf's writings, the movement was followed by Whorf-admirer Stuart Chase, who fused Whorf's interest in cultural-linguistic variation with Korzybski's programme in his popular work "The Tyranny of Words". S. I. Hayakawa was a follower and popularizer of Korzybski's work, writing "Language in Thought and Action". The general semantics movement influenced the development of neurolinguistic programming, another therapeutic technique that seeks to use awareness of language use to influence cognitive patterns.
Korzybski independently described a "strong" version of the hypothesis of linguistic relativity.
In their fiction, authors such as Ayn Rand and George Orwell explored how linguistic relativity might be exploited for political purposes. In Rand's "Anthem", a fictive communist society removed the possibility of individualism by removing the word "I" from the language. In Orwell's "1984", the authoritarian state created the language Newspeak to make it impossible for people to think critically about the government, or even to contemplate that they might be impoverished or oppressed, by reducing the number of available words and thereby the range of speakers' thought.
Others have been fascinated by the possibilities of creating new languages that could enable new, and perhaps better, ways of thinking. Examples of such languages designed to explore the human mind include Loglan, explicitly designed by James Cooke Brown to test the linguistic relativity hypothesis by examining whether it would make its speakers think more logically. Speakers of Lojban, an evolution of Loglan, report that they feel speaking the language enhances their ability for logical thinking. Suzette Haden Elgin, who was involved in the early development of neurolinguistic programming, invented the language Láadan to explore linguistic relativity by making it easier to express what Elgin considered the female worldview, as opposed to Standard Average European languages, which she considered to convey a "male centered" world view. John Quijada's language Ithkuil was designed to explore the limits of the number of cognitive categories a language can keep its speakers aware of at once. Similarly, Sonja Lang's Toki Pona was developed according to a Taoist point of view to explore how (or whether) such a language would direct human thought.
APL programming language originator Kenneth E. Iverson believed that the Sapir–Whorf hypothesis applied to computer languages (without actually mentioning it by name). His Turing Award lecture, "Notation as a tool of thought", was devoted to this theme, arguing that more powerful notations aided thinking about computer algorithms.
The essays of Paul Graham explore similar themes, such as a conceptual hierarchy of computer languages, with more expressive and succinct languages at the top. Thus, the so-called "blub" paradox (after a hypothetical programming language of average complexity called "Blub") says that anyone preferentially using some particular programming language will "know" that it is more powerful than some, but not that it is less powerful than others. The reason is that "writing" in some language means "thinking" in that language. Hence the paradox, because typically programmers are "satisfied with whatever language they happen to use, because it dictates the way they think about programs".
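The contrast Iverson and Graham describe can be illustrated with a short sketch. The following Python fragment is only an illustration (the function names are invented for this example): it computes the same result twice, once in step-by-step imperative style and once as a single expression that describes the result directly.

    # The same computation at two levels of notational power.

    def sum_of_even_squares_imperative(numbers):
        # Step-by-step thinking: accumulate state in an explicit loop.
        total = 0
        for n in numbers:
            if n % 2 == 0:
                total += n * n
        return total

    def sum_of_even_squares_expressive(numbers):
        # "Whole-value" thinking: state what the result is, not how to build it.
        return sum(n * n for n in numbers if n % 2 == 0)

    assert sum_of_even_squares_imperative(range(10)) == 120
    assert sum_of_even_squares_expressive(range(10)) == 120

On Graham's account, a programmer who habitually thinks in the first style can read the second but may never reach for it, which is exactly the asymmetry the "blub" paradox describes.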
In a 2003 presentation at an open source convention, Yukihiro Matsumoto, creator of the programming language Ruby, said that one of his inspirations for developing the language was the science fiction novel "Babel-17", based on the Sapir–Whorf Hypothesis.
Ted Chiang's short story "Story of Your Life" developed the concept of the Sapir–Whorf hypothesis as applied to an alien species which visits Earth. The aliens' biology contributes to their spoken and written languages, which are distinct. In the 2016 American film "Arrival", based on Chiang's short story, the Sapir–Whorf hypothesis is the premise. The protagonist explains that "the Sapir–Whorf hypothesis is the theory that the language you speak determines how you think".
In his science fiction novel "The Languages of Pao", author Jack Vance describes how specialized languages are a major part of a strategy to create specific classes in a society, enabling the population to withstand occupation and develop itself.
Statute of limitations
A statute of limitations, known in civil law systems as a prescriptive period, is a law passed by a legislative body to set the maximum time after an event within which legal proceedings may be initiated.
When the time specified in a statute of limitations passes, a claim might no longer be filed or, if filed, may be subject to dismissal if the defense that the claim is time-barred is raised. When a statute of limitations expires in a criminal case, the courts no longer have jurisdiction. Crimes that have statutes of limitations are thereby distinguished from the most serious crimes, for which charges may be brought at any time.
In civil law systems, such provisions are typically part of their civil or criminal codes. The cause of action dictates the statute of limitations, which can be reduced (or extended) to ensure a fair trial. The intention of these laws is to facilitate resolution within a "reasonable" length of time. What amount of time is considered "reasonable" varies from country to country, and within countries such as the United States from state to state. Within countries and states, the statute of limitations may vary from one civil or criminal action to another. Some nations have no statute of limitations whatsoever.
Analysis of a statute of limitations also requires the examination of any associated statute of repose, tolling provisions, and exclusions.
Common law legal systems can include a statute specifying the length of time within which a claimant or prosecutor must file a case. In some civil jurisdictions (e.g., California), a case cannot begin after the period specified, and courts have no jurisdiction over cases filed after the statute of limitations has expired. In some other jurisdictions (e.g., New South Wales, Australia), a claim can be filed which may prove to have been brought outside the limitations period, but the court will retain jurisdiction in order to determine that issue, and the onus is on the defendant to plead it as part of their defence, or else the claim will not be statute barred.
Once filed, cases do not need to be resolved within the period specified in the statute of limitations.
The purpose and effect of statutes of limitations are to protect defendants. Three reasons are conventionally given for their enactment: a plaintiff with a valid cause of action should pursue it with reasonable diligence; by the time a stale claim is litigated, a defendant may have lost evidence needed to disprove it; and litigation of a long-dormant claim may result in more cruelty than justice.
In Classical Athens, a five-year statute of limitations was established for all cases except homicide and the prosecution of non-constitutional laws (which had no limitation). Demosthenes wrote that these statutes of limitations were adopted to control "sycophants" (professional accusers).
The limitation period generally begins when the plaintiff's cause of action accrues, meaning the date upon which the plaintiff is first able to maintain the cause of action in court, or when the plaintiff first becomes aware of a previous injury (for example, occupational lung diseases such as asbestosis).
A statute of repose limits the time within which an action may be brought based upon when a particular event occurred (such as the completion of construction of a building or the date of purchase of manufactured goods), and does not permit extensions. A statute of limitations is similar to a statute of repose, but may be extended for a variety of reasons (such as the minority of the victim).
For example, most U.S. jurisdictions have passed statutes of repose for construction defects. If a person receives an electric shock due to a wiring defect that resulted from the builder's negligence during construction of a building, the builder is potentially liable for damages if the suit is brought within the time period defined by the statute, normally starting with the date that construction is substantially completed. After the statutory time period has passed, without regard to the nature or degree of the builder's negligence or misconduct, the statute of repose presents an absolute defense to the claim.
Statutes of repose are sometimes controversial; manufacturers contend that they are necessary to avoid unfair litigation and encourage consumers to maintain their property. Alternatively, consumer advocates argue that they reduce incentives to manufacture durable products and disproportionately affect the poor, because manufacturers will have less incentive to ensure low-cost or "bargain" products are manufactured to exacting safety standards.
Many jurisdictions toll or suspend the limitation period under certain circumstances, such as when the aggrieved party (plaintiff) was a minor or had filed a bankruptcy proceeding. In those instances, the running of limitations is tolled, or paused, until the condition ends. Equitable tolling may also apply where, for example, the defendant has intimidated the plaintiff into not filing or has led the plaintiff to believe the period would be suspended.
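How tolling changes a deadline can be shown with a small, jurisdiction-neutral sketch. The following Python fragment is illustrative only: the three-year period, the tolled interval and the function name are all hypothetical, and real statutes define their own counting rules.

    from datetime import date, timedelta

    def limitations_deadline(accrual, years, tolled_days=0):
        # Naive year arithmetic for illustration; note that this simple
        # replace() would fail for a Feb 29 accrual date.
        base = accrual.replace(year=accrual.year + years)
        # Each day the period is tolled pushes the deadline back by one day.
        return base + timedelta(days=tolled_days)

    # A claim accruing 1 March 2020 under a hypothetical three-year statute,
    # with 180 days tolled for the plaintiff's minority:
    print(limitations_deadline(date(2020, 3, 1), 3, tolled_days=180))
    # -> 2023-08-28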
The statute of limitations may begin when the harmful event, such as fraud or injury, occurs or when it is discovered. The US Supreme Court has described the "standard rule" of when the time begins as "when the plaintiff has a complete and present cause of action." The rule has existed since the 1830s. A "discovery rule" applies in other cases (including medical malpractice), or a similar effect may be applied by tolling.
As discussed in "Wolk v. Olson", the discovery rule does not apply to mass media such as newspapers and the Internet; the statute of limitations begins to run at the date of publication. In 2013, the Supreme Court of the United States unanimously ruled in "Gabelli v. SEC" that the discovery rule does not apply to the U.S. Securities and Exchange Commission's investment-advisor-fraud lawsuits, since one of the purposes of the agency is to root out fraud.
In private civil matters, the limitation period may generally be shortened or lengthened by agreement of the parties. Under the Uniform Commercial Code, the parties to a contract for sale of goods may reduce the limitation period to one year but not extend it.
Limitation periods that are known as laches may apply in situations of equity; a judge will not issue an injunction if the requesting party waited too long to ask for it. Such periods are subject to broad judicial discretion.
For US military cases, the Uniform Code of Military Justice (UCMJ) states that all charges except those facing court-martial on a capital charge have a five-year statute of limitations. If the charges are dropped, then in all UCMJ proceedings except those headed for general court-martial they may be reinstated within six months, after which the statute of limitations has run out.
In civil law countries, almost all lawsuits must be brought within a legally-determined period at the end of which the right of action is extinguished. This is known as liberative or extinctive prescription. Under Italian and Romanian law, criminal trials must be ended within a time limit.
In criminal cases, the public prosecutor must lay charges within a time limit which varies by jurisdiction and varies based on the nature of the charge; in many jurisdictions, there is no statute of limitations for murder. Over the last decade of the 20th century, many United States jurisdictions significantly lengthened the statute of limitations for sex offenses, particularly against children, as a response to research and popular belief that a variety of causes can delay the recognition and reporting of crimes of this nature.
Common triggers for suspending the prescription include a defendant's fugitive status or the commission of a new crime. In some jurisdictions, a criminal may be convicted "in absentia". Prescription should not be confused with the need to prosecute within "a reasonable delay" as obligated by the European Court of Human Rights.
Under international law, genocide, crimes against humanity and war crimes are usually not subject to the statute of limitations as codified in a number of multilateral treaties. States ratifying the Convention on the Non-Applicability of Statutory Limitations to War Crimes and Crimes Against Humanity agree to disallow limitations claims for these crimes. In Article 29 of the Rome Statute of the International Criminal Court, genocide, crimes against humanity and war crimes "shall not be subject to any statute of limitations".
In the Australian state of Victoria, the Limitations Act of 1958 allows 12 years for victims of child abuse to make a claim, with age 37 the latest at which a claim can be made. The police submitted evidence to a commission, the Victorian Inquiry into Church and Institutional Child Abuse (in existence since 2012), indicating that it takes an average of 24 years for a survivor of child sexual abuse to go to the police. According to Attorney-General Robert Clark, the government will remove statutes of limitations on criminal child abuse; survivors of violent crime should be given additional time, as adults, to deal with the legal system. Offenders against minors and the disabled have used the statute of limitations to avoid detection and prosecution, moving from state to state and country to country; an example presented to the Victorian Inquiry was the Christian Brothers.
An argument for abolishing statutes of limitations for civil claims by minors and people under guardianship is ensuring that abuse of vulnerable people would be acknowledged by lawyers, police, organisations and governments, with enforceable penalties for organisations which have turned a blind eye in the past. Support groups such as SNAP Australia, Care Leavers Australia Network and Broken Rites have submitted evidence to the Victoria inquiry, and the Law Institute of Victoria has advocated changes to the statute of limitations.
For crimes other than summary conviction offences, there is no statute of limitations in Canadian criminal law and warrants have remained outstanding for more than 20 years.
For indictable (serious) offences such as major theft, murder, kidnapping or sexual assault, a defendant can be charged at any future date; in sexual abuse cases in particular, men and women have been charged and convicted up to five decades after the abuse was committed.
Civil law limitations vary by province, with Ontario introducing the Limitations Act, 2002 on January 1, 2004.
In Germany, the statute of limitations on crimes varies by type of crime, with the highest being 30 years for voluntary manslaughter (Totschlag). Murder, genocide, crimes against humanity, war crimes and crime of aggression have no statute of limitations.
Murder was formerly subject to a 20-year statute of limitations, which was extended to 30 years in 1969. The limitation was abolished altogether in 1979, to prevent Nazi criminals from escaping criminal liability.
For most other criminal offences, the statute of limitations is set by Section 78(3) of the Criminal Code (Strafgesetzbuch), which grades the limitation period according to the maximum penalty prescribed for the offence.
In the civil code (Bürgerliches Gesetzbuch), the regular statute of limitations is three years (plus the time until the end of the calendar year); however, different terms between two and thirty years may apply in specific situations. For example, the term is only two years for claims for alleged defects of purchased goods, but 30 years for claims resulting from a court judgement (such as awarded damages).
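The regular civil rule described above is mechanical enough to compute. The following Python sketch assumes only what the paragraph states (three years, counted from the end of the calendar year in which the claim arose) and omits the knowledge requirements and special terms that apply in practice.

    from datetime import date

    def german_regular_limitation_end(claim_arose):
        # The period starts at the end of the calendar year of accrual,
        # so under this simplified rule it always expires on 31 December
        # three years later.
        return date(claim_arose.year + 3, 12, 31)

    # A claim arising on any day in 2021 becomes time-barred after 2024-12-31.
    assert german_regular_limitation_end(date(2021, 7, 15)) == date(2024, 12, 31)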
The statute of limitations in India is defined by the Limitations Act, 1963.
The statute of limitations for criminal offences is governed by Sec. 468 of the Criminal Procedure Code.
In Norway, the statute of limitations on murder was abolished by a change in law on 1 July 2014, with the effect that murders committed after 1 July 1989 have no statute of limitations. This led the national police force to establish a new investigation group for old cases, called the "Cold Case" group. The law was also changed so that, in cases involving domestic violence, forced marriage, human trafficking and genital mutilation, the period counts from the day the victim turns 18 years old. Cases where the statute of limitations has already expired cannot be revived, as the constitution prevents retroactive extension.
In July 2015, the National Assembly of South Korea abolished a 25-year limit on first-degree murder; the limit had previously been extended from 15 to 25 years in December 2007.
Unlike other European countries, the United Kingdom has no statute of limitations for any criminal offence, except for summary offences (offences tried in the magistrates’ court). In these cases, criminal proceedings must be brought within six months.
In the United States, statutes of limitations apply to both civil lawsuits and criminal prosecutions. Statutes of limitations vary significantly between U.S. jurisdictions.
In "Stogner v. California" (2003), the Supreme Court of the United States held by a 5-4 majority that California's retroactive extension of the statute of limitations for sexual offenses committed against minors was an unconstitutional ex post facto law.
A civil statute of limitations applies to a non-criminal legal action, including a tort or contract case. If the statute of limitations expires before a lawsuit is filed, the defendant may raise the statute of limitations as an affirmative defense to seek dismissal of the charge. The exact time period depends on both the state and the type of claim (contract claim, personal injury, fraud etc.). Most fall in the range of one to ten years, with two to three years being most common.
A criminal statute of limitations defines a time period during which charges must be initiated for a criminal offense. If a charge is filed after the statute of limitations expires, the defendant may obtain dismissal of the charge.
The statute of limitations in a criminal case only runs until a criminal charge is filed and a warrant issued, even if the defendant is a fugitive.
When the identity of a defendant is not known, some jurisdictions provide mechanisms to initiate charges and thus stop the statute of limitations from running. For example, some states allow an indictment of a John Doe defendant based upon a DNA profile derived from evidence obtained through a criminal investigation. Although rare, a grand jury can issue an indictment in absentia for high-profile crimes to get around an upcoming statute of limitations deadline. One example is the skyjacking of Northwest Orient Airlines Flight 305 by D.B. Cooper in 1971. The identity of D. B. Cooper remains unknown to this day, and he was indicted under the name "John Doe, aka Dan Cooper."
Crimes considered heinous by society have no statute of limitations. Although there is usually no statute of limitations for murder (particularly first-degree murder), judges have been known to dismiss murder charges in cold cases if they feel the delay violates the defendant's right to a speedy trial. For example, waiting many years for an alibi witness to die before commencing a murder trial would be unconstitutional. As noted above, the Supreme Court's 2003 ruling in "Stogner v. California" established that retroactively extending a limitations period for sexual offenses committed against minors is an unconstitutional "ex post facto" law.
Under the U.S. Uniform Code of Military Justice (UCMJ), desertion has no statute of limitations.
Maritime Injury Law
Under 46 U.S. Code § 30106, "Except as otherwise provided by law, a civil action for damages for personal injury or death arising out of a maritime tort must be brought within 3 years after the cause of action arose." There are some exceptions to this, primarily with regard to Jones Act cases filed against the government, for which the limitations period can be shorter than two years.
U.S. jurisdictions recognize exceptions to statutes of limitation that may allow for the prosecution of a crime or civil lawsuit even after the statute of limitations would otherwise have expired. Some states stop the clock for a suspect who is not residing within the state or is purposely hiding. Kentucky, North Carolina, and South Carolina have no statutes of limitation for felonies, while Wyoming includes misdemeanors as well. However, the right to speedy trial may derail any prosecution after many years have passed.
When an officer of the court is found to have fraudulently presented facts to impair the court's impartial performance of its legal task, the act (known as "fraud upon the court") is not subject to a statute of limitation. As the U.S. Court of Appeals for the Third Circuit has put it, this mainly covers a "fraud where the court or a member is corrupted or influenced or influence is attempted or where the judge has not performed his judicial function — thus where the impartial functions of the court have been directly corrupted."
The term "officer of the court" generally includes any judge, law clerk, court clerk, lawyer, investigator, probation officer, referee, legal guardian, parenting-time expeditor, mediator, evaluator, administrator, special appointee, and anyone else whose role forms part of the judicial mechanism.
In tort law, if any person or entity commits a series of illegal acts against another person or entity (or, in criminal law, if a defendant commits a continuing crime), the limitation period may begin to run from the last act in the series. The limitation period for the entire chain of events can be tolled if the violations were continuing. The Court of Appeals for the Eighth Circuit has explained that the continuing-violations doctrine "tolls the statute of limitations in situations where a continuing pattern forms due to discriminatory acts occurring over a period of time, as long as at least one incident of discrimination occurred within the limitations period." Whether the continuing-violations doctrine applies to a particular violation is subject to judicial discretion; it was said to apply to copyright infringement in the jurisdiction of the Seventh Circuit, but not in that of the Second Circuit.